ECS Fargate:
AWS ECS (Elastic Container Service) Fargate is a serverless compute engine offered by Amazon Web Services (AWS) for deploying and managing containerized applications. It simplifies the process of running containers at scale without the need to manage the underlying infrastructure. With ECS Fargate, you can focus on your application logic and let AWS handle the provisioning, scaling, and maintenance of the compute resources.
...
3- Application:
ECS Task Definition: A Task Definition is a blueprint for your containers. It defines various parameters like which Docker images to use, CPU and memory requirements, networking settings, and container relationships.
...
You would create a separate task definition for each component of your application (customapi, UI, Jira-processor, devops-processor, and MongoDB).
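As a minimal sketch of such a task definition (the resource name, image, and role reference below are assumptions for illustration, not taken from the actual scripts), the customapi component might be defined like this:

Code Block |
---|
resource "aws_ecs_task_definition" "customapi" {
  family                   = "customapi"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "2048"   # 2 vCPU
  memory                   = "8192"   # 8 GB RAM
  execution_role_arn       = aws_iam_role.ecs_execution.arn   # assumed role name
  container_definitions = jsonencode([{
    name         = "customapi"
    image        = "psknowhow/customapi:latest"               # illustrative image
    essential    = true
    portMappings = [{ containerPort = 8080, protocol = "tcp" }]
  }])
} |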
ECS Service:
...
An ECS Service is responsible for maintaining a specified number of running instances of a task definition.
...
For each component of your application, you would create an ECS service to ensure that the desired number of containers/tasks are always running.
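As an illustrative sketch (the cluster, subnet, and security-group references are assumptions), a service that keeps one customapi task running could look like:

Code Block |
---|
resource "aws_ecs_service" "customapi" {
  name            = "customapi"
  cluster         = aws_ecs_cluster.PSKnowHOW-Cluster.id   # assumed cluster resource
  task_definition = aws_ecs_task_definition.customapi.arn
  desired_count   = 1
  launch_type     = "FARGATE"
  network_configuration {
    subnets         = var.private_subnet_ids               # assumed variable
    security_groups = [var.ecs_security_group_id]          # assumed variable
  }
} |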
CloudWatch:
...
Amazon CloudWatch is a monitoring and observability service that collects and tracks metrics, logs, and events from various AWS resources.
...
You would configure CloudWatch to monitor the performance and health of your ECS tasks, services, and other resources.
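A common way to wire ECS tasks into CloudWatch is the awslogs log driver, where each container definition points at a CloudWatch log group. The group name, region, and stream prefix below are illustrative, not taken from the scripts:

Code Block |
---|
resource "aws_cloudwatch_log_group" "psknowhow" {
  name              = "/ecs/psknowhow"   # illustrative name
  retention_in_days = 14
}

# Inside a container definition (within jsonencode):
# logConfiguration = {
#   logDriver = "awslogs"
#   options = {
#     "awslogs-group"         = "/ecs/psknowhow"
#     "awslogs-region"        = "us-east-1"
#     "awslogs-stream-prefix" = "customapi"
#   }
# } |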
NFS (Network File System):
...
NFS is a distributed file system protocol that allows you to share files and directories between servers over a network.
...
You might use NFS to provide a persistent storage solution for your MongoDB data, enabling data to be retained even if containers are restarted or scaled.
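On Fargate, NFS-backed persistence is typically provided by Amazon EFS, which exposes an NFS endpoint that tasks can mount. A sketch, assuming an EFS file system created for the MongoDB data (all names are illustrative):

Code Block |
---|
resource "aws_efs_file_system" "mongodb_data" {
  creation_token = "mongodb-data"
}

# In the MongoDB task definition, declare the volume:
# volume {
#   name = "mongodb-data"
#   efs_volume_configuration {
#     file_system_id = aws_efs_file_system.mongodb_data.id
#   }
# }
# and mount it in the container definition:
# mountPoints = [{ sourceVolume = "mongodb-data", containerPath = "/data/db" }] |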
IAM Role & Policy:
...
An IAM Role is an AWS identity that you can assign to AWS resources. It grants permissions to access and interact with other AWS resources.
An IAM Policy defines permissions that determine what actions are allowed or denied for specific resources.
...
You would create IAM roles and policies to grant necessary permissions to your ECS tasks and services, enabling them to access other AWS resources securely.
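As a minimal sketch (the role name is an assumption), a task execution role that lets ECS pull images and write logs, with the AWS-managed execution policy attached:

Code Block |
---|
resource "aws_iam_role" "ecs_execution" {
  name = "psknowhow-ecs-execution"   # illustrative name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_execution" {
  role       = aws_iam_role.ecs_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
} |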
By setting up these components in the application layer, you establish a comprehensive environment for your containerized application. The ECS Task Definitions and Services define how your application's containers are configured and deployed. CloudWatch monitors the performance, and NFS provides persistent storage for your database. Finally, IAM Roles and Policies ensure that your application components can interact with other AWS services securely and efficiently.
How do we run the Terraform scripts to install KnowHOW on ECS from scratch:
Terraform Script Repo URL: https://pscode.lioncloud.net/psinnersource/monitor-measure-metrics/speedy-product/knowhow-terraform-scripts/-/tree/main/ecs_fargate
Step 1: Clone the Terraform repo and run the commands below for 1-Infrastructure
Code Block |
---|
git clone https://pscode.lioncloud.net/psinnersource/monitor-measure-metrics/speedy-product/knowhow-terraform-scripts.git
cd ecs_fargate/1-Infrastructure
terraform init
terraform apply -auto-approve |
...
How do we run the Terraform scripts to install KnowHOW on ECS with existing services:
Commenting Existing Resource Blocks:
When you're working with existing resources, you generally wouldn't want to recreate them with Terraform, as that could lead to unintended changes or data loss. Instead, you can import the existing resources into your Terraform state.
For example, if you have an existing ECS cluster, you would typically comment out the resource block for that cluster in your Terraform configuration. This prevents Terraform from trying to create or manage the resource.
Here's an example of what an ECS cluster resource block might look like in Terraform; adding "#" at the start of each line comments it out, as shown below:
Code Block |
---|
#resource "aws_ecs_cluster" "PSKnowHOW-Cluster" {
#  name = var.ecs_cluster_name
#} |
Import Existing Resources:
To start managing existing resources with Terraform, you need to import them into the Terraform state. This is done using the terraform import command, which takes the resource type and name from your configuration, along with the actual resource identifier from the cloud provider.
For example, to import an existing VPC into the Terraform state:
Code Block |
---|
terraform import aws_vpc.example_vpc example-vpc |
Using Output to Share Resource Information:
Once you've imported the existing resource into Terraform state, you can utilize the information about that resource by using outputs. Outputs allow you to share information from your Terraform configuration with other parts of your infrastructure or external services.
In your output.tf file, you can define an output for the imported VPC's ID:
Code Block |
---|
output "imported_VPC_id" {
  value = aws_vpc.example_vpc.id
} |
Now, any other Terraform configurations or external systems can reference this output value to interact with the existing VPC.
Remember, while importing existing resources into Terraform can be convenient, it's important to plan carefully to avoid unintended changes or conflicts between your existing resources and your Terraform-managed infrastructure.
...
Commenting out the existing resource block in your Terraform configuration.
Importing the existing resource into Terraform state using the terraform import command.
Defining an output in your output.tf file to share the resource's information with other parts of your infrastructure.
The PSknowhow application is composed of seven containers that may be deployed on an ECS cluster. This document will guide you through the installation process step by step. The containers are:
customapi, with openjdk:8-jre-slim-stretch as the base image, which handles API requests and runs on port 8080.
ui, with the nginx:1.22.0-alpine-perl base image, which runs on ports 80 & 443.
mongodb, with mongo:4.4.1-bionic as the base image, which runs on port 27017.
jira-processor, with the openjdk:8-jre-slim-stretch base image, which is the Jira collector.
devops-processor, with the openjdk:8-jre-slim-stretch base image, which collects Jenkins, GitHub, GitLab, Bamboo, Bitbucket, Zephyr, Sonar, and TeamCity data.
azure-board-processor, with the openjdk:8-jre-slim-stretch base image, which collects Azure Boards data.
azure-pipeline-repo, which collects Azure Pipelines and Azure Repos data.
The ui container should run behind a LoadBalancer service on port 443. The remaining six containers should use the default service type, i.e. ClusterIP.
Resource requirement:
Customapi: 8 GB RAM & 2 CPU
MongoDB: 2 GB RAM & 1 CPU
UI: 1 GB RAM & 1 CPU
Jira-processor: 6 GB RAM & 1 CPU
devops-processor: 8 GB RAM & 2 CPU
Azure-board-processor: 4 GB RAM & 1 CPU
Azure-pipeline-repo-processor: 4 GB RAM & 1 CPU
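In a Fargate task definition these requirements are expressed in CPU units (1024 units = 1 vCPU) and MiB of memory. A sketch of the translation for two of the containers (note that Fargate only allows certain CPU/memory combinations, e.g. 1 vCPU supports 2-8 GB):

Code Block |
---|
# customapi: 2 CPU & 8 GB RAM
# cpu    = "2048"
# memory = "8192"

# mongodb: 1 CPU & 2 GB RAM
# cpu    = "1024"
# memory = "2048" |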
Environment Variables for MongoDB:
MONGODB_ADMIN_USER=<DB ROOT USER>
MONGODB_ADMIN_PASS=<DB ROOT PASSWORD>
MONGODB_APPLICATION_DATABASE=kpidashboard
MONGODB_APPLICATION_USER=<DB APPLICATION USER>
MONGODB_APPLICATION_PASS=<DB APPLICATION PASSWORD>
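These variables are passed to the MongoDB container through the environment list of its container definition. A fragment of an assumed task definition, with illustrative placeholder values:

Code Block |
---|
container_definitions = jsonencode([{
  name         = "mongodb"
  image        = "mongo:4.4.1-bionic"
  portMappings = [{ containerPort = 27017, protocol = "tcp" }]
  environment = [
    { name = "MONGODB_ADMIN_USER",           value = "root" },         # placeholder
    { name = "MONGODB_ADMIN_PASS",           value = "changeme" },     # placeholder
    { name = "MONGODB_APPLICATION_DATABASE", value = "kpidashboard" },
    { name = "MONGODB_APPLICATION_USER",     value = "appuser" },      # placeholder
    { name = "MONGODB_APPLICATION_PASS",     value = "changeme" }      # placeholder
  ]
}]) |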