ECS Fargate:
AWS ECS (Elastic Container Service) with Fargate is a serverless compute engine for deploying and managing containerized applications. It simplifies running containers at scale without the need to manage the underlying infrastructure: with ECS Fargate, you can focus on your application logic and let AWS handle the provisioning, scaling, and maintenance of the compute resources.
To use ECS Fargate, you create task definitions that specify the containers, their configurations, and resource requirements. These task definitions are then used to launch tasks, which represent running instances of your containers. You can also define services to ensure high availability and manage the lifecycle of tasks.
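As a minimal sketch of the concepts above, a Fargate task definition in Terraform might look like the following. The family name, image tag, and CPU/memory sizes are illustrative assumptions, not values taken from this project:

```hcl
# Illustrative only: family, image, and sizes are assumptions.
resource "aws_ecs_task_definition" "ui" {
  family                   = "psknowhow-ui"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc" # required for Fargate tasks
  cpu                      = 256      # 0.25 vCPU
  memory                   = 512      # MiB

  container_definitions = jsonencode([
    {
      name      = "ui"
      image     = "psknowhow/ui:latest" # hypothetical image tag
      essential = true
      portMappings = [
        { containerPort = 80, protocol = "tcp" }
      ]
    }
  ])
}
```

A service (shown later) would then reference this task definition to keep the desired number of tasks running.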
All AWS services required to run PSknowhow in ECS:
1- Infrastructure:
VPC (Virtual Private Cloud): A logically isolated section of the AWS cloud where you can launch AWS resources. It allows you to define your own network configuration, including IP address ranges, subnets, and route tables.
Subnets: These are subdivisions of a VPC, used to segment and isolate resources. Two subnets are often created in different Availability Zones for high availability.
Internet Gateway: A VPC component that allows communication between instances in the VPC and the Internet. It serves as a gateway for traffic entering or leaving the VPC.
Route Table: A set of rules that determine where network traffic is directed within the VPC. It specifies how traffic is routed between subnets, the Internet Gateway, and other destinations.
Route Table Association: Associates a subnet with a route table, enabling the subnet to use the routes defined in that table.
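Tied together, the networking pieces above might be declared along these lines; the CIDR ranges and Availability Zone are placeholder assumptions:

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # placeholder address range
}

# One of two subnets created in different Availability Zones
resource "aws_subnet" "a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a" # assumption
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0" # send Internet-bound traffic to the IGW
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.a.id
  route_table_id = aws_route_table.public.id
}
```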
2- Platform:
ECS Cluster: A logical grouping of container instances that you can manage as a single unit. It allows you to organize and manage containers effectively.
ALB (Application Load Balancer): A load balancer that distributes incoming application traffic across multiple targets (such as EC2 instances, containers, IP addresses) in multiple Availability Zones. It operates at the application layer (Layer 7) of the OSI model.
NLB (Network Load Balancer): A load balancer that routes traffic based on IP protocol data. It is ideal for handling TCP/UDP traffic and performs at the transport layer (Layer 4) of the OSI model.
ALB Listener: A listener is a process that checks for connection requests and forwards them to the appropriate target groups based on rules you define.
ALB Listener Rules: Rules that define how traffic should be routed based on conditions such as URL paths or hostnames. They help control the flow of incoming requests.
Target Group: A group of resources, such as EC2 instances or containers, that serve traffic together. It is associated with a listener and routes traffic to the registered targets based on the listener rules.
Security Group: A virtual firewall that controls inbound and outbound traffic for your resources. It acts as a barrier that specifies allowed communication based on defined rules.
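The load-balancing pieces fit together roughly as sketched below; resource names, subnets, and the certificate variable are hypothetical and assume the VPC resources from the infrastructure layer:

```hcl
resource "aws_security_group" "alb" {
  vpc_id = aws_vpc.main.id # assumes the VPC from 1-Infrastructure
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # allow HTTPS from anywhere
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_lb" "app" {
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = [aws_subnet.a.id, aws_subnet.b.id] # hypothetical subnets
}

resource "aws_lb_target_group" "ui" {
  port        = 80
  protocol    = "HTTP"
  target_type = "ip" # Fargate tasks register by IP address
  vpc_id      = aws_vpc.main.id
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.app.arn
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = var.ssl_certificate_arn # hypothetical variable
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.ui.arn
  }
}
```

Additional listener rules (path- or host-based) can be attached to route traffic to other target groups, such as one for Customapi.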
3- Application:
ECS Task Definition: A Task Definition is a blueprint for your containers. It defines various parameters like which Docker images to use, CPU and memory requirements, networking settings, and container relationships. You would create separate task definitions for each component of your application (Customapi, UI, Jira, devops-processor, and MongoDB).
ECS Service: An ECS Service is responsible for maintaining a specified number of running instances of a task definition. For each component of your application, you would create an ECS service to ensure that the desired number of containers/tasks are always running.
CloudWatch: Amazon CloudWatch is a monitoring and observability service that collects and tracks metrics, logs, and events from various AWS resources. You would configure CloudWatch to monitor the performance and health of your ECS tasks, services, and other resources.
NFS (Network File System): NFS is a distributed file system protocol that allows you to share files and directories between servers over a network; on AWS this is typically provided by Amazon EFS. You might use it as a persistent storage solution for your MongoDB data, enabling data to be retained even if containers are restarted or scaled.
IAM Role & Policy: An IAM Role is an AWS identity that you can assign to AWS resources; it grants permissions to access and interact with other AWS resources.
An IAM Policy defines the permissions that determine which actions are allowed or denied on specific resources. You would create IAM roles and policies to grant the necessary permissions to your ECS tasks and services, enabling them to access other AWS resources securely.
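A common pattern is a task execution role that lets ECS pull container images and write logs to CloudWatch. A minimal sketch, with an assumed role name (the attached policy ARN is AWS's standard managed policy):

```hcl
resource "aws_iam_role" "task_execution" {
  name = "psknowhow-task-execution" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

# AWS-managed policy covering image pulls and CloudWatch Logs writes
resource "aws_iam_role_policy_attachment" "task_execution" {
  role       = aws_iam_role.task_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}
```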
By setting up these components in the application layer, you establish a comprehensive environment for your containerized application. The ECS Task Definitions and Services define how your application's containers are configured and deployed. CloudWatch monitors the performance, and NFS provides persistent storage for your database. Finally, IAM Roles and Policies ensure that your application components can interact with other AWS services securely and efficiently.
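Bringing the application layer together, an ECS service that keeps two copies of a task running behind the ALB might be sketched as follows; every referenced name (cluster, task definition, subnets, security group, target group) is an assumption standing in for resources defined elsewhere:

```hcl
# Sketch only: all referenced resources are hypothetical.
resource "aws_ecs_service" "ui" {
  name            = "psknowhow-ui"
  cluster         = aws_ecs_cluster.main.id        # hypothetical cluster
  task_definition = aws_ecs_task_definition.ui.arn # hypothetical task definition
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = [aws_subnet.a.id, aws_subnet.b.id]
    security_groups = [aws_security_group.tasks.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.ui.arn
    container_name   = "ui" # must match a container in the task definition
    container_port   = 80
  }
}
```

If a task stops or fails its health check, the service replaces it to maintain the desired count.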
How do we run terraform script to install knowhow on ECS from scratch:
Terraform Script -
Step 1 - Run below command for 1-Infrastructure
cd ecs_fargate/1-Infrastructure
terraform init
terraform apply -auto-approve
Step 2 - Run below command for 2-Platform
cd ../2-Platform
## Replace your SSL_certificate_arn at line 122 in the 2-Platform/variable.tf file
## Replace with your actual IP address at line no. 118
terraform init
terraform apply -auto-approve
Refer to the README.MD in the folder to learn more about the steps to upload the SSL certificate.
Step 3 - Run below command for 3-Application
cd ../3-Application
## In terraform.tfvars you can provide the version of knowhow that you want to install (example: 7.2.0)
terraform init
terraform apply -auto-approve
How do we run terraform script to install knowhow on ECS with existing services:
Commenting Existing Resource Blocks:
When you're working with existing resources, you generally don't want Terraform to recreate them, as that could lead to unintended changes. Instead, you can import the existing resources into your Terraform state.
For example, if you have an existing AWS ECS Cluster, you would typically comment out the resource block for the ECS Cluster in your Terraform configuration. This prevents Terraform from trying to manage the resource.
Here's what an AWS ECS resource block looks like in Terraform; prefixing each line with "#" comments it out, as shown below:
#resource "aws_ecs_cluster" "PSKnowHOW-Cluster" {
#  name = var.ecs_cluster_name
#}
Import Existing Resources:
To start managing existing resources with Terraform, you need to import them into the Terraform state. This is done using the terraform import command, which requires the resource type and name from your configuration, along with the actual resource identifier from the cloud provider.
For example, to import an existing VPC into Terraform state:
terraform import aws_vpc.example_vpc example-vpc
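For the import to succeed, the configuration must already contain a matching, uncommented resource block whose address is aws_vpc.example_vpc. A minimal sketch (the CIDR shown is a placeholder):

```hcl
# After importing, run `terraform plan` and adjust arguments until
# the configuration matches the real resource and the plan is clean.
resource "aws_vpc" "example_vpc" {
  cidr_block = "10.0.0.0/16" # must match the imported VPC's actual CIDR
}
```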
Using Output to Share Resource Information:
Once you've imported the existing resource into Terraform state, you can utilize the information about that resource by using outputs. Outputs allow you to share information from your Terraform configuration with other parts of your infrastructure or external services.
In your output.tf file, you can define an output for the imported VPC ID:
output "imported_VPC_id" {
  value = aws_vpc.example_vpc.id
}
Now, any other Terraform configurations or external systems can reference this output value to interact with the existing VPC.
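One way another Terraform configuration could consume that output is through the terraform_remote_state data source; the backend type, state path, and subnet CIDR below are placeholder assumptions:

```hcl
data "terraform_remote_state" "network" {
  backend = "local"
  config = {
    path = "../1-Infrastructure/terraform.tfstate" # placeholder path
  }
}

# Example consumer: a new subnet placed in the imported VPC
resource "aws_subnet" "extra" {
  vpc_id     = data.terraform_remote_state.network.outputs.imported_VPC_id
  cidr_block = "10.0.5.0/24" # placeholder range
}
```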
Remember, while importing existing resources into Terraform can be convenient, it's important to plan carefully to avoid unintended changes or conflicts between your existing resources and your Terraform-managed infrastructure.
In summary, the process involves:
Commenting out the existing resource block in your Terraform configuration.
Importing the existing resource into Terraform state using the terraform import command.
Defining an output in your output.tf file to share the resource's information with other parts of your infrastructure.