AWS ECS Deployment Tool With Terraform


Introduction

ECS deployment using Docker Compose and Terraform.

You just need to manage a Docker Compose YAML file and run:

ecsdep cluster create
ecsdep service up

That's all.

Prerequisites

GitLab Repository Read Credential

Create a GitLab access token under Project Settings > Access Tokens and grant it the read/write registry scopes.

With this token, create a secret in AWS Secrets Manager:

https://console.aws.amazon.com/secretsmanager/

{
  "username" : "<gitlab username>",
  "password" : "<access token>"
}

Then save the secret ARN.
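If you prefer the CLI, the same secret can be created with awscli; the secret name gitlab/registry/mysecret is illustrative. The command prints the ARN you will reference later as x-ecs-pull-credentials:

aws secretsmanager create-secret \
    --name gitlab/registry/mysecret \
    --secret-string '{"username": "<gitlab username>", "password": "<access token>"}'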

Issuing a Domain Certificate

An AWS Certificate Manager certificate for your domain is needed so it can be linked to the load balancer.
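One way to request it with awscli; the domain is illustrative, and the DNS validation records must be added to your zone afterwards:

aws acm request-certificate \
    --domain-name mydomain.com \
    --validation-method DNS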

S3 Bucket For Terraform State Backend

Create a bucket, for example terraform.my-company.com.
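For example, with awscli; the bucket name and region are illustrative, and versioning is optional but a sensible safeguard for state files:

aws s3 mb s3://terraform.my-company.com --region ap-northeast-2
aws s3api put-bucket-versioning \
    --bucket terraform.my-company.com \
    --versioning-configuration Status=Enabled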

SSH Key Pairs

An SSH key pair is needed for the cluster instances. To generate one:

ssh-keygen -t rsa -b 2048 -C "email@example.com" -f ./mykeypair

This creates two files: mykeypair (the private key) and mykeypair.pub (the public key). Keep the private key safe; you will need it to access your EC2 instances.

AWS Access Key For ECS Deploy

Create a policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "acm:*",
                "application-autoscaling:*",
                "autoscaling:*",
                "cloudformation:*",
                "cognito-identity:*",
                "ec2:*",
                "ecs:*",
                "elasticloadbalancing:*",
                "iam:*",
                "kms:DescribeKey",
                "kms:ListAliases",
                "kms:ListKeys",
                "logs:*",
                "route53:*",
                "s3:*",
                "secretsmanager:*",
                "servicediscovery:*",
                "tag:GetResources"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        },
        {
            "Action": "iam:PassRole",
            "Effect": "Allow",
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringLike": {
                    "iam:PassedToService": "ecs-tasks.amazonaws.com"
                }
            }
        },
        {
            "Action": "iam:PassRole",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:iam::*:role/ecsInstanceRole*"
            ],
            "Condition": {
                "StringLike": {
                    "iam:PassedToService": [
                        "ec2.amazonaws.com",
                        "ec2.amazonaws.com.cn"
                    ]
                }
            }
        },
        {
            "Action": "iam:PassRole",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:iam::*:role/ecsAutoscaleRole*"
            ],
            "Condition": {
                "StringLike": {
                    "iam:PassedToService": [
                        "application-autoscaling.amazonaws.com",
                        "application-autoscaling.amazonaws.com.cn"
                    ]
                }
            }
        }
    ]
}

Create an IAM user, attach this policy, and save the user's access key.
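A sketch of these steps with awscli, assuming the policy document above is saved as policy.json; the policy and user names are illustrative:

aws iam create-policy --policy-name ecsdep-deploy --policy-document file://policy.json
aws iam create-user --user-name ecsdep-deployer
aws iam attach-user-policy --user-name ecsdep-deployer \
    --policy-arn arn:aws:iam::<account id>:policy/ecsdep-deploy
aws iam create-access-key --user-name ecsdep-deployer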

Starting the Deploy Docker Container

Local Deploy

The Docker image contains Terraform, awscli, and ecsdep.

docker run -d --privileged \
    --name ecsdep \
    -v /path/to/myproject:/app \
    hansroh/dep:dind
docker exec -it ecsdep bash

Within the container:

pip3 install -U ecsdep

GitLab CI/CD Deploy

Add these lines to .gitlab-ci.yml:

image: hansroh/dep:latest
services:
  - name: docker:dind
    alias: dind-service
before_script:
  - pip3 install -U ecsdep
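After registry login and AWS key configuration (see the next section), a minimal deploy job on top of this could look like the following; the job name, stage choice, and file path are illustrative:

deploy:
  stage: deploy
  script:
    - export SERVICE_STAGE=qa
    - ecsdep -f compose.ecs.yml service up

CI_COMMIT_SHA is provided automatically by the GitLab runner and is used as the image tag.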

Setup Deploy Environment

Docker Login

docker login -u <username> -p <personal access token> registry.gitlab.com

Configuring AWS Access Key

First of all, configure your AWS access key:

aws configure set aws_access_key_id <value>
aws configure set aws_secret_access_key <value>
aws configure set region ap-northeast-2

Creating ECS Cluster

Create a file named docker-compose.yml or docker.ecs.yml.

Terraform Setting

x-terraform:
  provider: aws
  region: ap-northeast-2
  template-version: 1.1
  state-backend:
    region: "ap-northeast-2"
    bucket: "states-data"
    key-prefix: "terraform/ecs-cluster"

Make sure the S3 bucket exists (see the Prerequisites section).

Cluster Settings

x-ecs-cluster:
  name: my-cluster
  public-key-file: "mykeypair.pub"
  instance-type: t3.medium
  ami: amzn2-ami-ecs-hvm-*-x86_64-*
  autoscaling:
    desired: 2
    min: 2
    max: 10
    cpu: 60
    memory: 60
    target-capacity: 0
  loadbalancer:
    cert-name: mydomain.com
  vpc:
    cidr_block: 10.0.0.0/16

This creates resources such as:

  • Load Balancer
  • VPC with subnets: 10.0.10.0/24, 10.0.20.0/24 and 10.0.30.0/24
  • Cluster Instance Auto Scaling Launch Configuration
  • Cluster Auto Scaling Group
  • Cluster

Cluster-Level Auto Scaling

  • autoscaling.cpu means that when the CPU reservation of your containers reaches 60% of the total CPU units of all cluster instances, ECS instances are scaled out. If 0, nothing happens.
  • autoscaling.memory works the same way for memory.
  • autoscaling.target-capacity tries to keep spare instances as a percentage of desired:
    • value of 100: the auto scaling group does not need to scale in or out
    • value under 100: keep at least one instance that is not running a non-daemon task

Instance Type

If you specify x-ecs-gpus, instance-type must be a GPU instance type.

VPC

If you are considering VPC peering, choose cidr_block carefully. If you would like to use the default VPC, just remove the vpc key.

For VPC peering,

x-ecs-cluster:
  vpc:
    peering_vpc_ids:
      - default # means the default VPC
      - vpc-e2ecb79ef6da46

Note that this works only with VPCs you own.

Container AWS Resource Access Policies

x-ecs-cluster:
  task-iam-policies:
    - arn:aws:iam::aws:policy/AmazonS3FullAccess

Creating/Destroying ECS Cluster

Finally, you can create the cluster. ecsdep looks for a default YAML file named docker-compose.yml or compose.ecs.yml.

ecsdep cluster create
ecsdep cluster destroy

Specifying a file explicitly is also possible:

ecsdep -f /path/to/docker-compose.yml cluster create
ecsdep -f /path/to/docker-compose.yml cluster destroy

Creating S3 Bucket and Cognito Pool

x-ecs-cluster:
  s3-cors-hosts:
    - myservice.com

Configuring Containers

This example launches one container named skitai-app.

version: '3.3'

services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    container_name: skitai-app
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 5000:5000

Make sure the service name (skitai-app) is the same as container_name.

Then test the image build and the container:

docker-compose build
docker-compose up -d
docker-compose down

Adding ECS Related Settings

Specify Deploy Containers

Add the services.skitai-app.deploy key; otherwise this container will not be included in the ECS service.

services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    deploy:

Docker Container Registry Pull Credential

services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    x-ecs-pull-credentials: arn:aws:secretsmanager:ap-northeast-2:000000000:secret:gitlab/registry/mysecret-PrENMF

See the Prerequisites section.

Logging (Optional)

To integrate with a CloudWatch log group:

services:
  skitai-app:
    logging:
      x-ecs-driver: awslogs

Container Health Checking (Optional)

    healthcheck:
      test:
        - "CMD-SHELL"
        - "wget -O/dev/null -q http://localhost:5000/ping || exit 1"

Container-Level Resource Requirements

For a minimum memory reservation (the soft memory limit):

services:
  skitai-app:
    deploy:
      resources:
        reservations:
          memory: "256M"

For a hard memory limit:

    deploy:
      resources:
        reservations:
          memory: "256M"
        limits:
          memory: "320M"

Make sure the hard limit is greater than or equal to the reservation value.

For a minimum CPU units reservation:

    deploy:
      resources:
        reservations:
          cpus: "1024"

1024 units mean 1 vCPU. ECS will place the container on an instance that fulfills these reservation requirements. The value of cpus must be a string.

You may be using a GPU:

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

But ecsdep ignores the devices statement above. Just add x-ecs-gpus:

    deploy:
      resources:
        reservations:
          x-ecs-gpus: 1
          devices: ...

Now test the Docker build:

docker-compose build

Configuring ECS Task Definition

x-ecs-service:
  name: ecsdep
  stages:
    default:
      env-service-stage: "qa"
      hosts: ["qa.myservice.com"]
      listener-priority: 100
    production:
      env-service-stage: "production"
      hosts: ["myservice.com"]
      listener-priority: 101
      autoscaling:
        min: 3
        max: 7
  loadbalancing:
    pathes:
      - /*
    protocol: http
    healthcheck:
      path: "/ping"
      matcher: "200,301,302,404"
  deploy:
    compatibilities:
      - ec2
    resources:
      limits:
        memory: 256M
        cpus: "1024"
    autoscaling:
      desired: 1
      min: 1
      max: 4
      cpu: 100
      memory: 80
    strategy:
      minimum_healthy_percent: 50
      maximum_percent: 150

Staging

stages lets you define deploy stages like production, qa, or staging. ecsdep reads the environment variable named SERVICE_STAGE:

export SERVICE_STAGE=qa

The current deploy stage is selected by matching the SERVICE_STAGE value against env-service-stage.

If SERVICE_STAGE is qa, the load balancer routes qa.myservice.com to your container.
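For example, to deploy the production stage defined in the configuration above:

export SERVICE_STAGE=production
ecsdep service up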

Deployment Strategy

  • strategy.maximum_percent: the upper limit, as a percentage of the service's desiredCount, on the number of tasks that can be running during a deployment
  • strategy.minimum_healthy_percent: the lower limit, as a percentage of the service's desiredCount, on the number of tasks that must remain running and healthy during a deployment
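For example, with desired: 2, minimum_healthy_percent: 50, and maximum_percent: 150, a rolling update must keep at least 1 task (50% of 2) running and healthy, and may run at most 3 tasks (150% of 2) at any moment.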

Auto Scaling

Default service-level auto scaling settings are placed in x-ecs-service.deploy.autoscaling, but each stage can override these values via x-ecs-service.stages.[stage name].autoscaling.

Load Balancing (Optional)

The load balancer routes requests matching loadbalancing.pathes to your container.

Using Fargate

If you want a Fargate deploy, set x-ecs-service.deploy.compatibilities to fargate:
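x-ecs-service:
  deploy:
    compatibilities:
      - fargate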

Resource Limiting

Resource limits are required for the 'fargate' launch type, but optional for 'ec2'.

x-ecs-service.deploy.autoscaling.cpu and x-ecs-service.deploy.autoscaling.memory are both percentages, of the reserved CPU units and of the reserved memory in megabytes respectively.

If x-ecs-service.deploy.resources.limits.cpus is defined, service cannot use units over this value.

If x-ecs-service.deploy.resources.limits.memory is defined and your container exceeds this value, the container will be terminated.

Deploying Service

You need the environment variable CI_COMMIT_SHA; its first 8 characters are used as the image tag. It is provided automatically on GitLab runners, but for local testing, latest is just fine.

export CI_COMMIT_SHA=latest
export SERVICE_STAGE=qa

ecsdep -f dep/compose.ecs.yml service up

Whenever you run ecsdep service up, your containers are deployed to ECS via a rolling update.

As a result, the following happens on AWS:

  • a new task definition is registered
  • the service is updated and run

Note: sometimes service-level auto scaling settings are not applied on the initial deploy. I don't know why, but please deploy twice in this case.

Shutdown/Remove Service

ecsdep service down

Deploying Other Services Into Cluster

It is recommended to keep the cluster settings in your main app only.

In the other services' YAML, keep only x-ecs-cluster.name and remove the other x-ecs-cluster settings.

You just care about x-ecs-service and services definition.

services:
  skitai-app-2:
     ...

x-terraform:
  ...

x-ecs-cluster:
  name: my-cluster

Deploying Service With Multiple Containers

services:
  skitai-app:
    deploy:
    ports:
      - 5000
    healthcheck:
      test:
        - "CMD-SHELL"
        - "wget -O/dev/null -q http://localhost:5000 || exit 1"

  skitai-nginx:
    depends_on:
      - skitai-app
    x-ecs-wait-conditions:
      - HEALTHY
    ports:
      - "80:80"

Make sure only a single load-balanced container has a host port mapping like "80:80"; the other containers should expose only a Docker-internal port like "5000".

Deploying a Non-Web Service

Get rid of the ports and load balancing settings (a minimal sketch follows the list):

  • services.your-app.ports
  • x-ecs-service.loadbalancing
  • x-ecs-service.stages.default.hosts
  • x-ecs-service.stages.default.listener-priority
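A minimal sketch of such a service, assuming a hypothetical worker container named my-worker; the image and names are illustrative:

services:
  my-worker:
    image: registry.gitlab.com/myteam/my-worker
    deploy:

x-terraform:
  ...

x-ecs-cluster:
  name: my-cluster

x-ecs-service:
  name: my-worker
  stages:
    default:
      env-service-stage: "qa"
  deploy:
    compatibilities:
      - ec2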

Using Secrets

version: '3.3'

services:
  skitai-app:
    environment:
      - DB_PASSWORD=$DB_PASSWORD

secrets:
  DB_PASSWORD:
    name: "arn:aws:secretsmanager:ap-northeast-1:0000000000:secret:gitlab/registry/hansroh-PrENMF:DBPASSWORD::"
    external: true

At ECS deploy time, the DB_PASSWORD environment variable is overwritten by the ECS service with the secret value referenced by secrets.DB_PASSWORD.name.
