
AWS ECS Deployment Tool With Terraform


Introduction

Deploy to ECS using Docker Compose and Terraform.

You only need to manage the Docker Compose YAML file and run:

ecsdep cluster create
ecsdep service up

That's all.

Running Docker

Local Testing

The Docker image contains Terraform, awscli and ecsdep.

docker run -d --privileged \
    --name docker \
    -v /path/to/myproject:/app \
    hansroh/dep:dind
docker exec -it docker bash

Within the container,

pip3 install -U ecsdep

Gitlab CI/CD

Add these lines into .gitlab-ci.yml:

image: hansroh/dep:latest
services:
  - name: docker:dind
    alias: dind-service
before_script:
  - pip3 install -U ecsdep
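
A deploy job can then call ecsdep directly. The following is a minimal sketch (job and stage names are illustrative; CI_COMMIT_SHA is provided automatically by the GitLab runner, and SERVICE_STAGE is explained in the Staging section below):

stages:
  - deploy

deploy-qa:
  stage: deploy
  script:
    # AWS and registry credentials are assumed to be configured as CI/CD variables
    - export SERVICE_STAGE=qa
    - ecsdep service up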

Creating ECS Cluster and Deploy Services

Creating a Base Docker Compose Config File

Create a Dockerfile and a docker-compose.yml.
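
A minimal Dockerfile sketch for a service listening on port 5000 (the base image, requirements.txt and app.py are illustrative placeholders; adapt them to your application):

# illustrative base image
FROM python:3.11-slim
WORKDIR /app
COPY . .
# requirements.txt and app.py are placeholders for your own project files
RUN pip3 install -r requirements.txt
EXPOSE 5000
CMD ["python3", "app.py"]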

The following docker-compose.yml launches one container named skitai-app.

version: '3.3'

services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    container_name: skitai-app
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 5000:5000

Make sure the service name (skitai-app) is the same as the container_name.

Then test the image build and the container:

docker-compose build
docker-compose up -d
docker-compose down

Adding ECS Related Settings

Specify Deploy Containers

Add the services.skitai-app.deploy key. Otherwise, this container will not be included in the deployment.

services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    deploy:

Image Registry Pull Credential

For a private registry, you need a pull credential stored in AWS Secrets Manager (see the AWS ECS private registry authentication documentation).

services:
  skitai-app:
    image: registry.gitlab.com/skitai/ecsdep
    x-ecs-pull-credentials: arn:aws:secretsmanager:ap-northeast-2:000000000:secret:gitlab/registry/mysecret-PrENMF

Logging (Optional)

To integrate with a CloudWatch log group:

services:
  skitai-app:
    logging:
      x-ecs-driver: awslogs

Container Health Checking (Optional)

    healthcheck:
      test:
        - "CMD-SHELL"
        - "wget -O/dev/null -q http://localhost:5000 || exit 1"

Container-Level Resource Requirements

For a minimum memory reservation (the soft memory limit):

services:
  skitai-app:
    deploy:
      resources:
        reservations:
          memory: "256M"

For a hard memory limit:

    deploy:
      resources:
        reservations:
          memory: "256M"
        limits:
          memory: "320M"

Make sure the hard limit is greater than or equal to the reservation value.

For a minimum CPU units reservation:

    deploy:
      resources:
        reservations:
          cpus: "1024"

1024 means 1 vCPU. ECS will place the container on an instance that fulfills these reservation requirements. The value of cpus must be a string.

If you are using a GPU:

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

But ecsdep ignores the statement above. Just add x-ecs-gpus:

    deploy:
      resources:
        reservations:
          x-ecs-gpus: 1
          devices: ...

OK, test the Docker build again:

docker-compose build

Terraform Setting

services:
  skitai-app:
     ...

x-terraform:
  provider: aws
  region: ap-northeast-2
  template-version: 1.1
  state-backend:
    region: "ap-northeast-2"
    bucket: "states-data"
    key-prefix: "terraform/ecs-cluster"

Make sure you have created the S3 bucket for the Terraform state backend.
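
For example, the bucket from the configuration above can be created once with the awscli:

aws s3 mb s3://states-data --region ap-northeast-2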

Cluster Settings

Prerequisites

  • AWS/Gitlab credentials

    docker login -u <gitlab username> -p <gitlab deploy-token> registry.gitlab.com
    aws configure set aws_access_key_id <AWS_ECS_ACCESS_KEY_ID>
    aws configure set aws_secret_access_key <AWS_ECS_SECRET_ACCESS_KEY>
    
  • An AWS certificate (ACM) for your service domain (e.g. mydomain.com and *.mydomain.com)

  • An SSH key file. To generate a key pair:

    ssh-keygen -t rsa -b 2048 -C "email@example.com" -f ./mykeypair
    

    This creates two files: mykeypair is the private key and mykeypair.pub is the public key. Keep the private key file safe; you will need it to access your EC2 instances.

    Also, mykeypair.pub must be placed alongside the current YAML file.

services:
  skitai-app:
     ...

x-ecs-cluster:
  name: my-cluster
  public-key-file: "mykeypair.pub"
  instance-type: t3.medium
  ami: amzn2-ami-ecs-hvm-*-x86_64-*
  autoscaling:
    desired: 2
    min: 2
    max: 10
    cpu: 60
    memory: 60
  loadbalancer:
    cert-name: mydomain.com
  vpc:
    cidr_block: 10.0.0.0/16

It creates resources like:

  • Load Balancer
  • VPC with subnets: 10.0.10.0/24, 10.0.20.0/24 and 10.0.30.0/24
  • Cluster Instance Auto Scaling Launch Configuration
  • Cluster Auto Scaling Group
  • Cluster

Cluster-Level Auto Scaling

autoscaling.cpu means: when the total CPU reservation of your containers reaches 60% of the CPU units of all cluster instances, ECS instances are scaled out. If set to 0, no scaling happens.

autoscaling.memory works the same way.
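
For example, to keep the instance count fixed and disable metric-based cluster scaling, a sketch like this should work:

x-ecs-cluster:
  autoscaling:
    desired: 2
    min: 2
    max: 2
    cpu: 0      # disable scaling on CPU reservation
    memory: 0   # disable scaling on memory reservation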

Instance Type

If you specified x-ecs-gpus, the instance-type must be a GPU instance type.

VPC

If you are considering VPC peering, choose cidr_block carefully so it does not overlap with the peered VPCs.
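
For example, if a peered VPC already uses 10.0.0.0/16, pick a non-overlapping block (value illustrative):

x-ecs-cluster:
  vpc:
    cidr_block: 10.1.0.0/16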

Creating/Destroying the ECS Cluster

Finally, you can create the cluster.

ecsdep looks for a default YAML file named docker-compose.yml or compose.ecs.yml.

ecsdep cluster create
ecsdep cluster destroy

Specifying a file explicitly is also possible:

ecsdep -f /path/to/docker-compose.yml cluster create
ecsdep -f /path/to/docker-compose.yml cluster destroy

Deploying Service

x-ecs-service:
  name: ecsdep
  stages:
    default:
      env-service-stage: "qa"
      hosts: ["qa.myservice.com"]
      listener-priority: 100
    production:
      env-service-stage: "production"
      hosts: ["myservice.com"]
      listener-priority: 101
      autoscaling:
        min: 3
        max: 7
  loadbalancing:
    pathes:
      - /*
    protocol: http
    healthcheck:
      path: "/ping"
      matcher: "200,301,302,404"
  deploy:
    compatibilities:
      - ec2
    resources:
      limits:
        memory: 256M
        cpus: "1024"
    autoscaling:
      desired: 1
      min: 1
      max: 4
      cpu: 100
      memory: 80

Staging

stages lets you define deploy stages such as production, qa or staging. ecsdep reads the environment variable SERVICE_STAGE.

export SERVICE_STAGE=qa

The current deploy stage is selected by matching the current SERVICE_STAGE value against env-service-stage.

If SERVICE_STAGE is qa, the load balancer routes qa.myservice.com to your container.
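
With the example configuration above, deploying the production stage would look like this:

export SERVICE_STAGE=production   # matches env-service-stage: "production"
ecsdep service up                 # routed to myservice.com with listener-priority 101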

Auto Scaling

Default container-level auto-scaling settings are placed in x-ecs-service.deploy.autoscaling, but each stage can override these values via x-ecs-service.stages.[stage name].autoscaling (as the production stage does above).

Load Balancing (Optional)

The load balancer routes requests matching loadbalancing.pathes to your container.

Using Fargate

If you want a Fargate deployment, set x-ecs-service.deploy.compatibilities to fargate.
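
A minimal sketch of the relevant part (the CPU/memory values are illustrative and must form a valid Fargate combination, see Resource Limiting below):

x-ecs-service:
  deploy:
    compatibilities:
      - fargate
    resources:
      limits:
        cpus: "256"
        memory: 512M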

Resource Limiting

It is required when launching with 'fargate', but optional when launching with 'ec2'.

x-ecs-service.deploy.autoscaling.cpu and x-ecs-service.deploy.autoscaling.memory are both percentages of the reserved CPU units or memory (in MB).

If x-ecs-service.deploy.resources.limits.cpus is defined, the service cannot use CPU units above this value.

If x-ecs-service.deploy.resources.limits.memory is defined and your container exceeds this value, the container will be terminated.

Deploying Service

You need the environment variable CI_COMMIT_SHA. The first 8 characters of the git commit hash are used as the image tag. It is provided automatically on the GitLab runner, but for local testing, latest is just fine.

export CI_COMMIT_SHA=latest
export SERVICE_STAGE=qa

ecsdep -f dep/compose.ecs.yml service up

Whenever you run ecsdep service up, your containers are deployed to ECS via a rolling update.

As a result, the following AWS resources are created or updated:

  • Task Definition
  • Service (updated and run)

Note: Sometimes service-level auto-scaling settings are not applied on the initial deploy. I don't know why, but please deploy twice in this case.

Shutdown/Remove Service

ecsdep service down

Deploying Other Services Into Cluster

It is recommended to keep the cluster settings in your main app only.

In the other services' YAML files, keep only x-ecs-cluster.name and remove the other x-ecs-cluster settings.

You only need to care about the x-ecs-service and services definitions.

services:
  skitai-app-2:
     ...

x-terraform:
  ...

x-ecs-cluster:
  name: my-cluster

Deploying Service With Multiple Containers

services:
  skitai-app:
    deploy:
    ports:
      - 5000
    healthcheck:
      test:
        - "CMD-SHELL"
        - "wget -O/dev/null -q http://localhost:5000 || exit 1"

  skitai-nginx:
    depends_on:
      - skitai-app
    x-ecs-wait-conditions:
      - HEALTHY
    ports:
      - "80:80"

Make sure only a single load-balanced container has a host port mapping like "80:80"; the other containers should expose only a Docker-internal port like "5000".

Deploying Non Web Service

Remove the ports and load-balancing settings (a minimal sketch follows the list):

  • services.your-app.ports
  • x-ecs-service.loadbalancing
  • x-ecs-service.stages.default.hosts
  • x-ecs-service.stages.default.listener-priority
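
A minimal sketch of a non-web (background worker) service, assuming a hypothetical service named my-worker; it simply follows the earlier examples with the items above removed:

services:
  my-worker:
    image: registry.gitlab.com/skitai/ecsdep
    deploy:
      resources:
        reservations:
          memory: "256M"

x-ecs-service:
  name: my-worker
  stages:
    default:
      env-service-stage: "qa"
  deploy:
    compatibilities:
      - ec2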

Using Secrets

version: '3.3'

services:
  skitai-app:
    environment:
      - DB_PASSWORD=$DB_PASSWORD

secrets:
  DB_PASSWORD:
    name: "arn:aws:secretsmanager:ap-northeast-1:0000000000:secret:gitlab/registry/hansroh-PrENMF:DBPASSWORD::"
    external: true

At deploy time, the DB_PASSWORD environment variable is overwritten by the AWS ECS service with the value referenced by secrets.DB_PASSWORD.name. See the AWS Secrets Manager documentation.

