
The CDK Construct Library for AWS::EKS


Amazon EKS Construct Library

---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.


This construct library allows you to define Amazon Elastic Container Service for Kubernetes (EKS) clusters programmatically, as well as Kubernetes resource manifests within those clusters.

This example defines an Amazon EKS cluster with the following configuration:

  • 2x m5.large instances (this instance type suits most common use cases and is good value for money)
  • Dedicated VPC with default configuration (see ec2.Vpc)
  • A Kubernetes pod with a container based on the paulbouwer/hello-kubernetes image.
# Example may have issues. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster(self, "hello-eks")

cluster.add_resource("mypod", {
    # The manifest is a plain dict using standard Kubernetes field names.
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "mypod"},
    "spec": {
        "containers": [{
            "name": "hello",
            "image": "paulbouwer/hello-kubernetes:1.5",
            "ports": [{"containerPort": 8080}]
        }]
    }
})

Here is a complete sample.
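
For reference, a minimal, self-contained sketch of such a sample might look like the following (the stack name and app wiring are illustrative and not taken from the linked sample; the manifest is passed as a plain dict using standard Kubernetes field names):

# A minimal sketch of a complete CDK app, assuming the aws-cdk.core and
# aws-cdk.aws-eks 1.16.0 packages are installed.
from aws_cdk import core
import aws_cdk.aws_eks as eks


class HelloEksStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Cluster with a dedicated VPC and the default 2x m5.large capacity.
        cluster = eks.Cluster(self, "hello-eks")

        # Apply the pod manifest shown above.
        cluster.add_resource("mypod", {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {"name": "mypod"},
            "spec": {
                "containers": [{
                    "name": "hello",
                    "image": "paulbouwer/hello-kubernetes:1.5",
                    "ports": [{"containerPort": 8080}]
                }]
            }
        })


app = core.App()
HelloEksStack(app, "hello-eks-stack")
app.synth()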

Capacity

By default, eks.Cluster is created with 2x m5.large instances.

# Example may have issues. See https://github.com/aws/jsii/issues/826
eks.Cluster(self, "cluster-two-m5-large")

The quantity and instance type for the default capacity can be specified through the defaultCapacity and defaultCapacityInstance props:

# Example may have issues. See https://github.com/aws/jsii/issues/826
eks.Cluster(self, "cluster",
    default_capacity=10,
    default_capacity_instance=ec2.InstanceType("m2.xlarge")
)

To disable the default capacity, simply set defaultCapacity to 0:

# Example may have issues. See https://github.com/aws/jsii/issues/826
eks.Cluster(self, "cluster-with-no-capacity", default_capacity=0)

The cluster.defaultCapacity property references the AutoScalingGroup resource created for the default capacity. It will be undefined (None in Python) if defaultCapacity is set to 0:

# Example may have issues. See https://github.com/aws/jsii/issues/826
cluster = eks.Cluster(self, "my-cluster")
cluster.default_capacity.scale_on_cpu_utilization("up",
    target_utilization_percent=80
)

You can add customized capacity through cluster.addCapacity() or cluster.addAutoScalingGroup():

# Example may have issues. See https://github.com/aws/jsii/issues/826
cluster.add_capacity("frontend-nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    desired_capacity=3,
    vpc_subnets={"subnet_type": ec2.SubnetType.PUBLIC}
)

Spot Capacity

If spotPrice is specified, the capacity will be purchased from spot instances:

# Example may have issues. See https://github.com/aws/jsii/issues/826
cluster.add_capacity("spot",
    spot_price="0.1094",
    instance_type=ec2.InstanceType("t3.large"),
    max_capacity=10
)

Spot instance nodes will be labeled with lifecycle=Ec2Spot and tainted with PreferNoSchedule.

The Spot Termination Handler DaemonSet will be installed on these nodes. The termination handler leverages EC2 Spot Instance Termination Notices to gracefully stop all pods running on spot nodes that are about to be terminated.
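
Workloads can target these nodes through the lifecycle=Ec2Spot label. The following is a minimal sketch (the deployment name and image are illustrative); because PreferNoSchedule is only a soft taint, no toleration is strictly required:

# A sketch: schedule a workload only onto spot nodes by selecting the
# lifecycle=Ec2Spot label described above (name and image are illustrative).
cluster.add_resource("spot-workload", {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "spot-workload"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "spot-workload"}},
        "template": {
            "metadata": {"labels": {"app": "spot-workload"}},
            "spec": {
                # Only nodes carrying the spot label are eligible.
                "nodeSelector": {"lifecycle": "Ec2Spot"},
                "containers": [{
                    "name": "app",
                    "image": "paulbouwer/hello-kubernetes:1.5"
                }]
            }
        }
    }
})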

Bootstrapping

When adding capacity, you can specify options for /etc/eks/bootstrap.sh, which is responsible for associating the node with the EKS cluster. For example, you can use kubeletExtraArgs to add custom node labels or taints.

# Example may have issues. See https://github.com/aws/jsii/issues/826
cluster.add_capacity("spot",
    instance_type=ec2.InstanceType("t3.large"),
    desired_capacity=2,
    bootstrap_options={
        "kubelet_extra_args": "--node-labels foo=bar,goo=far",
        "aws_api_retry_attempts": 5
    }
)

To disable bootstrapping altogether (i.e. to fully customize user-data), set bootstrapEnabled to false when you add the capacity.
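
A sketch of what that might look like, assuming you supply the node bootstrap logic yourself (the user-data commands below are placeholders):

# A sketch with bootstrapping disabled; the instances will not join the
# cluster unless the user data does so explicitly.
asg = cluster.add_capacity("custom-bootstrap-nodes",
    instance_type=ec2.InstanceType("t3.large"),
    desired_capacity=2,
    bootstrap_enabled=False
)

# Fully custom user data (placeholder commands; replace <cluster-name>
# with your cluster's name, e.g. via cluster.cluster_name).
asg.add_user_data(
    "set -o xtrace",
    "/etc/eks/bootstrap.sh <cluster-name> --kubelet-extra-args '--node-labels foo=bar'"
)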

Masters Role

The Amazon EKS construct library allows you to specify an IAM role that will be granted system:masters privileges on your cluster.

Without specifying a mastersRole, you will not be able to interact manually with the cluster.

The following example defines an IAM role that can be assumed by all users in the account and shows how to use the mastersRole property to map this role to the Kubernetes system:masters group:

# Example may have issues. See https://github.com/aws/jsii/issues/826
# first define the role
cluster_admin = iam.Role(self, "AdminRole",
    assumed_by=iam.AccountRootPrincipal()
)

# now define the cluster and map role to "masters" RBAC group
eks.Cluster(self, "Cluster",
    masters_role=cluster_admin
)

When you cdk deploy this CDK app, an output containing the update-kubeconfig command will be printed.

Something like this:

Outputs:
eks-integ-defaults.ClusterConfigCommand43AAE40F = aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y

Copy & paste the "aws eks update-kubeconfig ..." command into your shell to connect to your EKS cluster with the "masters" role.

Now, provided the AWS CLI is configured with credentials for a user that is trusted by the masters role, you should be able to interact with your cluster through kubectl (the example above trusts all users in the account).

For example:

$ aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
Added new context arn:aws:eks:eu-west-2:112233445566:cluster/cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 to /Users/boom/.kube/config

$ kubectl get nodes # list all nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-66.eu-west-2.compute.internal    Ready    <none>   21m   v1.13.7-eks-c57ff8
ip-10-0-169-151.eu-west-2.compute.internal   Ready    <none>   21m   v1.13.7-eks-c57ff8

$ kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/aws-node-fpmwv             1/1     Running   0          21m
pod/aws-node-m9htf             1/1     Running   0          21m
pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
pod/kube-proxy-d4jrh           1/1     Running   0          21m
pod/kube-proxy-q7hh7           1/1     Running   0          21m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   23m

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node     2         2         2       2            2           <none>          23m
daemonset.apps/kube-proxy   2         2         2       2            2           <none>          23m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           23m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5cb4fb54c7   2         2         2       23m

For your convenience, an AWS CloudFormation output will automatically be included in your template and will be printed when running cdk deploy.

NOTE: if the cluster is configured with kubectlEnabled: false, it will be created with the role/user that created the AWS CloudFormation stack. See Kubectl Support for details.

Kubernetes Resources

The KubernetesResource construct or cluster.addResource method can be used to apply Kubernetes resource manifests to this cluster.

The following examples will deploy the paulbouwer/hello-kubernetes service on the cluster:

# Example may have issues. See https://github.com/aws/jsii/issues/826
app_label = {"app": "hello-kubernetes"}

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": app_label},
        "template": {
            "metadata": {"labels": app_label},
            "spec": {
                "containers": [{
                    "name": "hello-kubernetes",
                    "image": "paulbouwer/hello-kubernetes:1.5",
                    "ports": [{"containerPort": 8080}]
                }]
            }
        }
    }
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "hello-kubernetes"},
    "spec": {
        "type": "LoadBalancer",
        "ports": [{"port": 80, "targetPort": 8080}],
        "selector": app_label
    }
}

# option 1: use a construct
KubernetesResource(self, "hello-kub",
    cluster=cluster,
    manifest=[deployment, service]
)

# or, option 2: use `addResource`
cluster.add_resource("hello-kub", service, deployment)

Kubernetes resources are implemented as AWS CloudFormation resources in the CDK. This means that if the resource is deleted from your code (or the stack is deleted), the next cdk deploy will issue a kubectl delete command and the Kubernetes resources will be deleted.

AWS IAM Mapping

As described in the Amazon EKS User Guide, you can map AWS IAM users and roles to Kubernetes Role-based access control (RBAC).

The Amazon EKS construct manages the aws-auth ConfigMap Kubernetes resource on your behalf and exposes an API through cluster.awsAuth for mapping users, roles, and accounts.

Furthermore, when auto-scaling capacity is added to the cluster (through cluster.addCapacity or cluster.addAutoScalingGroup), the IAM instance role of the auto-scaling group is automatically mapped to RBAC so that nodes can connect to the cluster; no manual mapping is required.

NOTE: cluster.awsAuth will throw an error if your cluster is created with kubectlEnabled: false.

For example, let's say you want to grant an IAM user administrative privileges on your cluster:

# Example may have issues. See https://github.com/aws/jsii/issues/826
admin_user = iam.User(self, "Admin")
cluster.aws_auth.add_user_mapping(admin_user, groups=["system:masters"])

A convenience method for mapping a role to the system:masters group is also available:

# Example may have issues. See https://github.com/aws/jsii/issues/826
cluster.aws_auth.add_masters_role(role)
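
Roles and entire accounts can be mapped in the same way through cluster.awsAuth. A short sketch (the role, username, group, and account ID are illustrative):

# Map an IAM role to a Kubernetes username and RBAC groups
# (role, username, and group names are illustrative; the group still
# needs to be bound to permissions via Kubernetes RBAC).
ops_role = iam.Role(self, "OpsRole",
    assumed_by=iam.AccountRootPrincipal()
)
cluster.aws_auth.add_role_mapping(ops_role,
    username="ops-user",
    groups=["my-rbac-group"]
)

# Trust all IAM entities from another AWS account (illustrative account id).
cluster.aws_auth.add_account("111122223333")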

Node SSH Access

If you want to be able to SSH into your worker nodes, you must already have an SSH key pair in the region you're deploying to and pass its name when adding capacity, and the hosts must be reachable (they need a public IP and you must be allowed to connect to them on port 22):

# Example may have issues. See https://github.com/aws/jsii/issues/826
asg = cluster.add_capacity("Nodes",
    instance_type=ec2.InstanceType("t2.medium"),
    vpc_subnets={"subnet_type": ec2.SubnetType.PUBLIC},
    key_name="my-key-name"
)

# Replace with desired IP
asg.connections.allow_from(ec2.Peer.ipv4("1.2.3.4/32"), ec2.Port.tcp(22))

If you want to SSH into nodes in a private subnet, you should set up a bastion host in a public subnet. That setup is recommended, but is unfortunately beyond the scope of this documentation.
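
As a rough sketch only, the worker nodes' security group could be opened to an existing bastion's security group instead of to the internet (the security group ID below is hypothetical, created elsewhere):

# Rough sketch: assume a bastion host already exists in a public subnet
# of the same VPC; "sg-12345678" is a hypothetical security group id.
bastion_sg = ec2.SecurityGroup.from_security_group_id(self, "BastionSG", "sg-12345678")

# Allow SSH to the worker nodes only from the bastion's security group,
# instead of exposing port 22 publicly.
asg.connections.allow_from(bastion_sg, ec2.Port.tcp(22), "SSH from bastion")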

kubectl Support

When you create an Amazon EKS cluster, the IAM entity (user or role) that creates the cluster, such as a federated user, is automatically granted system:masters permissions in the cluster's RBAC configuration.

In order to allow programmatically defining Kubernetes resources in your AWS CDK app and provisioning them through AWS CloudFormation, we will need to assume this "masters" role every time we want to issue kubectl operations against your cluster.

At the moment, the AWS::EKS::Cluster AWS CloudFormation resource does not support this behavior, so in order to support "programmatic kubectl", such as applying manifests and mapping IAM roles from within your CDK application, the Amazon EKS construct library uses a custom resource for provisioning the cluster. This custom resource is executed with an IAM role that we can then use to issue kubectl commands.

The default behavior of this library is to use this custom resource in order to retain programmatic control over the cluster. In other words: to allow you to define Kubernetes resources in your CDK code instead of having to manage your Kubernetes applications through a separate system.

One of the implications of this design is that, by default, the user who provisioned the AWS CloudFormation stack (executed cdk deploy) will not have administrative privileges on the EKS cluster.

  1. Additional resources will be synthesized into your template (the AWS Lambda function, the role and policy).
  2. As described in Interacting with Your Cluster, if you wish to be able to manually interact with your cluster, you will need to map an IAM role or user to the system:masters group. This can be done either by specifying a mastersRole when the cluster is defined, by calling cluster.awsAuth.addMastersRole, or by explicitly mapping an IAM role or IAM user to the relevant Kubernetes RBAC groups using cluster.awsAuth.addRoleMapping and/or cluster.awsAuth.addUserMapping.

If you wish to disable the programmatic kubectl behavior and use the standard AWS::EKS::Cluster resource, you can specify kubectlEnabled: false when you define the cluster:

# Example may have issues. See https://github.com/aws/jsii/issues/826
eks.Cluster(self, "cluster",
    kubectl_enabled=False
)

Take care: a change in this property will cause the cluster to be destroyed and a new cluster to be created.

When kubectl is disabled, you should be aware of the following:

  1. When you log in to your cluster, you don't need to specify --role-arn as long as you are using the same user that created the cluster.
  2. As described in the Amazon EKS User Guide, you will need to manually edit the aws-auth ConfigMap when you add capacity in order to map the IAM instance role to RBAC to allow nodes to join the cluster.
  3. Any eks.Cluster APIs that depend on programmatic kubectl support will fail with an error: cluster.addResource, cluster.awsAuth, props.mastersRole.

Roadmap

  • AutoScaling (combine EC2 and Kubernetes scaling)
