
AWS Batch Construct Library

---

cdk-constructs: Experimental

The APIs of higher level constructs in this module are experimental and under active development. They are subject to non-backward compatible changes or removal in any future version. These are not subject to the Semantic Versioning model and breaking changes will be announced in the release notes. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.


This module is part of the AWS Cloud Development Kit project.

AWS Batch is a batch processing tool for efficiently running hundreds of thousands of computing jobs in AWS. Batch can dynamically provision different types of compute resources based on the resource requirements of submitted jobs.

AWS Batch simplifies the planning, scheduling, and execution of your batch workloads across a full range of compute services like Amazon EC2 and Spot Instances.

Batch achieves this by utilizing queue processing of batch job requests. To successfully submit a job for execution, you need the following resources:

  1. Job Definition - groups various job properties (container image, resource requirements, environment variables, ...) into a single definition, which is used at job submission time.
  2. Compute Environment - the execution runtime of submitted batch jobs.
  3. Job Queue - the queue to which batch jobs are submitted via the AWS SDK/CLI.
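
A minimal end-to-end sketch that wires these three resources together (using the constructs shown in the sections below) might look like this:

# vpc: ec2.Vpc

# A managed compute environment backed by the given VPC
compute_environment = batch.ComputeEnvironment(self, "ComputeEnv",
    compute_resources=batch.ComputeResources(vpc=vpc)
)

# A queue that routes submitted jobs to the compute environment
job_queue = batch.JobQueue(self, "JobQueue",
    compute_environments=[batch.JobQueueComputeEnvironment(
        compute_environment=compute_environment,
        order=1
    )]
)

# A job definition describing the container to run
job_definition = batch.JobDefinition(self, "JobDef",
    container=batch.JobDefinitionContainer(
        image=ecs.ContainerImage.from_registry("docker/whalesay")
    )
)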

For more information on AWS Batch, visit the AWS Docs for Batch.

Compute Environment

At the core of AWS Batch is the compute environment. All batch jobs are processed within a compute environment, which uses resources like On-Demand/Spot EC2 instances or Fargate.

In MANAGED mode, AWS will handle the provisioning of compute resources to accommodate the demand. Otherwise, in UNMANAGED mode, you will need to manage the provisioning of those resources.

Below is an example of each available type of compute environment:

# vpc: ec2.Vpc


# default is managed
aws_managed_environment = batch.ComputeEnvironment(self, "AWS-Managed-Compute-Env",
    compute_resources=batch.ComputeResources(
        vpc=vpc
    )
)

customer_managed_environment = batch.ComputeEnvironment(self, "Customer-Managed-Compute-Env",
    managed=False
)

Spot-Based Compute Environment

It is possible to have AWS Batch submit Spot Fleet requests to obtain compute resources. Below is an example of how this can be done:

vpc = ec2.Vpc(self, "VPC")

spot_environment = batch.ComputeEnvironment(self, "MySpotEnvironment",
    compute_resources=batch.ComputeResources(
        type=batch.ComputeResourceType.SPOT,
        bid_percentage=75,  # Bids for resources at 75% of the on-demand price
        vpc=vpc
    )
)

Compute Environments and Security Groups

Compute Environments implement the IConnectable interface, which means you can use connections on other CDK resources to manipulate the security groups and allow access.

For example, allowing a Compute Environment to access an EFS filesystem:

import aws_cdk.aws_efs as efs

# file_system: efs.FileSystem
# compute_environment: batch.ComputeEnvironment


file_system.connections.allow_default_port_from(compute_environment)

Fargate Compute Environment

It is possible to have AWS Batch submit jobs to be run on Fargate compute resources. Below is an example of how this can be done:

vpc = ec2.Vpc(self, "VPC")

fargate_spot_environment = batch.ComputeEnvironment(self, "MyFargateEnvironment",
    compute_resources=batch.ComputeResources(
        type=batch.ComputeResourceType.FARGATE_SPOT,
        vpc=vpc
    )
)

Understanding Progressive Allocation Strategies

AWS Batch uses an allocation strategy to determine which compute resource will efficiently handle incoming job requests. By default, BEST_FIT will pick an available compute instance based on vCPU requirements. If none exist, the job will wait until resources become available. However, with this strategy you may have jobs waiting in the queue unnecessarily despite having more powerful instances available. Below is an example of how that situation might look:

Compute Environment:

1. m5.xlarge => 4 vCPU
2. m5.2xlarge => 8 vCPU

Job Queue:
---------
| A | B |
---------

Job Requirements:
A => 4 vCPU - ALLOCATED TO m5.xlarge
B => 2 vCPU - WAITING

In this situation, Batch will allocate Job A to compute resource #1 because it is the most cost-efficient resource that matches the vCPU requirement. However, with this BEST_FIT strategy, Job B will not be allocated to our other available compute resource even though it is powerful enough to handle it. Instead, it will wait until the first job finishes processing or for a similar m5.xlarge resource to be provisioned.

The alternative is to use the BEST_FIT_PROGRESSIVE strategy, which allows remaining jobs to be handled by larger instances regardless of vCPU requirement and cost.
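
The allocation strategy is configured on the compute resources of the environment. Below is a sketch, assuming the allocation_strategy property and AllocationStrategy enum exposed by this module:

# vpc: ec2.Vpc

progressive_environment = batch.ComputeEnvironment(self, "ProgressiveEnvironment",
    compute_resources=batch.ComputeResources(
        # Allow jobs to be placed on larger instance types when an exact fit is unavailable
        allocation_strategy=batch.AllocationStrategy.BEST_FIT_PROGRESSIVE,
        vpc=vpc
    )
)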

Launch template support

Simply define your Launch Template:

my_launch_template = ec2.CfnLaunchTemplate(self, "LaunchTemplate",
    launch_template_name="extra-storage-template",
    launch_template_data=ec2.CfnLaunchTemplate.LaunchTemplateDataProperty(
        block_device_mappings=[ec2.CfnLaunchTemplate.BlockDeviceMappingProperty(
            device_name="/dev/xvdcz",
            ebs=ec2.CfnLaunchTemplate.EbsProperty(
                encrypted=True,
                volume_size=100,
                volume_type="gp2"
            )
        )
        ]
    )
)

And provide launchTemplateName:

# vpc: ec2.Vpc
# my_launch_template: ec2.CfnLaunchTemplate


my_compute_env = batch.ComputeEnvironment(self, "ComputeEnv",
    compute_resources=batch.ComputeResources(
        launch_template=batch.LaunchTemplateSpecification(
            launch_template_name=my_launch_template.launch_template_name
        ),
        vpc=vpc
    ),
    compute_environment_name="MyStorageCapableComputeEnvironment"
)

Or provide launchTemplateId instead:

# vpc: ec2.Vpc
# my_launch_template: ec2.CfnLaunchTemplate


my_compute_env = batch.ComputeEnvironment(self, "ComputeEnv",
    compute_resources=batch.ComputeResources(
        launch_template=batch.LaunchTemplateSpecification(
            launch_template_id=my_launch_template.ref
        ),
        vpc=vpc
    ),
    compute_environment_name="MyStorageCapableComputeEnvironment"
)

Note that if your launch template explicitly specifies network interfaces, for example to use an Elastic Fabric Adapter, you must use those security groups rather than allow the ComputeEnvironment to define them. This is done by setting useNetworkInterfaceSecurityGroups in the launch template property of the environment. For example:

# vpc: ec2.Vpc


efa_security_group = ec2.SecurityGroup(self, "EFASecurityGroup",
    vpc=vpc
)

launch_template_eFA = ec2.CfnLaunchTemplate(self, "LaunchTemplate",
    launch_template_name="LaunchTemplateName",
    launch_template_data=ec2.CfnLaunchTemplate.LaunchTemplateDataProperty(
        network_interfaces=[ec2.CfnLaunchTemplate.NetworkInterfaceProperty(
            device_index=0,
            subnet_id=vpc.private_subnets[0].subnet_id,
            interface_type="efa",
            groups=[efa_security_group.security_group_id]
        )]
    )
)

compute_environment_eFA = batch.ComputeEnvironment(self, "EFAComputeEnv",
    managed=True,
    compute_resources=batch.ComputeResources(
        vpc=vpc,
        launch_template=batch.LaunchTemplateSpecification(
            launch_template_name=launch_template_eFA.launch_template_name,
            use_network_interface_security_groups=True
        )
    )
)

Importing an existing Compute Environment

To import an existing batch compute environment, call ComputeEnvironment.fromComputeEnvironmentArn().

Below is an example:

compute_env = batch.ComputeEnvironment.from_compute_environment_arn(self, "imported-compute-env", "arn:aws:batch:us-east-1:555555555555:compute-environment/My-Compute-Env")

Change the baseline AMI of the compute resources

Occasionally, you will need to deviate from the default processing AMI.

ECS Optimized Amazon Linux 2 example:

# vpc: ec2.Vpc

my_compute_env = batch.ComputeEnvironment(self, "ComputeEnv",
    compute_resources=batch.ComputeResources(
        image=ecs.EcsOptimizedAmi(
            generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
        ),
        vpc=vpc
    )
)

Custom AMI example:

# vpc: ec2.Vpc

my_compute_env = batch.ComputeEnvironment(self, "ComputeEnv",
    compute_resources=batch.ComputeResources(
        image=ec2.MachineImage.generic_linux({
            "[aws-region]": "[ami-ID]"
        }),
        vpc=vpc
    )
)

Job Queue

Jobs are always submitted to a specific queue. This means that you have to create a queue before you can start submitting jobs. Each queue is mapped to at least one (and no more than three) compute environments. When a job is scheduled for execution, AWS Batch will select the compute environment based on ordinal priority and available capacity in each environment.

# compute_environment: batch.ComputeEnvironment

job_queue = batch.JobQueue(self, "JobQueue",
    compute_environments=[batch.JobQueueComputeEnvironment(
        # Defines a collection of compute resources to handle assigned batch jobs
        compute_environment=compute_environment,
        # Order determines the allocation order for jobs (i.e. Lower means higher preference for job assignment)
        order=1
    )
    ]
)

Priority-Based Queue Example

Sometimes you might have jobs that are more important than others and that, when submitted, should take precedence over existing jobs. To achieve this, you can create a priority-based execution strategy by assigning each queue its own priority:

# shared_compute_envs: batch.ComputeEnvironment

high_prio_queue = batch.JobQueue(self, "HighPrioJobQueue",
    compute_environments=[batch.JobQueueComputeEnvironment(
        compute_environment=shared_compute_envs,
        order=1
    )],
    priority=2
)

low_prio_queue = batch.JobQueue(self, "LowPrioJobQueue",
    compute_environments=[batch.JobQueueComputeEnvironment(
        compute_environment=shared_compute_envs,
        order=1
    )],
    priority=1
)

By using the same compute environments for both job queues, we give precedence to high_prio_queue when assigning jobs to the available compute environments.

Importing an existing Job Queue

To import an existing batch job queue, call JobQueue.fromJobQueueArn().

Below is an example:

job_queue = batch.JobQueue.from_job_queue_arn(self, "imported-job-queue", "arn:aws:batch:us-east-1:555555555555:job-queue/High-Prio-Queue")

Job Definition

A Batch Job definition helps AWS Batch understand important details about how to run your application in the scope of a Batch Job. This involves key information like resource requirements, what containers to run, how the compute environment should be prepared, and more. Below is a simple example of how to create a job definition:

import aws_cdk.aws_ecr as ecr


repo = ecr.Repository.from_repository_name(self, "batch-job-repo", "todo-list")

batch.JobDefinition(self, "batch-job-def-from-ecr",
    container=batch.JobDefinitionContainer(
        image=ecs.EcrImage(repo, "latest")
    )
)

Using a local Docker project

Below is an example of how you can create a Batch Job Definition from a local Docker application.

batch.JobDefinition(self, "batch-job-def-from-local",
    container=batch.JobDefinitionContainer(
        # todo-list is a directory containing a Dockerfile to build the application
        image=ecs.ContainerImage.from_asset("../todo-list")
    )
)

Providing custom log configuration

You can provide a custom log driver and its configuration for the container.

import aws_cdk.aws_ssm as ssm


batch.JobDefinition(self, "job-def",
    container=batch.JobDefinitionContainer(
        image=ecs.EcrImage.from_registry("docker/whalesay"),
        log_configuration=batch.LogConfiguration(
            log_driver=batch.LogDriver.AWSLOGS,
            options={"awslogs-region": "us-east-1"},
            secret_options=[
                batch.ExposedSecret.from_parameters_store("xyz", ssm.StringParameter.from_string_parameter_name(self, "parameter", "xyz"))
            ]
        )
    )
)

Using secrets from Secrets Manager

You can set environment variables from Secrets Manager secrets.

import aws_cdk.aws_secretsmanager as secretsmanager


db_secret = secretsmanager.Secret(self, "secret")

batch.JobDefinition(self, "batch-job-def-secrets",
    container=batch.JobDefinitionContainer(
        image=ecs.EcrImage.from_registry("docker/whalesay"),
        secrets={
            "PASSWORD": ecs.Secret.from_secrets_manager(db_secret, "password")
        }
    )
)

It is common practice to invoke other AWS services from within AWS Batch jobs (e.g. using the AWS SDK). For this reason, the AWS_ACCOUNT and AWS_REGION environment variables are always provided by default to the JobDefinition construct, with values inferred from the current context. You can always override them by setting these environment variables explicitly.
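
For example, a sketch (assuming the environment property on JobDefinitionContainer) that overrides AWS_REGION explicitly:

batch.JobDefinition(self, "batch-job-def-env",
    container=batch.JobDefinitionContainer(
        image=ecs.EcrImage.from_registry("docker/whalesay"),
        # Explicitly overrides the default AWS_REGION provided by the construct
        environment={
            "AWS_REGION": "us-west-2"
        }
    )
)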

Importing an existing Job Definition

From ARN

To import an existing batch job definition from its ARN, call JobDefinition.fromJobDefinitionArn().

Below is an example:

job = batch.JobDefinition.from_job_definition_arn(self, "imported-job-definition", "arn:aws:batch:us-east-1:555555555555:job-definition/my-job-definition")

From Name

To import an existing batch job definition from its name, call JobDefinition.fromJobDefinitionName(). If the name is specified without a revision, the latest active revision is used.

Below is an example:

# Without revision
job1 = batch.JobDefinition.from_job_definition_name(self, "imported-job-definition", "my-job-definition")

# With revision
job2 = batch.JobDefinition.from_job_definition_name(self, "imported-job-definition-with-revision", "my-job-definition:3")
