
Constructs for deploying contents to S3 buckets

Project description

AWS S3 Deployment Construct Library

---

End-of-Support

AWS CDK v1 has reached End-of-Support on 2023-06-01. This package is no longer being updated, and users should migrate to AWS CDK v2.

For more information on how to migrate, see the Migrating to AWS CDK v2 guide.


This library allows populating an S3 bucket with the contents of .zip files from other S3 buckets or from local disk.

The following example defines a publicly accessible S3 bucket with web hosting enabled and populates it from a local directory on disk.

website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html",
    public_read_access=True
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    destination_key_prefix="web/static"
)

This is what happens under the hood:

  1. When this stack is deployed (either via cdk deploy or via CI/CD), the contents of the local website-dist directory will be archived and uploaded to an intermediary assets bucket. If there is more than one source, they will be individually uploaded.
  2. The BucketDeployment construct synthesizes a custom CloudFormation resource of type Custom::CDKBucketDeployment into the template. The source bucket/key is set to point to the assets bucket.
  3. The custom resource downloads the .zip archive, extracts it and issues aws s3 sync --delete against the destination bucket (in this case websiteBucket). If there is more than one source, the sources will be downloaded and merged pre-deployment at this step.

If you are referencing the populated bucket in another construct that depends on the files already being there, be sure to use deployment.deployedBucket. This ensures that the bucket deployment has finished before the resource that uses the bucket is created:

# website_bucket: s3.Bucket


deployment = s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./my-website")],
    destination_bucket=website_bucket
)

ConstructThatReadsFromTheBucket(self, "Consumer", {
    # Use 'deployment.deployed_bucket' instead of 'website_bucket' here
    "bucket": deployment.deployed_bucket
})

Supported sources

The following source types are supported for bucket deployments:

  • Local .zip file: s3deploy.Source.asset("/path/to/local/file.zip")
  • Local directory: s3deploy.Source.asset("/path/to/local/directory")
  • Another bucket: s3deploy.Source.bucket(bucket, zip_object_key)
  • String data: s3deploy.Source.data("object-key.txt", "hello, world!") (supports deploy-time values)
  • JSON data: s3deploy.Source.json_data("object-key.json", {"json": "object"}) (supports deploy-time values)

To create a source from a single file, you can pass AssetOptions to exclude all but a single file:

  • Single file: s3deploy.Source.asset("/path/to/local/directory", exclude=["**", "!onlyThisFile.txt"])

IMPORTANT The aws-s3-deployment module is only intended to be used with zip files from trusted sources. Directories bundled by the CDK CLI (by using Source.asset() on a directory) are safe. If you are using Source.asset() or Source.bucket() to reference an existing zip file, make sure you trust the file you are referencing. Zips from untrusted sources might be able to execute arbitrary code in the Lambda Function used by this module, and use its permissions to read or write unexpected files in the S3 bucket.

Retain on Delete

By default, the contents of the destination bucket will not be deleted when the BucketDeployment resource is removed from the stack or when the destination is changed. You can use the option retainOnDelete: false to disable this behavior, in which case the contents will be deleted.
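
For example, a minimal sketch (the construct ID and asset path are illustrative) that disables retention so that the objects are removed together with the deployment:

# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployWebsiteWithoutRetention",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=destination_bucket,
    retain_on_delete=False  # delete the deployed objects when this resource is removed
)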

Configuring this has a few implications you should be aware of:

  • Logical ID Changes

    Changing the logical ID of the BucketDeployment construct without changing the destination (for example, due to refactoring or an intentional ID change) will result in the deletion of the objects. This is because CloudFormation first creates the new resource, which has no effect since the destination hasn't changed, and then deletes the old resource, which deletes the objects because retainOnDelete is false.

  • Destination Changes

    When the destination bucket or prefix is changed, all files in the previous destination will first be deleted and then uploaded to the new destination location. This could have availability implications for your users.

General Recommendations

Shared Bucket

If the destination bucket is not dedicated to the specific BucketDeployment construct (i.e. it is shared with other entities), we recommend always configuring the destinationKeyPrefix property. This prevents the deployment from accidentally deleting data that it didn't upload.
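
For example, a minimal sketch (the bucket variable, construct ID and prefix are illustrative) that confines the deployment to its own prefix in a shared bucket:

# shared_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployToSharedBucket",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=shared_bucket,
    destination_key_prefix="my-app/"  # only keys under this prefix are managed (and pruned) by this deployment
)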

Dedicated Bucket

If the destination bucket is dedicated, it might be reasonable to skip the prefix configuration. In that case, we recommend removing retainOnDelete: false and instead configuring the autoDeleteObjects property on the destination bucket. This avoids the logical ID problem mentioned above.
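
As a sketch of that setup (names are illustrative, and RemovalPolicy is assumed to be imported from aws_cdk.core), the bucket cleans up its own objects, so the deployment does not need retainOnDelete: false:

website_bucket = s3.Bucket(self, "DedicatedWebsiteBucket",
    auto_delete_objects=True,  # objects are deleted when the bucket is removed
    removal_policy=RemovalPolicy.DESTROY
)

s3deploy.BucketDeployment(self, "DeployToDedicatedBucket",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket
)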

Prune

By default, files in the destination bucket that don't exist in the source will be deleted when the BucketDeployment resource is created or updated. You can use the option prune: false to disable this behavior, in which case the files will not be deleted.

# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployMeWithoutDeletingFilesOnDestination",
    sources=[s3deploy.Source.asset("./my-website")],
    destination_bucket=destination_bucket,
    prune=False
)

This option also enables you to use multiple bucket deployments for the same destination bucket and prefix, each with its own characteristics. For example, you can set different cache-control headers based on file extension:

# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "BucketDeployment",
    sources=[s3deploy.Source.asset("./website", exclude=["index.html"])],
    destination_bucket=destination_bucket,
    cache_control=[s3deploy.CacheControl.from_string("max-age=31536000,public,immutable")],
    prune=False
)

s3deploy.BucketDeployment(self, "HTMLBucketDeployment",
    sources=[s3deploy.Source.asset("./website", exclude=["*", "!index.html"])],
    destination_bucket=destination_bucket,
    cache_control=[s3deploy.CacheControl.from_string("max-age=0,no-cache,no-store,must-revalidate")],
    prune=False
)

Exclude and Include Filters

There are two points at which filters are evaluated in a deployment: asset bundling and the actual deployment. If you simply want to exclude files in the asset bundling process, you should leverage the exclude property of AssetOptions when defining your source:

# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "HTMLBucketDeployment",
    sources=[s3deploy.Source.asset("./website", exclude=["*", "!index.html"])],
    destination_bucket=destination_bucket
)

If you want to specify filters to be used in the deployment process itself, you can use the exclude and include properties on BucketDeployment. Excluded files will not be deployed to the destination bucket. In addition, if an excluded file already exists in the destination bucket, it will not be deleted, even when the prune option is enabled:

# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployButExcludeSpecificFiles",
    sources=[s3deploy.Source.asset("./my-website")],
    destination_bucket=destination_bucket,
    exclude=["*.txt"]
)

These filters follow the same format that is used for the AWS CLI. See the CLI documentation for information on Using Include and Exclude Filters.
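
For instance, a minimal sketch (the patterns are illustrative) in the spirit of the CLI's --exclude "*" --include "*.html" pattern, deploying only HTML files:

# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployOnlyHtmlFiles",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=destination_bucket,
    exclude=["*"],  # exclude everything...
    include=["*.html"]  # ...then re-include HTML files
)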

Objects metadata

You can specify metadata to be set on all the objects in your deployment. There are two types of metadata in S3: system-defined metadata and user-defined metadata. System-defined metadata has a special purpose; for example, cache-control defines how long an object should be cached. User-defined metadata is not used by S3, and its keys always begin with x-amz-meta- (this prefix is added automatically).

System defined metadata keys include the following:

  • cache-control (--cache-control in aws s3 sync)
  • content-disposition (--content-disposition in aws s3 sync)
  • content-encoding (--content-encoding in aws s3 sync)
  • content-language (--content-language in aws s3 sync)
  • content-type (--content-type in aws s3 sync)
  • expires (--expires in aws s3 sync)
  • x-amz-storage-class (--storage-class in aws s3 sync)
  • x-amz-website-redirect-location (--website-redirect in aws s3 sync)
  • x-amz-server-side-encryption (--sse in aws s3 sync)
  • x-amz-server-side-encryption-aws-kms-key-id (--sse-kms-key-id in aws s3 sync)
  • x-amz-server-side-encryption-customer-algorithm (--sse-c-copy-source in aws s3 sync)
  • x-amz-acl (--acl in aws s3 sync)

You can find more information about system defined metadata keys in S3 PutObject documentation and aws s3 sync documentation.

website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html",
    public_read_access=True
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    destination_key_prefix="web/static",  # optional prefix in destination bucket
    metadata=s3deploy.UserDefinedObjectMetadata(A="1", b="2"),  # user-defined metadata

    # system-defined metadata
    content_type="text/html",
    content_language="en",
    storage_class=s3deploy.StorageClass.INTELLIGENT_TIERING,
    server_side_encryption=s3deploy.ServerSideEncryption.AES_256,
    cache_control=[
        s3deploy.CacheControl.set_public(),
        s3deploy.CacheControl.max_age(Duration.hours(1))
    ],
    access_control=s3.BucketAccessControl.BUCKET_OWNER_FULL_CONTROL
)

CloudFront Invalidation

You can provide a CloudFront distribution and optional paths to invalidate after the bucket deployment finishes.

import aws_cdk.aws_cloudfront as cloudfront
import aws_cdk.aws_cloudfront_origins as origins


bucket = s3.Bucket(self, "Destination")

# Handles buckets whether or not they are configured for website hosting.
distribution = cloudfront.Distribution(self, "Distribution",
    default_behavior=cloudfront.BehaviorOptions(origin=origins.S3Origin(bucket))
)

s3deploy.BucketDeployment(self, "DeployWithInvalidation",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=bucket,
    distribution=distribution,
    distribution_paths=["/images/*.png"]
)

Size Limits

The default memory limit for the deployment resource is 128MiB. If you need to copy larger files, you can use the memoryLimit configuration to increase the size of the AWS Lambda resource handler.

The default ephemeral storage size for the deployment resource is 512MiB. If you need to upload larger files, you may hit this limit. You can use the ephemeralStorageSize configuration to increase the storage size of the AWS Lambda resource handler.

NOTE: a new AWS Lambda handler will be created in your stack for each combination of memory and storage size.
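
A minimal sketch of both settings (the values are illustrative, and Size is assumed to be imported from aws_cdk.core):

# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployLargeFiles",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=destination_bucket,
    memory_limit=1024,  # MiB of memory for the Lambda handler
    ephemeral_storage_size=Size.gibibytes(2)  # /tmp storage for the Lambda handler
)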

EFS Support

If your workflow needs more disk space than the default 512 MB, you can attach an EFS file system to the underlying Lambda function. To enable EFS support, set the useEfs and vpc props on BucketDeployment.

Check the sample usage below. Note that creating the VPC inline may cause stack deletion failures; it is shown this way for simplicity. To avoid this, keep your network infrastructure (the VPC) in a separate stack and pass it in as a prop.

# destination_bucket: s3.Bucket
# vpc: ec2.Vpc


s3deploy.BucketDeployment(self, "DeployMeWithEfsStorage",
    sources=[s3deploy.Source.asset("./my-website")],
    destination_bucket=destination_bucket,
    destination_key_prefix="efs/",
    use_efs=True,
    vpc=vpc,
    retain_on_delete=False
)

Data with deploy-time values

The content passed to Source.data() or Source.json_data() can include references that will get resolved only during deployment.

For example:

import aws_cdk.aws_sns as sns

# destination_bucket: s3.Bucket
# topic: sns.Topic


app_config = {
    "topic_arn": topic.topic_arn,
    "base_url": "https://my-endpoint"
}

s3deploy.BucketDeployment(self, "BucketDeployment",
    sources=[s3deploy.Source.json_data("config.json", app_config)],
    destination_bucket=destination_bucket
)

The value in topic.topic_arn is a deploy-time value. It only gets resolved during deployment: a marker is placed in the generated source file and replaced with the actual value when the file is deployed to the destination.

Notes

  • This library uses an AWS CloudFormation custom resource which is about 10MiB in size. The code of this resource is bundled with this library.

  • AWS Lambda execution time is limited to 15 minutes, which limits the amount of data that can be deployed into the bucket in a single run.

  • When the BucketDeployment is removed from the stack, the contents are retained in the destination bucket (#952).

  • If you are using s3deploy.Source.bucket() to take the file source from another bucket, the deployed files will only be updated if the key (file name) of the file in the source bucket changes. Mutating the file in place is not enough: the custom resource will simply not run if the properties don't change (see the sketch after this list).

    • If you use assets (s3deploy.Source.asset()) you don't need to worry about this: the asset system will make sure that if the files have changed, the file name is unique and the deployment will run.
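
For example, a minimal sketch (bucket variables and the object key are illustrative) that embeds a version in the key so that a new key triggers a new deployment:

# source_bucket: s3.Bucket
# destination_bucket: s3.Bucket

s3deploy.BucketDeployment(self, "DeployFromOtherBucket",
    # bump the key (e.g. to a new version suffix) whenever the content changes
    sources=[s3deploy.Source.bucket(source_bucket, "website-v2.zip")],
    destination_bucket=destination_bucket
)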

Development

The custom resource is implemented in Python 3.7 in order to be able to leverage the AWS CLI for "aws s3 sync". The code is under lib/lambda and unit tests are under test/lambda.

This package requires Python 3.7 during build time in order to create the custom resource Lambda bundle and test it. It also relies on a few bash scripts, so it might be tricky to build on Windows.

Roadmap

  • Support "blue/green" deployments (#954)

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

aws-cdk.aws-s3-deployment-1.204.0.tar.gz (110.4 kB)


Built Distribution

aws_cdk.aws_s3_deployment-1.204.0-py3-none-any.whl (109.0 kB)


File details

Details for the file aws-cdk.aws-s3-deployment-1.204.0.tar.gz.

File metadata

File hashes

Hashes for aws-cdk.aws-s3-deployment-1.204.0.tar.gz

  • SHA256: 049c127e5b74563685361a95b9ccf8d70884ccc1464c2e5a23597479849c5af7
  • MD5: 7d46805ab90cf58a4d9d607e86083789
  • BLAKE2b-256: dec8c4c5d9e120218db237ff8fdcb23420cebbad2b74d5f573a8ee44d015a98e

See more details on using hashes here.

File details

Details for the file aws_cdk.aws_s3_deployment-1.204.0-py3-none-any.whl.

File metadata

File hashes

Hashes for aws_cdk.aws_s3_deployment-1.204.0-py3-none-any.whl

  • SHA256: 9550432929291ad77c8b4734a8c1cc1c0105725eed5a463d01226d50dffabd63
  • MD5: 5f0c6af270e4decce9f2915d0e52529a
  • BLAKE2b-256: 1a27a45ff318553126fe4b2cdd0b7653742bd0e5b3d76164fae6ee51834bb1c9

See more details on using hashes here.
