
Constructs for deploying contents to S3 buckets

Project description

AWS S3 Deployment Construct Library

---

Stability: Experimental

This is a developer preview (public beta) module. Releases might lack important features and might have future breaking changes.

This API is still under active development and subject to non-backward compatible changes or removal in any future version. Use of the API is not recommended in production environments. Experimental APIs are not subject to the Semantic Versioning model.



This library allows populating an S3 bucket with the contents of .zip files from other S3 buckets or from local disk.

The following example defines a publicly accessible S3 bucket with web hosting enabled and populates it from a local directory on disk.

# Example may have issues. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_s3 as s3
import aws_cdk.aws_s3_deployment as s3deploy

website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html",
    public_read_access=True
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    destination_key_prefix="web/static"
)

This is what happens under the hood:

  1. When this stack is deployed (either via cdk deploy or via CI/CD), the contents of the local website-dist directory will be archived and uploaded to an intermediary assets bucket. If there is more than one source, they will be individually uploaded.
  2. The BucketDeployment construct synthesizes a custom CloudFormation resource of type Custom::CDKBucketDeployment into the template. The source bucket/key is set to point to the assets bucket.
  3. The custom resource downloads the .zip archive, extracts it and issues aws s3 sync --delete against the destination bucket (in this case websiteBucket). If there is more than one source, the sources will be downloaded and merged pre-deployment at this step.
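Step 3 can be sketched in plain Python. This is an illustration of the merge-then-sync behavior only, not the actual handler code: every source archive is extracted into one staging directory (later sources overwrite earlier ones), and a local directory copy stands in for aws s3 sync --delete.

```python
# Illustrative sketch of step 3; not the library's handler code.
import os
import shutil
import tempfile
import zipfile

def merge_and_sync(source_zips, destination_dir):
    staging = tempfile.mkdtemp()
    try:
        for zip_path in source_zips:  # merge all sources
            with zipfile.ZipFile(zip_path) as zf:
                zf.extractall(staging)
        # `--delete` semantics: the destination mirrors the staging dir
        if os.path.exists(destination_dir):
            shutil.rmtree(destination_dir)
        shutil.copytree(staging, destination_dir)
    finally:
        shutil.rmtree(staging)
```

Running it twice with different source lists shows the --delete semantics: files that are no longer present in any source disappear from the destination.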

Supported sources

The following source types are supported for bucket deployments:

  • Local .zip file: s3deploy.Source.asset('/path/to/local/file.zip')
  • Local directory: s3deploy.Source.asset('/path/to/local/directory')
  • Another bucket: s3deploy.Source.bucket(bucket, zipObjectKey)

Retain on Delete

By default, the contents of the destination bucket will be deleted when the BucketDeployment resource is removed from the stack or when the destination is changed. You can use the option retainOnDelete: true to disable this behavior, in which case the contents will be retained.
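For example, the deployment from the earlier example can be made to keep its objects like this (a fragment, assuming the website_bucket and s3deploy import from above; requires the aws-cdk 1.x Python packages):

```python
# Keep the deployed objects even if this resource is removed from the
# stack or the destination changes.
s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    retain_on_delete=True
)
```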

Objects metadata

You can specify metadata to be set on all the objects in your deployment. There are two types of metadata in S3: system-defined metadata and user-defined metadata. System-defined metadata has a special purpose; for example, cache-control defines how long to keep an object cached. User-defined metadata is not interpreted by S3, and its keys always begin with x-amz-meta- (the prefix is added automatically if it is not provided).

System-defined metadata keys include the following:

  • cache-control
  • content-disposition
  • content-encoding
  • content-language
  • content-type
  • expires
  • server-side-encryption
  • storage-class
  • website-redirect-location
  • ssekms-key-id
  • sse-customer-algorithm

# Example may have issues. See https://github.com/aws/jsii/issues/826
import aws_cdk.core as cdk
import aws_cdk.aws_s3 as s3
import aws_cdk.aws_s3_deployment as s3deploy
from aws_cdk.aws_s3_deployment import CacheControl, ServerSideEncryption, StorageClass

website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html",
    public_read_access=True
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    destination_key_prefix="web/static",  # optional prefix in destination bucket
    user_metadata={"A": "1", "b": "2"},  # user-defined metadata

    # system-defined metadata
    content_type="text/html",
    content_language="en",
    storage_class=StorageClass.INTELLIGENT_TIERING,
    server_side_encryption=ServerSideEncryption.AES_256,
    cache_control=[CacheControl.set_public(), CacheControl.max_age(cdk.Duration.hours(1))]
)
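The user-metadata prefixing described above can be sketched as follows. This is an illustration of the behavior, not the library's implementation; S3 stores user-defined metadata under the x-amz-meta- prefix and returns the keys lowercased.

```python
# Illustration of how user-defined metadata keys gain the S3 prefix.
PREFIX = "x-amz-meta-"

def prefix_user_metadata(metadata):
    """Return a copy of `metadata` in which every key carries the prefix."""
    return {
        key if key.lower().startswith(PREFIX) else PREFIX + key.lower(): value
        for key, value in metadata.items()
    }

print(prefix_user_metadata({"A": "1", "b": "2"}))
# {'x-amz-meta-a': '1', 'x-amz-meta-b': '2'}
```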

CloudFront Invalidation

You can provide a CloudFront distribution and optional paths to invalidate after the bucket deployment finishes.

# Example may have issues. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_cloudfront as cloudfront
import aws_cdk.aws_s3 as s3
import aws_cdk.aws_s3_deployment as s3deploy

bucket = s3.Bucket(self, "Destination")

distribution = cloudfront.CloudFrontWebDistribution(self, "Distribution",
    origin_configs=[{
        "s3_origin_source": {
            "s3_bucket_source": bucket
        },
        "behaviors": [{"is_default_behavior": True}]
    }]
)

s3deploy.BucketDeployment(self, "DeployWithInvalidation",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=bucket,
    distribution=distribution,
    distribution_paths=["/images/*.png"]
)

Memory Limit

The default memory limit for the deployment resource is 128MiB. If you need to copy larger files, you can use the memoryLimit configuration to increase the memory allocated to the AWS Lambda function that handles the deployment.

NOTE: a new AWS Lambda handler will be created in your stack for each memory limit configuration.
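For example, a deployment with a larger handler might be requested like this (a fragment against the CDK 1.x Python API, assuming the website_bucket and s3deploy import from the earlier examples):

```python
# Request a 512MiB handler for large deployments. A separate Lambda
# function is synthesized in the stack for this memory size.
s3deploy.BucketDeployment(self, "DeployLargeFiles",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    memory_limit=512  # MiB
)
```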

Notes

  • This library uses an AWS CloudFormation custom resource whose bundled code is about 10MiB in size. The code of this resource is shipped with this library.
  • AWS Lambda execution time is limited to 15 minutes, which bounds the amount of data that can be deployed into the bucket in a single operation.
  • Unless retainOnDelete: true is set, the contents of the destination bucket are deleted when the BucketDeployment resource is removed from the stack (#952).
  • Bucket deployment only happens during stack create/update. If you wish to update the contents of the destination, you will need to change the source S3 key (or bucket) so that the resource is updated; this is in line with best practices. If you use local disk assets, this happens automatically whenever you modify the asset, since the S3 key is based on a hash of the asset contents.
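The last note can be illustrated with a simplified content fingerprint for a directory: because the asset's S3 key embeds a hash of its contents, editing any file changes the key and therefore updates the custom resource. The hashing scheme below is illustrative only; the CDK's actual asset-hashing algorithm differs in detail.

```python
# Simplified directory fingerprint; not the CDK's exact algorithm.
import hashlib
import os

def asset_fingerprint(directory):
    digest = hashlib.sha256()
    for root, dirs, files in os.walk(directory):
        dirs.sort()  # walk subdirectories in a deterministic order
        for name in sorted(files):
            path = os.path.join(root, name)
            digest.update(os.path.relpath(path, directory).encode())
            with open(path, "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()
```

Changing a single byte in any file under the directory yields a different fingerprint, which is what forces the redeployment.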

Development

The custom resource is implemented in Python 3.6 so that it can leverage the AWS CLI for aws s3 sync. The code is under lambda/src and unit tests are under lambda/test.

Building this package requires Python 3.6 in order to create and test the custom resource Lambda bundle. It also relies on a few bash scripts, so it might be tricky to build on Windows.

Roadmap

  • Support "progressive" mode (no --delete) (#953)
  • Support "blue/green" deployments (#954)
