The CDK Construct Library for AWS::S3
Amazon S3 Construct Library
Define an unencrypted S3 bucket:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Bucket(self, "MyFirstBucket")
Bucket constructs expose the following deploy-time attributes:
- bucketArn - the ARN of the bucket (i.e. arn:aws:s3:::bucket_name)
- bucketName - the name of the bucket (i.e. bucket_name)
- bucketWebsiteUrl - the website URL of the bucket (i.e. http://bucket_name.s3-website-us-west-1.amazonaws.com)
- bucketDomainName - the URL of the bucket (i.e. bucket_name.s3.amazonaws.com)
- bucketDualStackDomainName - the dual-stack URL of the bucket (i.e. bucket_name.s3.dualstack.eu-west-1.amazonaws.com)
- bucketRegionalDomainName - the regional URL of the bucket (i.e. bucket_name.s3.eu-west-1.amazonaws.com)
- arnForObjects(pattern) - the ARN of an object or objects within the bucket (i.e. arn:aws:s3:::bucket_name/exampleobject.png or arn:aws:s3:::bucket_name/Development/*)
- urlForObject(key) - the HTTP URL of an object within the bucket (i.e. https://s3.cn-north-1.amazonaws.com.cn/china-bucket/mykey)
- virtualHostedUrlForObject(key) - the virtual-hosted style HTTP URL of an object within the bucket (i.e. https://china-bucket.s3.cn-north-1.amazonaws.com.cn/mykey)
- s3UrlForObject(key) - the S3 URL of an object within the bucket (i.e. s3://bucket/mykey)
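In the Python binding, these attributes and methods use snake_case names. A minimal sketch of reading them (the object keys are illustrative, and the values remain unresolved tokens until deployment):
bucket = Bucket(self, "MyFirstBucket")
bucket_arn = bucket.bucket_arn                            # arn:aws:s3:::bucket_name
objects_arn = bucket.arn_for_objects("Development/*")     # ARN covering a key pattern
object_url = bucket.url_for_object("exampleobject.png")   # HTTPS URL of an object
s3_url = bucket.s3_url_for_object("exampleobject.png")    # s3://bucket/exampleobject.png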
Encryption
Define a KMS-encrypted bucket:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyEncryptedBucket",
encryption=BucketEncryption.KMS
)
# you can access the encryption key:
assert isinstance(bucket.encryption_key, kms.Key)
You can also supply your own key:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
my_kms_key = kms.Key(self, "MyKey")
bucket = Bucket(self, "MyEncryptedBucket",
encryption=BucketEncryption.KMS,
encryption_key=my_kms_key
)
assert bucket.encryption_key == my_kms_key
Enable KMS-SSE encryption via S3 Bucket Keys:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyEncryptedBucket",
encryption=BucketEncryption.KMS,
bucket_key_enabled=True
)
assert bucket.bucket_key_enabled is True
Use BucketEncryption.KMS_MANAGED to use the S3 master KMS key:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "Buck",
encryption=BucketEncryption.KMS_MANAGED
)
assert bucket.encryption_key is None
Permissions
A bucket policy will be automatically created for the bucket upon the first call to addToResourcePolicy(statement):
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyBucket")
bucket.add_to_resource_policy(iam.PolicyStatement(
    actions=["s3:GetObject"],
    resources=[bucket.arn_for_objects("file.txt")],
    principals=[iam.AccountRootPrincipal()]
))
The bucket policy can be directly accessed after creation to add statements or adjust the removal policy.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket.policy.apply_removal_policy(RemovalPolicy.RETAIN)
Most of the time, you won't have to manipulate the bucket policy directly. Instead, buckets have "grant" methods that give prepackaged sets of permissions to other resources. For example:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
lambda_ = lambda_.Function(self, "Lambda")
bucket = Bucket(self, "MyBucket")
bucket.grant_read_write(lambda_)
Will give the Lambda's execution role permissions to read and write from the bucket.
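Other grant methods follow the same pattern and can scope the access more narrowly. A minimal sketch granting read, upload, and delete permissions to a hypothetical IAM role (the role definition is only illustrative):
import aws_cdk.aws_iam as iam

role = iam.Role(self, "ProcessingRole",
    assumed_by=iam.ServicePrincipal("ec2.amazonaws.com")  # illustrative trust policy
)
bucket.grant_read(role)    # read access to the bucket and its objects
bucket.grant_put(role)     # permission to upload objects
bucket.grant_delete(role)  # permission to delete objects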
AWS Foundational Security Best Practices
Enforcing SSL
To require that all requests use Secure Sockets Layer (SSL):
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "Bucket",
enforce_ssl=True
)
Sharing buckets between stacks
To use a bucket in a different stack in the same CDK application, pass the object to the other stack:
# Example automatically generated. See https://github.com/aws/jsii/issues/826
#
# Stack that defines the bucket
#
class Producer(cdk.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        bucket = s3.Bucket(self, "MyBucket",
            removal_policy=cdk.RemovalPolicy.DESTROY
        )
        self.my_bucket = bucket

#
# Stack that consumes the bucket
#
class Consumer(cdk.Stack):
    def __init__(self, scope, id, *, user_bucket, **kwargs):
        super().__init__(scope, id, **kwargs)

        user = iam.User(self, "MyUser")
        user_bucket.grant_read_write(user)

producer = Producer(app, "ProducerStack")
Consumer(app, "ConsumerStack", user_bucket=producer.my_bucket)
Importing existing buckets
To import an existing bucket into your CDK application, use the Bucket.fromBucketAttributes
factory method. This method accepts BucketAttributes
which describes the properties of an already
existing bucket:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket.from_bucket_attributes(self, "ImportedBucket",
bucket_arn="arn:aws:s3:::my-bucket"
)
# now you can just call methods on the bucket
bucket.grant_read_write(user)
Alternatively, short-hand factories are available as Bucket.fromBucketName and Bucket.fromBucketArn, which will derive all bucket attributes from the bucket name or ARN respectively:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
by_name = Bucket.from_bucket_name(self, "BucketByName", "my-bucket")
by_arn = Bucket.from_bucket_arn(self, "BucketByArn", "arn:aws:s3:::my-bucket")
The bucket's region defaults to the current stack's region, but can also be explicitly set in cases where one of the bucket's regional properties needs to contain the correct values.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
my_cross_region_bucket = Bucket.from_bucket_attributes(self, "CrossRegionImport",
bucket_arn="arn:aws:s3:::my-bucket",
region="us-east-1"
)
Bucket Notifications
The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket, as described under S3 Bucket Notifications in the S3 Developer Guide.
To subscribe for bucket notifications, use the bucket.addEventNotification
method. The
bucket.addObjectCreatedNotification
and bucket.addObjectRemovedNotification
can also be used for
these common use cases.
The following example will subscribe an SNS topic to be notified of all s3:ObjectCreated:*
events:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
import aws_cdk.aws_s3_notifications as s3n
my_topic = sns.Topic(self, "MyTopic")
bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.SnsDestination(my_topic))
This call will also ensure that the topic policy can accept notifications for this specific bucket.
Supported S3 notification targets are exposed by the @aws-cdk/aws-s3-notifications
package.
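The convenience methods mentioned above take the same destination and filter arguments as addEventNotification. A minimal sketch using an SQS queue as the destination (the queue name and key prefix are illustrative):
import aws_cdk.aws_s3_notifications as s3n
import aws_cdk.aws_sqs as sqs

queue = sqs.Queue(self, "NotificationQueue")
bucket.add_object_created_notification(s3n.SqsDestination(queue))
bucket.add_object_removed_notification(s3n.SqsDestination(queue),
    s3.NotificationKeyFilter(prefix="uploads/"))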
It is also possible to specify S3 object key filters when subscribing. The following example will notify myQueue when objects prefixed with foo/ and with the .jpg suffix are removed from the bucket.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket.add_event_notification(s3.EventType.OBJECT_REMOVED,
    s3n.SqsDestination(my_queue), s3.NotificationKeyFilter(prefix="foo/", suffix=".jpg"))
Block Public Access
Use blockPublicAccess
to specify block public access settings on the bucket.
Enable all block public access settings:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyBlockedBucket",
block_public_access=BlockPublicAccess.BLOCK_ALL
)
Block and ignore public ACLs:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyBlockedBucket",
block_public_access=BlockPublicAccess.BLOCK_ACLS
)
Alternatively, specify the settings manually:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyBlockedBucket",
block_public_access=BlockPublicAccess(block_public_policy=True)
)
When blockPublicPolicy is set to true, grantPublicRead() throws an error.
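By contrast, a bucket that only blocks ACL-based public access can still be granted public read access through its bucket policy. A minimal sketch (the bucket name is illustrative):
assets_bucket = Bucket(self, "PublicAssetsBucket",
    block_public_access=BlockPublicAccess.BLOCK_ACLS  # blockPublicPolicy is not enabled
)
assets_bucket.grant_public_read()  # adds a bucket policy statement allowing everyone to read objects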
Logging configuration
Use serverAccessLogsBucket
to describe where server access logs are to be stored.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
access_logs_bucket = Bucket(self, "AccessLogsBucket")
bucket = Bucket(self, "MyBucket",
server_access_logs_bucket=access_logs_bucket
)
It's also possible to specify a prefix for Amazon S3 to assign to all log object keys.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyBucket",
server_access_logs_bucket=access_logs_bucket,
server_access_logs_prefix="logs"
)
S3 Inventory
An inventory contains a list of the objects in the source bucket and metadata for each object. The inventory lists are stored in the destination bucket as a CSV file compressed with GZIP, as an Apache optimized row columnar (ORC) file compressed with ZLIB, or as an Apache Parquet (Parquet) file compressed with Snappy.
You can configure multiple inventory lists for a bucket. You can configure what object metadata to include in the inventory, whether to list all object versions or only current versions, where to store the inventory list file output, and whether to generate the inventory on a daily or weekly basis.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
inventory_bucket = s3.Bucket(self, "InventoryBucket")
data_bucket = s3.Bucket(self, "DataBucket",
inventories=[{
"frequency": s3.InventoryFrequency.DAILY,
"include_object_versions": s3.InventoryObjectVersion.CURRENT,
"destination": {
"bucket": inventory_bucket
}
}, {
"frequency": s3.InventoryFrequency.WEEKLY,
"include_object_versions": s3.InventoryObjectVersion.ALL,
"destination": {
"bucket": inventory_bucket,
"prefix": "with-all-versions"
}
}
]
)
If the destination bucket is created as part of the same CDK application, the necessary permissions will be automatically added to the bucket policy.
However, if you use an imported bucket (i.e. Bucket.fromXXX()), you'll have to make sure it contains the following policy document:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InventoryAndAnalyticsExamplePolicy",
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": ["arn:aws:s3:::destinationBucket/*"]
    }
  ]
}
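For reference, the statement above corresponds to the following CDK policy statement; this is only a sketch of the equivalent (when the destination bucket is defined in the same CDK app it is added automatically, and it assumes aws_cdk.aws_iam is imported as iam):
inventory_bucket.add_to_resource_policy(iam.PolicyStatement(
    sid="InventoryAndAnalyticsExamplePolicy",
    effect=iam.Effect.ALLOW,
    principals=[iam.ServicePrincipal("s3.amazonaws.com")],
    actions=["s3:PutObject"],
    resources=[inventory_bucket.arn_for_objects("*")]
))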
Website redirection
You can use one of the following two properties to specify the bucket redirection policy. Please note that these settings cannot both be applied to the same bucket.
Static redirection
You can statically redirect to a given bucket URL or any other host name with websiteRedirect:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyRedirectedBucket",
website_redirect={"host_name": "www.example.com"}
)
Routing rules
Alternatively, you can define multiple websiteRoutingRules to express complex, conditional redirections:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyRedirectedBucket",
website_routing_rules=[{
"host_name": "www.example.com",
"http_redirect_code": "302",
"protocol": RedirectProtocol.HTTPS,
"replace_key": ReplaceKey.prefix_with("test/"),
"condition": {
"http_error_code_returned_equals": "200",
"key_prefix_equals": "prefix"
}
}]
)
Filling the bucket as part of deployment
To put files into a bucket as part of a deployment (for example, to host a
website), see the @aws-cdk/aws-s3-deployment
package, which provides a
resource that can do just that.
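A minimal sketch of that approach, assuming a local ./website-dist folder containing the files to upload (the bucket and folder names are illustrative):
import aws_cdk.aws_s3_deployment as s3deploy

website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html"
)
s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket
)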
The URL for objects
S3 provides two types of URLs for accessing objects via HTTP(S): path-style and virtual-hosted-style. Path-style URLs are the classic form and are being deprecated, so we recommend virtual-hosted-style URLs for newly created buckets.
You can generate both types:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket.url_for_object("objectname")# Path-Style URL
bucket.virtual_hosted_url_for_object("objectname")# Virtual Hosted-Style URL
bucket.virtual_hosted_url_for_object("objectname", regional=False)
Object Ownership
You can use one of the following two settings to specify the bucket's object ownership.
Object writer
The uploading account will own the object.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
s3.Bucket(self, "MyBucket",
object_ownership=s3.ObjectOwnership.OBJECT_WRITER
)
Bucket owner preferred
The bucket owner will own the object if the object is uploaded with the bucket-owner-full-control canned ACL. Without this setting and canned ACL, the object is uploaded and remains owned by the uploading account.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
s3.Bucket(self, "MyBucket",
object_ownership=s3.ObjectOwnership.BUCKET_OWNER_PREFERRED
)
Bucket deletion
When a bucket is removed from a stack (or the stack is deleted), the S3
bucket will be removed according to its removal policy (which by default will
simply orphan the bucket and leave it in your AWS account). If the removal
policy is set to RemovalPolicy.DESTROY, the bucket will be deleted as long
as it does not contain any objects.
To override this and force all objects to get deleted during bucket deletion,
enable the autoDeleteObjects option.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
bucket = Bucket(self, "MyTempFileBucket",
removal_policy=RemovalPolicy.DESTROY,
auto_delete_objects=True
)