Continuous Delivery of CDK applications
CDK Pipelines
The APIs of higher level constructs in this module are in developer preview before they become stable. We will only make breaking changes to address unforeseen API issues. Therefore, these APIs are not subject to Semantic Versioning, and breaking changes will be announced in release notes. This means that while you may use them, you may need to update your source code when upgrading to a newer version of this package.
A construct library for painless Continuous Delivery of CDK applications.
At a glance
Defining a pipeline for your application is as simple as defining a subclass of Stage, and calling pipeline.addApplicationStage() with instances of that class. Deploying to a different account or region looks exactly the same; the CDK Pipelines library takes care of the details.

(Note that you have to bootstrap all environments before the following code will work; see the section CDK Environment Bootstrapping below.)
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# The stacks for our app are defined in my-stacks.ts. The internals of these
# stacks aren't important, except that DatabaseStack exposes an attribute
# "table" for a database table it defines, and ComputeStack accepts a reference
# to this table in its properties.
from ...lib.my_stacks import DatabaseStack, ComputeStack

from aws_cdk.core import Construct, Environment, Stage, Stack, StackProps, StageProps
from aws_cdk.pipelines import CdkPipeline
import aws_cdk.aws_codepipeline as codepipeline

#
# Your application
#
# May consist of one or more Stacks (here, two)
#
# By declaring our DatabaseStack and our ComputeStack inside a Stage,
# we make sure they are deployed together, or not at all.
#
class MyApplication(Stage):
    def __init__(self, scope, id, *, env=None, outdir=None):
        super().__init__(scope, id, env=env, outdir=outdir)

        db_stack = DatabaseStack(self, "Database")
        ComputeStack(self, "Compute",
            table=db_stack.table
        )

#
# Stack to hold the pipeline
#
class MyPipelineStack(Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        source_artifact = codepipeline.Artifact()
        cloud_assembly_artifact = codepipeline.Artifact()

        pipeline = CdkPipeline(self, "Pipeline",
            cloud_assembly_artifact=cloud_assembly_artifact
            # ...source and build information here (see "Defining the Pipeline" below)
        )

        # Do this as many times as necessary with any account and region.
        # Account and region may differ from the pipeline's.
        pipeline.add_application_stage(MyApplication(self, "Prod",
            env=Environment(
                account="123456789012",
                region="eu-west-1"
            )
        ))
The pipeline is self-mutating, which means that if you add new application stages in the source code, or new stacks to MyApplication, the pipeline will automatically reconfigure itself to deploy those new stages and stacks.
CDK Versioning
This library uses prerelease features of the CDK framework, which can be enabled by adding the following to cdk.json:
{
// ...
"context": {
"@aws-cdk/core:newStyleStackSynthesis": true
}
}
A note on cost
By default, the CdkPipeline construct creates an AWS Key Management Service (AWS KMS) Customer Master Key (CMK) for you to encrypt the artifacts in the artifact bucket, which incurs a cost of $1/month. This default configuration is necessary to allow cross-account deployments.

If you do not intend to perform cross-account deployments, you can disable the creation of the Customer Master Keys by passing crossAccountKeys: false when defining the Pipeline:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
pipeline = pipelines.CdkPipeline(self, "Pipeline",
    cross_account_keys=False
)
Defining the Pipeline (Source and Synth)
The pipeline is defined by instantiating CdkPipeline in a Stack. This defines the source location for the pipeline as well as the build commands. For example, the following defines a pipeline whose source is stored in a GitHub repository, and uses NPM to build. The Pipeline will be provisioned in account 111111111111 and region eu-west-1:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
class MyPipelineStack(Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        source_artifact = codepipeline.Artifact()
        cloud_assembly_artifact = codepipeline.Artifact()

        pipeline = CdkPipeline(self, "Pipeline",
            pipeline_name="MyAppPipeline",
            cloud_assembly_artifact=cloud_assembly_artifact,

            source_action=codepipeline_actions.GitHubSourceAction(
                action_name="GitHub",
                output=source_artifact,
                oauth_token=SecretValue.secrets_manager("GITHUB_TOKEN_NAME"),
                # Replace these with your actual GitHub project name
                owner="OWNER",
                repo="REPO",
                branch="main"
            ),

            synth_action=SimpleSynthAction.standard_npm_synth(
                source_artifact=source_artifact,
                cloud_assembly_artifact=cloud_assembly_artifact,

                # Optionally specify a VPC in which the action runs
                vpc=ec2.Vpc(self, "NpmSynthVpc"),

                # Use this if you need a build step (if you're not using ts-node
                # or if you have TypeScript Lambdas that need to be compiled).
                build_command="npm run build"
            )
        )

app = App()
MyPipelineStack(app, "PipelineStack",
    env=Environment(
        account="111111111111",
        region="eu-west-1"
    )
)
If you prefer more control over the underlying CodePipeline object, you can create one yourself, including custom Source and Build stages:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
code_pipeline = cp.Pipeline(pipeline_stack, "CodePipeline",
    stages=[{
        "stage_name": "CustomSource",
        "actions": [...]
    }, {
        "stage_name": "CustomBuild",
        "actions": [...]
    }]
)

app = App()
cdk_pipeline = CdkPipeline(app, "CdkPipeline",
    code_pipeline=code_pipeline,
    cloud_assembly_artifact=cloud_assembly_artifact
)
If you use assets for files or Docker images, every asset will get its own upload action during the asset stage. By setting the singlePublisherPerType property to true, only one action for files and one action for Docker images is created that handles all assets of the respective type.

If you need to run commands to set up proxies, mirrors, etc., you can supply them using the assetPreInstallCommands property.
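For instance, a minimal sketch combining both options (the registry URL below is a hypothetical mirror):

pipeline = CdkPipeline(self, "Pipeline",
    # ...
    # One publishing action per asset type instead of one per asset
    single_publisher_per_type=True,
    # Commands to run before assets are installed/published
    asset_pre_install_commands=[
        "npm config set registry https://npm-mirror.example.com"
    ]
)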
Initial pipeline deployment
You provision this pipeline by making sure the target environment has been bootstrapped (see below), and then deploying the PipelineStack once. Afterwards, the pipeline will keep itself up-to-date.
Important: be sure to git commit and git push before deploying the Pipeline stack using cdk deploy! The reason is that the pipeline will start deploying and self-mutating right away based on the sources in the repository, so the sources it finds in there should be the ones you want it to find.
Run the following commands to get the pipeline going:
$ git commit -a
$ git push
$ cdk deploy PipelineStack
Administrative permissions to the account are only necessary up until this point. We recommend you shed access to these credentials after doing this.
Sources
Any of the regular sources from the @aws-cdk/aws-codepipeline-actions module can be used.
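For example, a CodeCommit source action might look like this sketch (assuming a stack scope self and the source_artifact shown elsewhere in this README; the repository name is hypothetical):

import aws_cdk.aws_codecommit as codecommit
import aws_cdk.aws_codepipeline_actions as codepipeline_actions

# Look up an existing CodeCommit repository by name
repo = codecommit.Repository.from_repository_name(self, "Repo", "my-repo")

source_action = codepipeline_actions.CodeCommitSourceAction(
    action_name="CodeCommit",
    repository=repo,
    output=source_artifact
)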
Synths
You define how to build and synth the project by specifying a synthAction. This can be any CodePipeline action that produces an artifact with a CDK Cloud Assembly in it (the contents of the cdk.out directory created when cdk synth is called). Pass the output artifact of the synth in the Pipeline's cloudAssemblyArtifact property.

SimpleSynthAction is available for synths that can be performed by running a couple of simple shell commands (install, build, and synth) using AWS CodeBuild. When using these, the source repository does not need to have a buildspec.yml. An example of using SimpleSynthAction to run a Maven build followed by a CDK synth:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
pipeline = CdkPipeline(self, "Pipeline",
# ...
synth_action=SimpleSynthAction(
source_artifact=source_artifact,
cloud_assembly_artifact=cloud_assembly_artifact,
install_commands=["npm install -g aws-cdk"],
build_commands=["mvn package"],
synth_command="cdk synth"
)
)
Available as factory functions on SimpleSynthAction are some common convention-based synths:

- SimpleSynthAction.standardNpmSynth(): build using NPM conventions. Expects a package-lock.json, a cdk.json, and expects the CLI to be a versioned dependency in package.json. Does not perform a build step by default.
- SimpleSynthAction.standardYarnSynth(): build using Yarn conventions. Expects a yarn.lock, a cdk.json, and expects the CLI to be a versioned dependency in package.json. Does not perform a build step by default.
If you need a custom build/synth step that is not covered by SimpleSynthAction, you can always add a custom CodeBuild project and pass a corresponding CodeBuildAction to the pipeline.
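A sketch of what that might look like, assuming the custom project's buildspec runs cdk synth and declares the cdk.out directory as its output artifact (the buildspec file name is an assumption):

import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_codepipeline_actions as codepipeline_actions

# A custom CodeBuild project driven by a buildspec from the source repository
synth_project = codebuild.PipelineProject(self, "CustomSynth",
    build_spec=codebuild.BuildSpec.from_source_filename("buildspec.yml")
)

# Wrap it in a CodeBuildAction and pass it as the pipeline's synthAction
synth_action = codepipeline_actions.CodeBuildAction(
    action_name="Synth",
    project=synth_project,
    input=source_artifact,
    outputs=[cloud_assembly_artifact]
)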
Adding Application Stages
To define an application that can be added to the pipeline integrally, define a subclass of Stage. The Stage can contain one or more stacks which make up your application. If there are dependencies between the stacks, the stacks will automatically be added to the pipeline in the right order. Stacks that don't depend on each other will be deployed in parallel. You can add a dependency relationship between stacks by calling stack1.addDependency(stack2).
Stages take a default env argument which the Stacks inside the Stage will fall back to if no env is defined for them.
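A minimal sketch of both behaviors, using bare Stacks for illustration (the class name is hypothetical):

class MyOrderedApplication(Stage):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Both stacks fall back to the Stage's default env unless they set their own
        stack1 = Stack(self, "Stack1")
        stack2 = Stack(self, "Stack2")

        # Force an explicit ordering: Stack2 deploys only after Stack1
        stack2.add_dependency(stack1)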
An application is added to the pipeline by calling addApplicationStage() with instances of the Stage. The same class can be instantiated and added to the pipeline multiple times to define different stages of your DTAP or multi-region application pipeline:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# Testing stage
pipeline.add_application_stage(MyApplication(self, "Testing",
    env={"account": "111111111111", "region": "eu-west-1"}
))

# Acceptance stage
pipeline.add_application_stage(MyApplication(self, "Acceptance",
    env={"account": "222222222222", "region": "eu-west-1"}
))

# Production stage
pipeline.add_application_stage(MyApplication(self, "Production",
    env={"account": "333333333333", "region": "eu-west-1"}
))
Be aware that adding new stages via addApplicationStage() will automatically add them to the pipeline and deploy the new stacks, but removing them from the pipeline or deleting the pipeline stack will not automatically delete deployed application stacks. You must delete those stacks by hand using the AWS CloudFormation console or the AWS CLI.
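For example, a deployed application stack could be removed with the AWS CLI (the stack name below is hypothetical):

$ aws cloudformation delete-stack --stack-name Production-Database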
More Control
Every Application Stage added by addApplicationStage() will lead to the addition of an individual Pipeline Stage, which is subsequently returned. You can add more actions to the stage by calling addAction() on it. For example:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
testing_stage = pipeline.add_application_stage(MyApplication(self, "Testing",
    env={"account": "111111111111", "region": "eu-west-1"}
))

# Add an action -- in this case, a Manual Approval action
# (for illustration purposes: testingStage.addManualApprovalAction() is a
# convenience shorthand that does the same)
testing_stage.add_action(ManualApprovalAction(
    action_name="ManualApproval",
    run_order=testing_stage.next_sequential_run_order()
))
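The convenience shorthand mentioned in the comment above looks like this:

# Equivalent to the add_action() call in the previous example
testing_stage.add_manual_approval_action(action_name="ManualApproval")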
You can also add more than one Application Stage to one Pipeline Stage. For example:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# Create an empty pipeline stage
testing_stage = pipeline.add_stage("Testing")

# Add two application stages to the same pipeline stage
testing_stage.add_application(MyApplication1(self, "MyApp1",
    env={"account": "111111111111", "region": "eu-west-1"}
))
testing_stage.add_application(MyApplication2(self, "MyApp2",
    env={"account": "111111111111", "region": "eu-west-1"}
))
You can also add a manual approval action, or reserve space for some extra sequential actions, between the 'Prepare' and 'Execute' ChangeSet actions:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
pipeline.add_application_stage(MyApplication(self, "Production"),
manual_approvals=True,
extra_run_order_space=1
)
Adding validations to the pipeline
You can add any type of CodePipeline Action to the pipeline in order to validate the deployments you are performing.
The CDK Pipelines construct library comes with a ShellScriptAction which uses AWS CodeBuild to run a set of shell commands (potentially running a test set that comes with your application, using stack outputs of the deployed stacks).
In its simplest form, adding validation actions looks like this:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
stage = pipeline.add_application_stage(MyApplication())
stage.add_actions(ShellScriptAction(
    action_name="MyValidation",
    commands=["curl -Ssf https://my.webservice.com/"],
    # Optionally specify a VPC if, for example, the service is deployed with a private load balancer
    vpc=vpc,
    # Optionally specify SecurityGroups
    security_groups=security_groups,
    # Optionally specify a BuildEnvironment
    environment=environment
))
Using CloudFormation Stack Outputs in ShellScriptAction
Because many CloudFormation deployments result in the generation of resources with unpredictable names, validations have support for reading back CloudFormation Outputs after a deployment. This makes it possible to pass (for example) the generated URL of a load balancer to the test set.
To use Stack Outputs, expose the CfnOutput object you're interested in, and call pipeline.stackOutput() on it:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
class MyLbApplication(Stage):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        lb_stack = LoadBalancerStack(self, "Stack")

        # Or create this in `LoadBalancerStack` directly
        self.load_balancer_address = CfnOutput(lb_stack, "LbAddress",
            value=f"https://{lb_stack.load_balancer.load_balancer_dns_name}/"
        )

lb_app = MyLbApplication(self, "MyApp",
    env={}
)
stage = pipeline.add_application_stage(lb_app)
stage.add_actions(ShellScriptAction(
    # ...
    use_outputs={
        # When the test is executed, this will make $URL contain the
        # load balancer address.
        "URL": pipeline.stack_output(lb_app.load_balancer_address)
    }
))
Using additional files in Shell Script Actions
As part of a validation, you probably want to run a test suite that's more elaborate than what can be expressed in a couple of lines of shell script. You can bring additional files into the shell script validation by supplying the additionalArtifacts property.

Here are some typical examples for how you might want to bring in additional files from several sources:
- Directory from the source repository
- Additional compiled artifacts from the synth step
Controlling IAM permissions
IAM permissions can be added to the execution role of a ShellScriptAction in two ways.

Either pass additional policy statements in the rolePolicyStatements property:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
ShellScriptAction(
    # ...
    role_policy_statements=[
        iam.PolicyStatement(
            actions=["s3:GetObject"],
            resources=["*"]
        )
    ]
)
The Action can also be used as a Grantable after having been added to a Pipeline:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
action = ShellScriptAction(
    # ...
)
pipeline.add_stage("Test").add_actions(action)
bucket.grant_read(action)
Additional files from the source repository
Bringing in additional files from the source repository is appropriate if the files in the source repository are directly usable in the test (for example, if they are executable shell scripts themselves). Pass the sourceArtifact:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
source_artifact = codepipeline.Artifact()

pipeline = CdkPipeline(self, "Pipeline",
    # ...
)

validation_action = ShellScriptAction(
    action_name="TestUsingSourceArtifact",
    additional_artifacts=[source_artifact],
    # 'test.sh' comes from the source repository
    commands=["./test.sh"]
)
Additional files from the synth step
Getting the additional files from the synth step is appropriate if your tests need the compilation step that is done as part of synthesis.

On the synthesis step, specify additionalArtifacts to package additional subdirectories into artifacts, and use the same artifact in the ShellScriptAction's additionalArtifacts:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
# If you are using additional output artifacts from the synth step,
# they must be named.
cloud_assembly_artifact = codepipeline.Artifact("CloudAsm")
integ_tests_artifact = codepipeline.Artifact("IntegTests")

pipeline = CdkPipeline(self, "Pipeline",
    synth_action=SimpleSynthAction.standard_npm_synth(
        source_artifact=source_artifact,
        cloud_assembly_artifact=cloud_assembly_artifact,
        build_commands=["npm run build"],
        additional_artifacts=[{
            "directory": "test",
            "artifact": integ_tests_artifact
        }]
    )
)

validation_action = ShellScriptAction(
    action_name="TestUsingBuildArtifact",
    additional_artifacts=[integ_tests_artifact],
    # 'test.js' was produced from 'test/test.ts' during the synth step
    commands=["node ./test.js"]
)
Add Additional permissions to the CodeBuild Project Role for building and synthesizing
You can customize the role permissions used by the CodeBuild project so it has access to the needed resources, e.g. adding CodeArtifact repository permissions so npm pulls packages from your CodeArtifact repository instead of the public npm registry.
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
class MyPipelineStack(Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        pipeline = CdkPipeline(self, "Pipeline",
            # ...
            synth_action=SimpleSynthAction.standard_npm_synth(
                source_artifact=source_artifact,
                cloud_assembly_artifact=cloud_assembly_artifact,

                # Use this to customize the permissions required for the build
                # and synth
                role_policy_statements=[
                    PolicyStatement(
                        actions=["codeartifact:*", "sts:GetServiceBearerToken"],
                        resources=["arn:codeartifact:repo:arn"]
                    )
                ],

                # Then you can log in to the CodeArtifact repository, and npm
                # will now pull packages from your repository.
                # Note the codeartifact login command requires more params to work.
                build_commands=[
                    "aws codeartifact login --tool npm",
                    "npm run build"
                ]
            )
        )
Developing the pipeline
The self-mutation feature of the CdkPipeline might at times get in the way of the pipeline development workflow. Each change to the pipeline must be pushed to git; otherwise, after the pipeline was updated using cdk deploy, it will automatically revert to the state found in git.

To make development more convenient, the self-mutation feature can be turned off temporarily by passing the selfMutating: false property:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
pipeline = CdkPipeline(self, "Pipeline",
self_mutating=False, ...
)
CDK Environment Bootstrapping
An environment is an (account, region) pair where you want to deploy a CDK stack (see Environments in the CDK Developer Guide). In a Continuous Deployment pipeline, there are at least two environments involved: the environment where the pipeline is provisioned, and the environment where you want to deploy the application (or different stages of the application). These can be the same, though best practices recommend you isolate your different application stages from each other in different AWS accounts or regions.
Before you can provision the pipeline, you have to bootstrap the environment you want to create it in. If you are deploying your application to different environments, you also have to bootstrap those and be sure to add a trust relationship.
This library requires a newer version of the bootstrapping stack which has been updated specifically to support cross-account continuous delivery. In the future, this new bootstrapping stack will become the default, but for now it is still opt-in.
The commands below assume you are running cdk bootstrap in a directory where cdk.json contains the "@aws-cdk/core:newStyleStackSynthesis": true setting in its context, which will switch to the new bootstrapping stack automatically. If run from another directory, be sure to run the bootstrap command with the environment variable CDK_NEW_BOOTSTRAP=1 set.
To bootstrap an environment for provisioning the pipeline:
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
[--profile admin-profile-1] \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
aws://111111111111/us-east-1
To bootstrap a different environment for deploying CDK applications into using
a pipeline in account 111111111111
:
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
[--profile admin-profile-2] \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
--trust 111111111111 \
aws://222222222222/us-east-2
If you only want to trust an account to do lookups (e.g., when your CDK application has a Vpc.fromLookup() call), use the option --trust-for-lookup:
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
[--profile admin-profile-2] \
--cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
--trust-for-lookup 111111111111 \
aws://222222222222/us-east-2
These command lines explained:

- npx: means to use the CDK CLI from the current NPM install. If you are using a global install of the CDK CLI, leave this out.
- --profile: should indicate a profile with administrator privileges that has permissions to provision a pipeline in the indicated account. You can leave this flag out if either the AWS default credentials or the AWS_* environment variables confer these permissions.
- --cloudformation-execution-policies: ARN of the managed policy that future CDK deployments should execute with. You can tailor this to the needs of your organization and give more constrained permissions than AdministratorAccess.
- --trust: indicates which other account(s) should have permissions to deploy CDK applications into this account. In this case we indicate the Pipeline's account, but you could also use this for developer accounts (don't do that for production application accounts though!).
- --trust-for-lookup: similar to --trust, but gives a more limited set of permissions to the trusted account, allowing it to only look up values, such as availability zones, EC2 images and VPCs. Note that if you provide an account using --trust, that account can also do lookups. So you only need to pass --trust-for-lookup if you need to use a different account.
- aws://222222222222/us-east-2: the account and region we're bootstrapping.
Security tip: we recommend that you use administrative credentials to an account only to bootstrap it and provision the initial pipeline. Otherwise, access to administrative credentials should be dropped as soon as possible.
On the use of AdministratorAccess: The use of the AdministratorAccess policy ensures that your pipeline can deploy every type of AWS resource to your account. Make sure you trust all the code and dependencies that make up your CDK app. Check with the appropriate department within your organization to decide on the proper policy to use. If your policy includes permissions to create or attach permissions to a role, developers can escalate their privileges by granting themselves more permissive permissions. We therefore recommend implementing a permissions boundary on the CDK execution role. To do this, you can bootstrap with the --template option, using a customized template that contains a permissions boundary.
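A sketch of that workflow (the template file name is illustrative; edit the downloaded template to attach your permissions boundary before bootstrapping with it):

$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap --show-template > bootstrap-template.yaml
$ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
    --template bootstrap-template.yaml \
    aws://111111111111/us-east-1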
Migrating from old bootstrap stack
The bootstrap stack is a CloudFormation stack in your account named CDKToolkit that provisions a set of resources required for the CDK to deploy into that environment.
The "new" bootstrap stack (obtained by running cdk bootstrap
with
CDK_NEW_BOOTSTRAP=1
) is slightly more elaborate than the "old" stack. It
contains:
- An S3 bucket and ECR repository with predictable names, so that we can reference assets in these storage locations without the use of CloudFormation template parameters.
- A set of roles with permissions to access these asset locations and to execute CloudFormation, assumable from whatever accounts you specify under --trust.
It is possible and safe to migrate from the old bootstrap stack to the new bootstrap stack. This will create a new S3 file asset bucket in your account and orphan the old bucket. You should manually delete the orphaned bucket after you are sure you have redeployed all CDK applications and there are no more references to the old asset bucket.
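Once you are sure no CDK application references the old bucket anymore, it can be emptied and removed with the AWS CLI (the bucket name below is hypothetical):

$ aws s3 rb s3://cdktoolkit-stagingbucket-abcdef --force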
Security Tips
It's important to stay safe while employing Continuous Delivery. The CDK Pipelines library comes with secure defaults to the best of our ability, but by its very nature the library cannot take care of everything.
We therefore expect you to mind the following:
- Maintain dependency hygiene and vet 3rd-party software you use. Any software you run on your build machine has the ability to change the infrastructure that gets deployed. Be careful with the software you depend on.
- Use dependency locking to prevent accidental upgrades! The default CdkSynths that come with CDK Pipelines will expect package-lock.json and yarn.lock to ensure your dependencies are the ones you expect.
- Credentials to production environments should be short-lived. After bootstrapping and the initial pipeline provisioning, there is no more need for developers to have access to any of the account credentials; all further changes can be deployed through git. Avoid the chances of credentials leaking by not having them in the first place!
Troubleshooting
Here are some common errors you may encounter while using this library.
Pipeline: Internal Failure
If you see the following error during deployment of your pipeline:
CREATE_FAILED | AWS::CodePipeline::Pipeline | Pipeline/Pipeline
Internal Failure
There's something wrong with your GitHub access token. It might be missing, or not have the right permissions to access the repository you're trying to access.
Key: Policy contains a statement with one or more invalid principals
If you see the following error during deployment of your pipeline:
CREATE_FAILED | AWS::KMS::Key | Pipeline/Pipeline/ArtifactsBucketEncryptionKey
Policy contains a statement with one or more invalid principals.
One of the target (account, region) environments has not been bootstrapped with the new bootstrap stack. Check your target environments and make sure they are all bootstrapped.
is in ROLLBACK_COMPLETE state and can not be updated
If you see the following error during execution of your pipeline:
Stack ... is in ROLLBACK_COMPLETE state and can not be updated. (Service:
AmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request
ID: ...)
The stack failed its previous deployment, and is in a non-retryable state. Go into the CloudFormation console, delete the stack, and retry the deployment.
Cannot find module 'xxxx' or its corresponding type declarations
You may see this if you are using TypeScript or other NPM-based languages, when using NPM 7 on your workstation (where you generate package-lock.json) and NPM 6 on the CodeBuild image used for synthesizing.

It looks like NPM 7 has started writing less information to package-lock.json, leading NPM 6 reading that same file to not install all required packages anymore.

Make sure you are using the same NPM version everywhere: either downgrade your workstation's version or upgrade the CodeBuild version.
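One way to do the latter, sketched here, is to upgrade npm via the synth's installCommand (the version pin is an assumption about your setup):

pipeline = CdkPipeline(self, "Pipeline",
    # ...
    synth_action=SimpleSynthAction.standard_npm_synth(
        source_artifact=source_artifact,
        cloud_assembly_artifact=cloud_assembly_artifact,
        # Upgrade npm on the CodeBuild image before installing dependencies
        install_command="npm install -g npm@7 && npm ci"
    )
)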
Cannot connect to the Docker daemon at unix:///var/run/docker.sock
If, in the 'Synth' action (inside the 'Build' stage) of your pipeline, you get an error like this:
stderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
It means that the AWS CodeBuild project for 'Synth' is not configured to run in privileged mode, which prevents Docker builds from happening. This typically happens if you use a CDK construct that bundles assets using tools run via Docker, like aws-lambda-nodejs, aws-lambda-python, aws-lambda-go and others.

Make sure you set privileged to true in the build environment of the synth definition:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
pipeline = CdkPipeline(self, "MyPipeline",
(SpreadAssignment ...
synthAction
synth_action), SimpleSynthAction=SimpleSynthAction, =.standard_npm_synth(
source_artifact=, ...,
cloud_assembly_artifact=, ...,
environment={
"privileged": True
}
)
)
After turning on privileged: true, you will need to do a one-time manual cdk deploy of your pipeline to get it going again (as with a broken 'synth' the pipeline will not be able to self-update to the right state).
S3 error: Access Denied
Some constructs, such as EKS clusters, generate nested stacks. When CloudFormation tries to deploy those stacks, it may fail with this error:
S3 error: Access Denied For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
This happens because the pipeline is not self-mutating and, as a consequence, the FileAssetX build projects get out-of-sync with the generated templates. To fix this, make sure the selfMutating property is set to true:
# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
pipeline = CdkPipeline(self, "MyPipeline",
self_mutating=True, ...
)
Current Limitations
Limitations that we are aware of and will address:
- No context queries: context queries are not supported. That means that Vpc.fromLookup() and other functions like it will not work (#8905).
Known Issues
There are some usability issues that are caused by the underlying technology, and cannot be remedied by CDK at this point. They are reproduced here for completeness.
- Console links to other accounts will not work: the AWS CodePipeline console will assume all links are relative to the current account. You will not be able to use the pipeline console to click through to a CloudFormation stack in a different account.
- If a change set failed to apply, the pipeline must be restarted: a change set that failed to apply cannot be retried. The pipeline must be restarted from the top by clicking Release Change.
- A stack that failed to create must be deleted manually: if a stack failed to create on the first attempt, you must delete it using the CloudFormation console before starting the pipeline again by clicking Release Change.