Better interface to AWS CodePipeline

Project description

AWS CodePipeline Construct Library

---

End-of-Support

AWS CDK v1 has reached End-of-Support on 2023-06-01. This package is no longer being updated, and users should migrate to AWS CDK v2.

For more information on how to migrate, see the Migrating to AWS CDK v2 guide.


Pipeline

To construct an empty Pipeline:

# Construct an empty Pipeline
pipeline = codepipeline.Pipeline(self, "MyFirstPipeline")

To give the Pipeline a nice, human-readable name:

# Give the Pipeline a nice, human-readable name
pipeline = codepipeline.Pipeline(self, "MyFirstPipeline",
    pipeline_name="MyPipeline"
)

Be aware that in the default configuration, the Pipeline construct creates an AWS Key Management Service (AWS KMS) Customer Master Key (CMK) for you to encrypt the artifacts in the artifact bucket, which incurs a cost of $1/month. This default configuration is necessary to allow cross-account actions.

If you do not intend to perform cross-account deployments, you can disable the creation of the Customer Master Keys by passing crossAccountKeys: false when defining the Pipeline:

# Don't create Customer Master Keys
pipeline = codepipeline.Pipeline(self, "MyFirstPipeline",
    cross_account_keys=False
)

If you want to enable key rotation for the generated KMS keys, you can configure it by passing enableKeyRotation: true when creating the pipeline. Note that key rotation will incur an additional cost of $1/month.

# Enable key rotation for the generated KMS key
pipeline = codepipeline.Pipeline(self, "MyFirstPipeline",
    # ...
    enable_key_rotation=True
)

Stages

You can provide Stages when creating the Pipeline:

# Provide a Stage when creating a pipeline
pipeline = codepipeline.Pipeline(self, "MyFirstPipeline",
    stages=[codepipeline.StageProps(
        stage_name="Source",
        actions=[]
    )]
)

Or append a Stage to an existing Pipeline:

# Append a Stage to an existing Pipeline
# pipeline: codepipeline.Pipeline

source_stage = pipeline.add_stage(
    stage_name="Source",
    actions=[]
)

You can insert the new Stage at an arbitrary point in the Pipeline:

# Insert a new Stage at an arbitrary point
# pipeline: codepipeline.Pipeline
# another_stage: codepipeline.IStage


some_stage = pipeline.add_stage(
    stage_name="SomeStage",
    placement=codepipeline.StagePlacement(
        # note: specify only one of `right_before` and `just_after`
        right_before=another_stage
    )
)

You can disable transition to a Stage:

# Disable transition to a stage
# pipeline: codepipeline.Pipeline


some_stage = pipeline.add_stage(
    stage_name="SomeStage",
    transition_to_enabled=False,
    transition_disabled_reason="Manual transition only"
)

This is useful if you don't want every execution of the pipeline to flow into this stage automatically. The transition can then be manually enabled later on.
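
For example, re-enabling can be done outside of the CDK app via the AWS SDK. A minimal sketch using boto3 (the pipeline and stage names are assumptions for illustration):

# Re-enable the inbound transition of a stage (illustrative sketch, not part of the CDK app)
import boto3

codepipeline_client = boto3.client("codepipeline")
codepipeline_client.enable_stage_transition(
    pipelineName="MyPipeline",  # assumed pipeline name
    stageName="SomeStage",      # the stage whose inbound transition was disabled
    transitionType="Inbound"    # re-enable the transition *into* the stage
)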

Actions

Actions live in a separate package, @aws-cdk/aws-codepipeline-actions.

To add an Action to a Stage, you can provide it when creating the Stage, in the actions property, or you can use the IStage.addAction() method to mutate an existing Stage:

# Use the `IStage.addAction()` method to mutate an existing Stage.
# source_stage: codepipeline.IStage
# some_action: codepipeline.Action

source_stage.add_action(some_action)
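
For completeness, here is a minimal sketch of the first approach, passing the Action in the actions property when creating the Stage (some_action is an assumed, already-constructed action):

# Provide the Action when creating the Stage, in the `actions` property
# pipeline: codepipeline.Pipeline
# some_action: codepipeline.Action

build_stage = pipeline.add_stage(
    stage_name="Build",
    actions=[some_action]
)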

Custom Action Registration

Creating your own custom CodePipeline Action requires registering the action provider. See the JenkinsProvider in @aws-cdk/aws-codepipeline-actions for an implementation example.

# Make a custom CodePipeline Action
codepipeline.CustomActionRegistration(self, "GenericGitSourceProviderResource",
    category=codepipeline.ActionCategory.SOURCE,
    artifact_bounds=codepipeline.ActionArtifactBounds(min_inputs=0, max_inputs=0, min_outputs=1, max_outputs=1),
    provider="GenericGitSource",
    version="1",
    entity_url="https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-custom-action.html",
    execution_url="https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-create-custom-action.html",
    action_properties=[codepipeline.CustomActionProperty(
        name="Branch",
        required=True,
        key=False,
        secret=False,
        queryable=False,
        description="Git branch to pull",
        type="String"
    ), codepipeline.CustomActionProperty(
        name="GitUrl",
        required=True,
        key=False,
        secret=False,
        queryable=False,
        description="SSH git clone URL",
        type="String"
    )]
)

Cross-account CodePipelines

Cross-account Pipeline actions require that the Pipeline has not been created with crossAccountKeys: false.

Most pipeline Actions accept an AWS resource object to operate on. For example:

  • S3DeployAction accepts an s3.IBucket.
  • CodeBuildAction accepts a codebuild.IProject.
  • etc.

These resources can be either newly defined (new s3.Bucket(...)) or imported (s3.Bucket.fromBucketAttributes(...)) and identify the resource that should be changed.
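
For example, a minimal sketch of deploying to a newly defined bucket in the same stack (the stage and input artifact are assumed to already exist):

# Deploy to a newly defined S3 bucket (illustrative sketch)
# stage: codepipeline.IStage
# input: codepipeline.Artifact

deploy_bucket = s3.Bucket(self, "DeployBucket")
stage.add_action(codepipeline_actions.S3DeployAction(
    bucket=deploy_bucket,
    input=input,
    action_name="s3-deploy-action"
))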

These resources can be in different accounts than the pipeline itself. For example, the following action deploys to an imported S3 bucket from a different account:

# Deploy to an imported S3 bucket from a different account
# stage: codepipeline.IStage
# input: codepipeline.Artifact

stage.add_action(codepipeline_actions.S3DeployAction(
    bucket=s3.Bucket.from_bucket_attributes(self, "Bucket",
        # importing a bucket requires its name or ARN; this name is illustrative
        bucket_name="my-cross-account-bucket",
        account="123456789012"
    ),
    input=input,
    action_name="s3-deploy-action"
))

Actions that don't accept a resource object accept an explicit account parameter:

# Actions that don't accept a resource object accept an explicit `account` parameter
# stage: codepipeline.IStage
# template_path: codepipeline.ArtifactPath

stage.add_action(codepipeline_actions.CloudFormationCreateUpdateStackAction(
    account="123456789012",
    template_path=template_path,
    admin_permissions=False,
    stack_name=Stack.of(self).stack_name,
    action_name="cloudformation-create-update"
))

The Pipeline construct automatically defines an IAM Role for you in the target account, which the pipeline will assume to perform that action. This Role will be defined in a support stack named <PipelineStackName>-support-<account>, which will automatically be deployed before the stack containing the pipeline.

If you do not want to use the generated role, you can also explicitly pass a role when creating the action. In that case, the action will operate in the account the role belongs to:

# Explicitly pass in a `role` when creating an action.
# stage: codepipeline.IStage
# template_path: codepipeline.ArtifactPath

stage.add_action(codepipeline_actions.CloudFormationCreateUpdateStackAction(
    template_path=template_path,
    admin_permissions=False,
    stack_name=Stack.of(self).stack_name,
    action_name="cloudformation-create-update",
    # ...
    role=iam.Role.from_role_arn(self, "ActionRole", "...")
))

Cross-region CodePipelines

Similar to how you set up a cross-account Action, the AWS resource object you pass to actions can also be in different Regions. For example, the following Action deploys to an imported S3 bucket from a different Region:

# Deploy to an imported S3 bucket from a different Region.
# stage: codepipeline.IStage
# input: codepipeline.Artifact

stage.add_action(codepipeline_actions.S3DeployAction(
    bucket=s3.Bucket.from_bucket_attributes(self, "Bucket",
        # importing a bucket requires its name or ARN; this name is illustrative
        bucket_name="my-us-west-1-bucket",
        region="us-west-1"
    ),
    input=input,
    action_name="s3-deploy-action"
))

Actions that don't take an AWS resource will accept an explicit region parameter:

# Actions that don't take an AWS resource will accept an explicit `region` parameter.
# stage: codepipeline.IStage
# template_path: codepipeline.ArtifactPath

stage.add_action(codepipeline_actions.CloudFormationCreateUpdateStackAction(
    template_path=template_path,
    admin_permissions=False,
    stack_name=Stack.of(self).stack_name,
    action_name="cloudformation-create-update",
    # ...
    region="us-west-1"
))

The Pipeline construct automatically defines a replication bucket for you in the target region, which the pipeline will replicate artifacts to and from. This Bucket will be defined in a support stack named <PipelineStackName>-support-<region>, which will automatically be deployed before the stack containing the pipeline.

If you don't want to use these support stacks, and already have buckets in place to serve as replication buckets, you can supply these at Pipeline definition time using the crossRegionReplicationBuckets parameter. Example:

# Supply replication buckets for the Pipeline instead of using the generated support stack
pipeline = codepipeline.Pipeline(self, "MyFirstPipeline",
    # ...

    cross_region_replication_buckets={
        # note that a physical name of the replication Bucket must be known at synthesis time
        "us-west-1": s3.Bucket.from_bucket_attributes(self, "UsWest1ReplicationBucket",
            bucket_name="my-us-west-1-replication-bucket",
            # optional KMS key
            encryption_key=kms.Key.from_key_arn(self, "UsWest1ReplicationKey", "arn:aws:kms:us-west-1:123456789012:key/1234-5678-9012")
        )
    }
)

See the AWS CodePipeline documentation for more information on cross-region CodePipelines.

Creating an encrypted replication bucket

If you're passing a replication bucket created in a different stack, like this:

# Passing a replication bucket created in a different stack.
app = App()
replication_stack = Stack(app, "ReplicationStack",
    env=Environment(
        region="us-west-1"
    )
)
key = kms.Key(replication_stack, "ReplicationKey")
replication_bucket = s3.Bucket(replication_stack, "ReplicationBucket",
    # as noted above, replication buckets need a set physical name
    bucket_name=PhysicalName.GENERATE_IF_NEEDED,
    # this does not work across environments - see below
    encryption_key=key
)

# later, in the stack that actually defines the pipeline...
pipeline_stack = Stack(app, "PipelineStack")
codepipeline.Pipeline(pipeline_stack, "Pipeline",
    cross_region_replication_buckets={
        "us-west-1": replication_bucket
    }
)

When trying to encrypt it (and note that if any of the cross-region actions happen to be cross-account as well, the bucket has to be encrypted, or the pipeline will fail at runtime), you cannot use a key directly, as in the example above: KMS keys don't have physical names, and so you can't reference them across environments.

In this case, you need to use an alias in place of the key when creating the bucket:

# Passing an encrypted replication bucket created in a different stack.
app = App()
replication_stack = Stack(app, "ReplicationStack",
    env=Environment(
        region="us-west-1"
    )
)
key = kms.Key(replication_stack, "ReplicationKey")
alias = kms.Alias(replication_stack, "ReplicationAlias",
    # aliasName is required
    alias_name=PhysicalName.GENERATE_IF_NEEDED,
    target_key=key
)
replication_bucket = s3.Bucket(replication_stack, "ReplicationBucket",
    bucket_name=PhysicalName.GENERATE_IF_NEEDED,
    encryption_key=alias
)
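
The alias-encrypted bucket can then be passed to the Pipeline exactly as before. A minimal sketch, assuming the pipeline lives in its own stack:

# Pass the alias-encrypted replication bucket to the Pipeline (illustrative sketch)
pipeline_stack = Stack(app, "PipelineStack")
codepipeline.Pipeline(pipeline_stack, "Pipeline",
    cross_region_replication_buckets={
        "us-west-1": replication_bucket
    }
)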

Variables

The library supports the CodePipeline Variables feature. Each action class that emits variables exposes them through a separate variables interface, accessed via the variables property of the action instance. You instantiate the action class and assign it to a local variable; when you want to use a variable in the configuration of a different action, you access the appropriate property of the interface returned from variables; each such property represents a single variable. Example:

# MyAction is some action type that produces variables, like EcrSourceAction
my_action = MyAction(
    # ...
    action_name="myAction"
)
OtherAction(
    # ...
    config=my_action.variables.my_variable,
    action_name="otherAction"
)
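
As a concrete illustration, here is a hedged sketch that wires an EcrSourceAction variable into a CodeBuildAction environment variable (the ECR repository, CodeBuild project, and action names are assumptions):

# Use an EcrSourceAction variable in a CodeBuildAction (illustrative sketch)
import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_codepipeline_actions as codepipeline_actions

# repository: ecr.IRepository
# project: codebuild.PipelineProject

source_output = codepipeline.Artifact()
source_action = codepipeline_actions.EcrSourceAction(
    action_name="ECR_Source",
    repository=repository,
    output=source_output
)
build_action = codepipeline_actions.CodeBuildAction(
    action_name="Build",
    project=project,
    input=source_output,
    environment_variables={
        # pass the URI of the source image into the build
        "IMAGE_URI": codebuild.BuildEnvironmentVariable(value=source_action.variables.image_uri)
    }
)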

The namespace name that will be used is automatically generated by the pipeline construct, based on the stage and action names; you can pass a custom name when creating the action instance:

# MyAction is some action type that produces variables, like EcrSourceAction
my_action = MyAction(
    # ...
    variables_namespace="MyNamespace",
    action_name="myAction"
)

There are also global variables available, not tied to any action; these are accessed through static properties of the GlobalVariables class:

# OtherAction is some action type that accepts a variable in its configuration
OtherAction(
    # ...
    config=codepipeline.GlobalVariables.execution_id,
    action_name="otherAction"
)

Check the documentation of the @aws-cdk/aws-codepipeline-actions package for details on how to use the variables for each action class.

See the CodePipeline documentation for more details on how to use the variables feature.

Events

Using a pipeline as an event target

A pipeline can be used as a target for a CloudWatch event rule:

# A pipeline being used as a target for a CloudWatch event rule.
import aws_cdk.aws_events_targets as targets
import aws_cdk.aws_events as events

# pipeline: codepipeline.Pipeline


# kick off the pipeline every day
rule = events.Rule(self, "Daily",
    schedule=events.Schedule.rate(Duration.days(1))
)
rule.add_target(targets.CodePipeline(pipeline))

When a pipeline is used as an event target, the "codepipeline:StartPipelineExecution" permission is granted to the AWS CloudWatch Events service.

Event sources

Pipelines emit CloudWatch events. To define event rules for events emitted by the pipeline, its stages, or its actions, use the onXxx methods on the respective construct:

# Define event rules for events emitted by the pipeline
import aws_cdk.aws_events as events

# my_pipeline: codepipeline.Pipeline
# my_stage: codepipeline.IStage
# my_action: codepipeline.Action
# target: events.IRuleTarget

my_pipeline.on_state_change("MyPipelineStateChange", target=target)
my_stage.on_state_change("MyStageStateChange", target)
my_action.on_state_change("MyActionStateChange", target)

CodeStar Notifications

To define CodeStar Notification rules for Pipelines, use one of the notifyOnXxx() methods. They are very similar to onXxx() methods for CloudWatch events:

# Define CodeStar Notification rules for Pipelines
import aws_cdk.aws_chatbot as chatbot

# pipeline: codepipeline.Pipeline

target = chatbot.SlackChannelConfiguration(self, "MySlackChannel",
    slack_channel_configuration_name="YOUR_CHANNEL_NAME",
    slack_workspace_id="YOUR_SLACK_WORKSPACE_ID",
    slack_channel_id="YOUR_SLACK_CHANNEL_ID"
)
rule = pipeline.notify_on_execution_state_change("NotifyOnExecutionStateChange", target)
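
If you only care about specific events, the generic notifyOn() method accepts a list of events; a minimal sketch, assuming you want to be notified only about failed executions:

# Notify only about failed pipeline executions (illustrative sketch)
rule = pipeline.notify_on("NotifyOnFailure", target,
    events=[codepipeline.PipelineNotificationEvents.PIPELINE_EXECUTION_FAILED]
)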
