
AWS Lambda function/layer version deployment manager.

Reviser


Reviser is a tool for AWS Lambda function and layer version deployment and alias management, specifically for Python runtimes where the underlying infrastructure is managed separately, most likely by CloudFormation or Terraform. There are already a number of ways to manage AWS Lambda functions and layers, but their generality and all-encompassing approaches don't integrate well with certain workflows and can be overly complex for many needs.

Reviser is scoped to deploying and updating AWS Lambda Python functions and layers along with their version-specific configuration, e.g. code bundles, environment variables, memory size, and timeout lengths. The expectation is that functions are created by other means and that version configuration is then managed with reviser through an interactive or scripted shell of commands.

Basic Usage

A project defines one or more lambda function configuration targets in a lambda.yaml file in the root project directory. The most basic configuration looks like this:

bucket: name-of-s3-bucket-for-code-uploads
targets:
- kind: function
  name: foo-function

This configuration defines a single foo-function lambda function target that will be managed by reviser. The expectation is that this function already exists and was created by another means, e.g. CloudFormation or Terraform. A bucket must be specified to indicate where the zipped code bundles will be uploaded before they are applied to the target(s). The bucket must already exist as well.

By default the package will include no external (e.g. pip) package dependencies. Reviser will search for the first folder in the directory where the lambda.yaml file is located that contains an __init__.py file, identifying that folder as the Python source package for the function. It will also look for a lambda_function.py alongside the lambda.yaml file to serve as the entrypoint. These will be included in the uploaded and deployed code bundle when a push or deploy command is executed. These defaults, along with many more settings, can be configured as outlined below.

To deploy this example project, install the reviser Python library and run the reviser command in your terminal of choice from the directory containing the lambda.yaml file. Docker must be running and available in that terminal, because reviser launches a containerized shell that mimics the actual AWS Lambda runtime environment. Then run the push command within the launched shell to bundle and upload the source code and publish a new version of the foo-function lambda function with the uploaded results.
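
Concretely, the workflow might look like the following sketch, where my-project is a placeholder name for the directory containing the lambda.yaml above:

$ cd my-project
$ reviser
> push
> exit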

Shell commands

The reviser command starts an interactive shell within a Docker container compatible with the AWS Python Lambda runtime. This shell contains various commands for deploying and managing deployments of lambda functions and layers defined in a project's lambda.yaml configuration file, the format of which is described later in this document. The shell commands are:

alias

Assign an alias to the specified version of the selected or specified lambda function.

usage: alias [--function FUNCTION] [--yes] [--create] alias version

positional arguments:
  alias                Name of an existing alias to move to the specified
                       version, or the name of an alias to create and assign
                       to the specified function version if the --create flag
                       is included to allow for creating a new alias.
  version              Version of the function that the alias should be
                       assigned to. This will either be an integer value or
                       $LATEST. To see what versions are available for a given
                       function use the list command.

optional arguments:
  --function FUNCTION  The alias command only acts on one function. This can
                       be achieved either by selecting the function target via
                       the select command, or specifying the function name to
                       apply this change to with this flag.
  --yes                By default this command will require input confirmation
                       before carrying out the change. Specify this flag to
                       skip input confirmation and proceed without a breaking
                       prompt.
  --create             When specified the alias will be created instead of
                       reassigned. Use this to create and assign new aliases
                       to a function. When this flag is not specified, the
                       command will fail if the alias doesn't exist, which
                       helps prevent accidental alias creation.

The command reassigns an existing alias, or creates a new one when the --create flag is included. To assign an existing test alias to version 42 of the selected function, the command would be:

> alias test 42

If multiple functions are currently selected, use --function=<NAME> to identify the function to which the alias change will be applied.
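
For example, the same assignment scoped to one function might look like this sketch, where bar-function is a placeholder target name:

> alias test 42 --function=bar-function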

bundle

Install dependencies and copy includes into a zipped file ready for deployment.

usage: bundle [--reinstall] [--output OUTPUT]

optional arguments:
  --reinstall           Add this flag to reinstall dependencies on a repeated
                        bundle operation. By default, dependencies will remain
                        cached for the lifetime of the shell to speed up the
                        bundling process. This will force dependencies to be
                        installed even if they had been installed previously.
  --output OUTPUT, -o OUTPUT
                        Output the bundled artifacts into the specified output
                        path.

The resulting zip file is structured correctly to be deployed to the lambda function/layer target via an S3 upload and subsequent publish command.

configs

Display the configuration merged from its source file, dynamic values, and defaults.

usage: configs

Use this to inspect and validate that the loaded configuration meets expectations when parsed into the reviser shell.

deploy

Upload the bundled contents to the upload S3 bucket and then publish a new version.

usage: deploy [--description DESCRIPTION] [--dry-run]

optional arguments:
  --description DESCRIPTION
                        Specify a message to assign to the version published
                        by the deploy command.
  --dry-run             If set, the deploy operation will be exercised without
                        actually carrying out the actions. This can be useful
                        to validate the deploy process without side effects.

The deployment is carried out for each of the lambda targets using the new bundle, applying any settings that differ between the current configuration and the target's existing configuration. This command will fail if a target being deployed has not already been bundled.

exit

Exit the shell and return to the parent terminal.

usage: exit

help (?)

Display help information on the commands available within the shell.

usage: help

Additional help on each command can be found using the --help flag on the command in question.

list

List versions of the specified lambda targets with info about each version.

usage: list

prune

Remove old function and/or layer versions for the selected targets.

usage: prune [--start START] [--end END] [--dry-run] [-y]

optional arguments:
  --start START  Keep versions lower than (earlier/before) this one. A
                 negative value can be specified for relative indexing in the
                 same fashion as Python lists.
  --end END      Do not prune versions higher than this value. A negative
                 value can be specified for relative indexing in the same
                 fashion as Python lists.
  --dry-run      Echo pruning operation without actually executing it.
  -y, --yes      Run the prune process without reviewing first.
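
For example, a safe way to preview a prune before committing to it might look like this sketch, where --end=-10 uses the Python-style negative indexing described above so that recent versions are not pruned, and --dry-run echoes the result without executing it:

> prune --end=-10 --dry-run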

push

Combined single command for bundling and deploying the selected targets.

usage: push [--reinstall] [--output OUTPUT] [--description DESCRIPTION]
            [--dry-run]

optional arguments:
  --reinstall           Add this flag to reinstall dependencies on a repeated
                        bundle operation. By default, dependencies will remain
                        cached for the lifetime of the shell to speed up the
                        bundling process. This will force dependencies to be
                        installed even if they had been installed previously.
  --output OUTPUT, -o OUTPUT
                        Output the bundled artifacts into the specified output
                        path.
  --description DESCRIPTION
                        Specify a message to assign to the version published
                        by the deploy command.
  --dry-run             If set, the deploy operation will be exercised without
                        actually carrying out the actions. This can be useful
                        to validate the deploy process without side effects.

region

Switch the target region.

usage: region
              [{us-east-2,us-east-1,us-west-1,us-west-2,af-south-1,ap-east-1,ap-south-1,ap-northeast-3,ap-northeast-2,ap-southeast-1,ap-southeast-2,ap-northeast-1,ca-central-1,cn-north-1,cn-northwest-1,eu-central-1,eu-west-1,eu-west-2,eu-south-1,eu-west-3,eu-north-1,me-south-1,sa-east-1,us-gov-east-1,us-gov-west-1}]

positional arguments:
  {us-east-2,us-east-1,us-west-1,us-west-2,af-south-1,ap-east-1,ap-south-1,ap-northeast-3,ap-northeast-2,ap-southeast-1,ap-southeast-2,ap-northeast-1,ca-central-1,cn-north-1,cn-northwest-1,eu-central-1,eu-west-1,eu-west-2,eu-south-1,eu-west-3,eu-north-1,me-south-1,sa-east-1,us-gov-east-1,us-gov-west-1}
                        AWS region name for the override. Leave it blank to
                        return to the default region for the initially loaded
                        credentials and/or environment variables.

reload

Reload the lambda.yaml configuration file from disk.

usage: reload

select

Allow for selecting subsets of the targets within the loaded configuration.

usage: select [--functions] [--layers] [--exact] [name ...]

positional arguments:
  name                  Specifies the value to match against the function and
                        layer target names available from the configuration.
                        This can include shell-style wildcards and will also
                        match against partial strings. If the --exact flag is
                        specified, this value must exactly match one of the
                        targets instead of the default fuzzy matching
                        behavior.

optional arguments:
  --functions, --function, --func, -f
                        When specified, functions will be selected. This will
                        default to true if neither of --functions or --layers
                        is specified. Will default to false if --layers is
                        specified.
  --layers, --layer, -l
                        When specified, layers will be selected. This will
                        default to true if neither of --functions or --layers
                        is specified. Will default to false if --functions is
                        specified.
  --exact               Forces the match to be exact instead of fuzzy.

The subsets are fuzzy-matched unless the --exact flag is used.

shell

Macro command to convert to interactive shell operation.

usage: shell

This is a special command for use in run command groups/macros to start interactive command mode for the terminal. It is useful in scenarios where you wish to prefix an interactive session with commonly executed commands. For example, if you want to select certain targets with the select command as part of starting the shell, you could create a run command group/macro in your lambda.yaml that executes the select command and then the shell command. This would update the selection and then, with the shell command, start the shell in interactive mode. Without the shell command, the run command group/macro would just set a selection and then exit.
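
Such a run group might look like the following sketch, where start-prod and foo-prod are placeholder names:

run:
  start-prod:
  # Narrow the selection, then drop into interactive mode.
  - select foo-prod
  - shell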

status

Show the current status information for each of the selected lambda targets.

usage: status [qualifier]

positional arguments:
  qualifier  Specifies a version or alias to show status for. If not
             specified, $LATEST will be used for functions and the latest
             version will be dynamically determined for layers.

tail

Tail the logs for the selected lambda functions.

usage: tail

More detail on any of these commands can be found from within the shell by executing them with the --help flag.

The reviser application also supports non-interactive batch command execution via run macros that behave similarly to how npm run <command> commands are defined. For more details see the run attribute section of the configuration file definitions below.

Configuration Files

Configuration files, named lambda.yaml, define the lambda targets to be managed within a project. The top-level keys in the configuration file are:

bucket(s)

This key defines the bucket or buckets where zipped source bundles will be uploaded before they are deployed to their lambda function and/or layer targets. Basic usage is to specify the bucket as a key:

bucket: bucket-name

For multi-account scenarios, it's also possible to specify multiple buckets as key-value pairs where the keys are AWS account IDs (as strings) and the values are the bucket names associated with those accounts. In this case the bucket is selected dynamically based on the AWS session loaded during shell initialization. Specifying multiple buckets looks like:

buckets:
  "123456789": bucket-in-account-123456789
  "987654321": bucket-in-account-987654321

Buckets in multiple regions can also be specified using the AWS region as the key:

buckets:
  us-east-1: bucket-in-region-us-east-1
  us-west-2: bucket-in-region-us-west-2

These can be combined to define buckets for multiple accounts and multiple regions as:

buckets:
  "123456789":
    us-east-1: bucket-123456789-in-region-us-east-1
    us-west-2: bucket-123456789-in-region-us-west-2
  "987654321":
    us-east-1: bucket-987654321-in-region-us-east-1
    us-west-2: bucket-987654321-in-region-us-west-2

AWS region

The AWS region in which the resources reside can be specified at the top level of the file if desired. It is recommended that the region be specified within the calling AWS profile if possible for flexibility, but there are situations where it makes more sense to make it explicit within the configuration file instead. If no region is found either in the configuration file or in the AWS profile the us-east-1 region will be used as the default in keeping with AWS region defaulting conventions. Specify the region with the top-level key:

region: us-east-2

targets

Targets is where the bulk of the configuration resides. Each item is either of the function or layer kind and has configuration and bundling settings according to its type. Common to both function and layer kinds are the keys:

targets[N].kind

As mentioned already, each target must specify its object type using the kind key:

targets:
- kind: function
  ...
- kind: layer
  ...

targets[N].name(s)

The name specifies the name of the target object, not the ARN. For example, a function named foo would be represented as:

targets:
- kind: function
  name: foo

A single target can point to multiple functions. This is useful in cases where a single target could be for both development and production functions or where a single code-base is shared across multiple functions for logical or integration reasons. In this case a list of names is supplied instead:

targets:
- kind: function
  names:
  - foo-devel
  - foo-prod

targets[N].region

In the same fashion as regions can be explicitly set as a top-level configuration key, they can also be set on a per-target basis. If set, the target region will take precedence over the top-level value and the profile-specified value. This makes deploying code across regions within a single configuration file possible.
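
For example, a sketch pinning one target to its own region (foo is a placeholder name):

targets:
- kind: function
  name: foo
  # Overrides any top-level or profile-specified region for this target.
  region: eu-west-1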

targets[N].dependencies

Dependencies is a list of external dependency sources to install as site packages in the lambda function or layer. Multiple package managers are supported and specified by the kind attribute:

targets:
- kind: layer
  name: foo
  dependencies:
  - kind: pip
  - kind: pipper
  - kind: poetry

Currently pip, pipper and poetry package managers are supported. For any of the package managers, the dependencies can be specified explicitly with the package(s) key.

targets:
- kind: layer
  name: foo
  dependencies:
  - kind: pip
    packages:
    - spam
    - ham
  - kind: pipper
    package: spammer

It's also possible to specify a file in which the package dependencies have been defined.

targets:
- kind: layer
  name: foo
  dependencies:
  - kind: pip
    file: requirements.layer.txt
  - kind: pipper
    file: pipper.layer.json

If no packages or file is specified, the package manager's default file will be used (e.g. requirements.txt for pip, pipper.json for pipper, and pyproject.toml for poetry).

It is also possible to specify the same kind of package manager multiple times in this list to aggregate dependencies from multiple locations.
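
For example, the same package manager could be listed twice to aggregate dependencies from two files (a sketch; the filenames are placeholders):

targets:
- kind: layer
  name: foo
  dependencies:
  - kind: pip
    file: requirements.txt
  - kind: pip
    # Additional dependencies aggregated from a second file.
    file: requirements.extra.txt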

targets[N].dependencies.skip

It is possible to specify inline dependencies to skip during the bundling installation process. This can be useful, for example, when a particular dependency is specific to platforms other than the lambda environment, or when a package like boto3 that is already available in the lambda runtime should be skipped to save bundling space while still being listed in the package's dependencies for deployment outside of lambda.

As shown below, specify the packages to skip within the dependency as part of the dependency definition:

targets:
- kind: function
  name: foo
  dependencies:
  - kind: pip
    skip:
    - boto3

targets[N].dependencies.arguments

arguments is an optional map of arguments that will be passed to the package manager during installation.

targets:
- kind: function
  name: foo
  dependencies:
  - kind: pip
    arguments:
      --arg1: val1

targets[N].dependencies(kind="pipper")

Pipper repositories have additional configuration not associated with pip packages. To support pipper libraries, there are two additional attributes that can be specified: bucket and prefix.

The bucket is required as it specifies the S3 bucket used as the package source and should be read-accessible by the profile invoking reviser.

The prefix is an optional alternate package prefix within the S3 bucket. Use this only if you are using an alternate prefix for your pipper package.

targets:
- kind: layer
  name: foo
  dependencies:
  - kind: pipper
    file: pipper.layer.json
    bucket: bucket-name-where-pipper-package-resides
    prefix: a/prefix/that/is/not/just/pipper

targets[N].dependencies(kind="poetry")

Poetry repositories have additional extras configuration that can be used to specify optional dependency groups to install in the lambda. This can be useful to separate dependencies by function.

targets:
- kind: layer
  name: foo
  dependencies:
  - kind: poetry
    extras:
    - group

targets[N].bundle

The target bundle object contains the attributes that define the bundle that will be created and uploaded to the functions or layers in a given target as part of the deployment process. Its primary purpose is to define what files should be included in the bundling process, which it achieves with the following attributes.

targets[N].bundle.include(s)

The include(s) key is a string or list of Python glob-styled includes to add to the bundle. If no includes are specified, the default behavior is:

  • function targets: copy the first directory found that contains an __init__.py file.
  • layer targets: do not copy anything and assume dependencies are the only files to copy into the bundle.

All paths should be referenced relative to the root path where the lambda.yaml is located. For a recursive matching pattern, use glob syntax such as **/*.txt, or folder/**/*.txt to restrict matching to a folder inside the root directory. To include the entire contents of a directory, specify the path to the folder.

targets:
- kind: function
  name: foo
  bundle:
    includes:
    # This is shorthand for "foo_library/**/*"
    - foo_library
    # All Python files in the "bin/" folder recursively.
    - bin/**/*.py
    # All Jinja2 files in the root directory that begin "template_".
    - template_*.jinja2

targets[N].bundle.exclude(s)

The exclude(s) key is optional and is likewise a string or list of Python glob-styled paths to remove from the matching include(s). These are applied to the files found via the includes and do not need to be comprehensive of all files in the root directory. Building on the example from above:

targets:
- kind: function
  name: foo
  bundle:
    includes:
    # This is shorthand for "foo_library/**/*"
    - foo_library
    # All Python files in the "bin/" folder recursively.
    - bin/**/*.py
    # All Jinja2 files in the root directory that begin "template_".
    - template_*.jinja2
    excludes:
    - template_local.jinja2
    - template_testing.jinja2

This example would remove two of the template file matches from the includes from the files copied into the bundle for deployment.

All __pycache__, *.pyc and .DS_Store files/directories are excluded from the copying process in all cases and do not need to be specified explicitly.

targets[N].bundle.exclude_package(s)

The package_exclude(s) key is optional and is likewise a string or list of Python glob-styled paths. However, these paths are excluded when adding site-packages to the bundle. Building on the example from above:

targets:
- kind: function
  name: foo
  bundle:
    includes:
    # This is shorthand for "foo_library/**/*"
    - foo_library
    # All Python files in the "bin/" folder recursively.
    - bin/**/*.py
    # All Jinja2 files in the root directory that begin "template_".
    - template_*.jinja2
    excludes:
    - template_local.jinja2
    - template_testing.jinja2
    package_excludes:
    - foo/foo_windows.py
  dependencies:
  - kind: pip

This example would exclude site-packages/foo/foo_windows.py from the bundled zip file for the lambda function. Here the file is omitted because Windows-specific code isn't needed in a Linux runtime, so excluding it saves space. This is most useful for large packages that include unneeded components. Use it very carefully, as it can cause external libraries to fail.

targets[N].bundle.omit_package(s)

There can be cases where dependencies install dependencies of their own that you may not want copied over to the bundle. The most common case is a dependency that requires boto3, which is available by default in lambda functions already. In that case it can be useful to list site packages that should not be copied into the bundle but may have been installed as a side effect of the dependency installation process.

targets:
- kind: function
  name: foo
  bundle:
    omit_package: boto3
  dependencies:
  - kind: pip
    # Installs a package that requires boto3, which is therefore installed
    # into the site-packages bundle directory as a result.
    # https://github.com/awslabs/aws-lambda-powertools-python
    package: aws-lambda-powertools

In the above example aws-lambda-powertools causes boto3 to be installed as well. However, since lambda functions have boto3 installed by default, it's possible to omit that package from the bundling process so that it isn't installed twice.

Note, however, that installing boto3 directly in a bundle can be beneficial because it lets you install the version that is compatible with your source code and dependencies. The boto3 version provided by the lambda runtime can be old and stale.
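
In that situation, one approach is to list boto3 explicitly as a package so the bundled version is the one your code was tested against (a sketch; foo is a placeholder name):

targets:
- kind: function
  name: foo
  dependencies:
  - kind: pip
    packages:
    # Bundle an explicit boto3 rather than relying on the runtime's copy.
    - boto3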

targets[N].bundle.handler

This attribute only applies to function targets and gives the file and function entrypoint for the lambda function(s) in the target. The format matches the expected value for lambda functions, which is <filename_without_extension>.<function_name>.

targets:
- kind: function
  name: foo
  bundle:
    handler: function.main

In this case the bundler would expect to find function.py in the top-level directory alongside lambda.yaml, containing a main(event, context) function that is called when the function(s) are invoked.

If this value is omitted, the default value of lambda_function.lambda_handler will be used, as this matches the AWS Lambda Python function documentation.

function targets

In addition to the common attributes described above that are shared between both function and layer targets, there are a number of additional attributes that apply only to function targets. These are:

(function) targets[N].image

Specifies the configuration of the image for image-based lambda functions. This cannot be used together with targets[N].bundle. With the exception of uri, all subfields are optional.

image:
  uri: 123456789012.dkr.ecr.us-west-2.amazonaws.com/repo:tag
  entrypoint: /my/entrypoint
  cmd:
  - params
  - to
  - entrypoint
  workingdir: /the/working/dir

(function) targets[N].image.uri

The image uri for the function's image. This must be an ECR URI that resides within the same region as the lambda function. If the lambda function is deployed to a single region, this can be configured with a string:

uri: 123456789012.dkr.ecr.us-west-2.amazonaws.com/repo:tag

If the lambda function is deployed to multiple regions, it can be configured with a dictionary mapping region names to images:

uri:
  us-west-2: 123456789012.dkr.ecr.us-west-2.amazonaws.com/repo:tag
  us-east-2: 123456789012.dkr.ecr.us-east-2.amazonaws.com/repo:tag

(function) targets[N].image.entrypoint

A custom entrypoint to use for the image. If this is not specified the entrypoint of the image will be used. This can be specified as a list or as a single string that will be treated as a list with one element.

entrypoint: /my/entrypoint

or

entrypoint:
- /my/entrypoint

(function) targets[N].image.cmd

A custom command to use for the image. If this is not specified the default command of the image will be used. This can be specified as a list or as a single string that will be treated as a list with one element.

cmd: a_command

or

cmd:
- a_command
- with
- multiple
- words

(function) targets[N].image.workingdir

A custom working directory to set for the image. If this is not specified the default working directory of the image will be used.

workingdir: /my/working/dir

(function) targets[N].layer(s)

Specifies one or more layers that should be attached to the targeted function(s). Layers can be specified as fully-qualified ARNs for externally specified layers, e.g. a layer created in another AWS account, or by name for layers specified within the account and layers defined within the targets of the configuration file.

targets:
- kind: function
  name: foo
  layer: arn:aws:lambda:us-west-2:999999999:layer:bar
  ...

or for multiple layers:

targets:
- kind: function
  name: foo
  layers:
  # A layer defined in another account is specified by ARN.
  - arn:aws:lambda:us-west-2:999999999:layer:bar
  # A layer in this account is specified by name. This layer may also be
  # a target in this configuration file.
  - baz
  ...
- kind: layer
  name: baz
  ...

By default, deployments will use the latest available version of each layer, but this can be overridden by specifying the layer ARN with its version:

targets:
- kind: function
  name: foo
  layer: arn:aws:lambda:us-west-2:999999999:layer:bar:42
  ...

In the above example the layer will remain at version 42 until explicitly modified in the configuration file.

Layers can also be defined as objects instead of plain strings. The two-layer example from above could be rewritten as:

targets:
- kind: function
  name: foo
  layers:
  - arn: arn:aws:lambda:us-west-2:999999999:layer:bar
  - name: baz
  ...

When specified as an object, a number of additional attributes become available. First, version can be specified as a separate key from the arn or name, which in many cases is easier to work with than appending it to the end of the arn or name, particularly for programmatic/automation purposes:

targets:
- kind: function
  name: foo
  layers:
  - arn: arn:aws:lambda:us-west-2:999999999:layer:bar
    version: 42
  - name: baz
    version: 123
  ...

Next, the layer objects accept only and except keys that can be used to attach the layers to certain functions in the target and not others. This can be useful when development and production targets share a lot in common but point to different versions of a layer, or when development and production layers are separated entirely. It can also be useful when the functions in a target share a common codebase but don't all need the same dependencies. For performance, restricting layer inclusion to only those functions that need the additional dependencies can be beneficial.

The only and except attributes can be specified as a single string or a list of strings that match against unix pattern matching. For example, expanding on the example from above:

targets:
- kind: function
  names:
  - foo-devel
  - foo-devel-worker
  - foo-prod
  - foo-prod-worker
  layers:
  - name: baz-devel
    only: foo-devel*
  - name: baz-devel-worker
    only: foo-devel-worker
  - name: baz-prod
    only: foo-prod*
  - name: baz-prod-worker
    only: foo-prod-worker
  ...

This example shows four layers that are conditionally applied using the only keyword. The example could be rewritten with the except key instead:

targets:
- kind: function
  names:
  - foo-devel
  - foo-devel-worker
  - foo-prod
  - foo-prod-worker
  layers:
  - name: baz-devel
    except: foo-prod*
  - name: baz-devel-worker
    except:
    - foo-prod*
    - foo-devel
  - name: baz-prod
    except: foo-devel*
  - name: baz-prod-worker
    except:
    - foo-devel*
    - foo-prod
  ...

Either way works. The two (only and except) can also be combined when that makes more sense. For example, the baz-devel-worker layer from above could also be written as:

  - name: baz-devel-worker
    only: foo-devel*
    except: foo-devel

Note that if only is specified it is processed first and then except is removed from the matches found by only.

(function) targets[N].memory

This specifies the function memory in megabytes either as an integer or a string with an MB suffix.

targets:
- kind: function
  name: foo
  memory: 256MB

(function) targets[N].timeout

This specifies the function timeout in seconds either as an integer or a string with an s suffix.

targets:
- kind: function
  name: foo
  timeout: 42s

(function) targets[N].variable(s)

Variables contains a list of environment variables to assign to the function. They can be specified simply as a string with <KEY>=<value> syntax:

targets:
- kind: function
  name: foo
  variable: MODE=read-only

Here a single environment variable is specified that maps "MODE" to the value "read-only". A more programmatic-friendly way is to specify the name and value as attributes of a variable:

targets:
- kind: function
  name: foo
  variables:
  - name: MODE
    value: read-only

Some environment variables may be managed through other means, e.g. the Terraform that created the function in the first place, or another command interface used to update the function. For those cases, set the preserve attribute to true and specify no value.

targets:
- kind: function
  name: foo
  variables:
  - name: MODE
    preserve: true

In this case the MODE environment variable value will be preserved between function deployments to contain the value that was already set.
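One way to picture the preserve semantics is a merge between the declared variables and the environment currently deployed on the function. This is a sketch under that assumption; resolve_variables and its input shapes are hypothetical, not reviser's actual implementation:

```python
def resolve_variables(declared, deployed):
    """Build the environment to apply on the next deployment.

    declared: list of {'name': ..., 'value': ...} entries, where an entry
              may instead set {'preserve': True} and omit the value.
    deployed: dict of variables currently set on the deployed function.
    """
    env = {}
    for var in declared:
        if var.get("preserve"):
            # Keep whatever value the deployed function already has.
            if var["name"] in deployed:
                env[var["name"]] = deployed[var["name"]]
        else:
            env[var["name"]] = var["value"]
    return env


current = {"MODE": "read-only", "DEBUG": "1"}
print(resolve_variables([{"name": "MODE", "preserve": True}], current))
# → {'MODE': 'read-only'}
```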

Finally, variables support the same only and except attributes that are found for target layers, so that environment variables can be specified differently for subsets of targets.

The only and except attributes can be specified as a single string or a list of strings and are matched using Unix shell-style pattern matching. For example, expanding on the example from above:

targets:
- kind: function
  names:
  - foo-prod
  - foo-devel
  variables:
  - name: MODE
    value: write
    only: '*prod'
  - name: MODE
    value: read-only
    except: '*prod'

(function) targets[N].ignore(s)

The ignores attribute allows you to specify one or more configuration keys within a function target that should be skipped during deployments. When any of the configuration values:

  • memory
  • timeout
  • variables

are managed by external systems, listing them under ignores prevents reviser from applying changes to them.

targets:
- kind: function
  name: foo
  ignores:
  - memory
  - timeout
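Conceptually, this amounts to dropping the ignored keys from the configuration update before it is applied. A minimal sketch of that filtering (apply_ignores is a hypothetical helper, not reviser's API):

```python
def apply_ignores(config_update, ignores):
    """Drop ignored keys from a pending function configuration update."""
    ignored = set(ignores)
    return {k: v for k, v in config_update.items() if k not in ignored}


update = {"memory": 256, "timeout": 42, "variables": {"MODE": "read-only"}}
print(apply_ignores(update, ["memory", "timeout"]))
# → {'variables': {'MODE': 'read-only'}}
```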

run

The run attribute contains an optional mapping of named groups of non-interactive commands to run when the shell is invoked with that run key. This is useful for orchestrating actions for CI/CD purposes: the commands are processed within a shell environment without user prompts, and the shell exits when complete without waiting for additional input.

run:
  deploy-prod:
  - select function *prod
  - push --description="($CI_COMMIT_SHORT_SHA): $CI_COMMIT_TITLE"
  - alias test -1
targets:
- kind: function
  names:
  - foo-prod
  - foo-devel

In the example above, the deploy-prod run command macro/group starts the shell and non-interactively executes the three commands in order: first selecting the foo-prod function, then building and deploying that function with a description created from CI environment variables, and finally moving the test alias to the newly deployed version using a negative version index of -1. After those three commands have executed, reviser exits the shell automatically, ending the process successfully.
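The negative version index can be read like Python's negative list indexing over the function's published versions, with -1 resolving to the most recently published version. A sketch under that assumption (resolve_version is hypothetical, not reviser's implementation):

```python
def resolve_version(published_versions, index):
    """Resolve an alias version argument that may be a negative index.

    published_versions: version numbers in ascending publish order,
    e.g. ['1', '2', '3']. A negative index counts back from the most
    recent version, so -1 is the newly deployed (latest) version.
    """
    if index < 0:
        return published_versions[index]
    return str(index)


versions = ["1", "2", "3"]
print(resolve_version(versions, -1))  # → '3'
```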

There is also a special shell command that can be used in run command macros/groups that will start the shell in interactive mode. This is useful for using run command macros/groups for pre-configuration during startup of the interactive shell. Building on the previous example,

run:
  deploy-prod:
  - select function *prod
  - push --description="($CI_COMMIT_SHORT_SHA): $CI_COMMIT_TITLE"
  - alias test -1
  devel:
  - select * *devel
  - bundle
  - shell
targets:
- kind: function
  names:
  - foo-prod
  - foo-devel
- kind: layer
  names:
  - bar-devel
  - bar-prod

here we've added a devel run command macro/group that selects the devel function and layer and bundles them without deploying. Once that completes, the shell command kicks off the interactive session and asks for user input. The benefit of this particular run command macro/group is that it selects the development targets and pre-builds them, caching the dependencies for the shell user while they continue to develop and deploy source code to the function.

Shared Dependencies

It is possible to share dependencies across targets. This is useful if the dependencies are the same but other configurations differ. The configuration will look something like this:

dependencies:
  # Each shared dependency must be named, but the name can be any valid yaml key that
  # you want.
  shared_by_my_foo_and_bar:
  - kind: pip
    file: requirements.functions.txt
  shared_by_others:
  - kind: pip
    file: requirements.layer.txt
    
targets:
- kind: function
  names:
  - foo-prod
  - foo-devel
  timeout: 30s
  memory: 256
  dependencies: shared_by_my_foo_and_bar

- kind: function
  names:
  - bar-prod
  - bar-devel
  timeout: 500s
  memory: 2048
  dependencies: shared_by_my_foo_and_bar

- kind: function
  names:
  - baz-prod
  - baz-devel
  timeout: 10s
  memory: 128
  dependencies: shared_by_others

- kind: layer
  names:
  - spam-prod
  - spam-devel
  dependencies: shared_by_others

Shared dependencies are installed once and reused by each target configured to use them. Each named shared dependency has the same structure and available options as a regular target dependencies definition.
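The indirection can be pictured as a simple lookup: a string value under a target's dependencies refers to a named group in the top-level dependencies mapping, while a list is an inline definition. This is a sketch of that resolution (resolve_dependencies is a hypothetical helper, not reviser's internals):

```python
def resolve_dependencies(target, shared):
    """Resolve a target's dependencies entry against shared definitions.

    target: a target mapping from lambda.yaml.
    shared: the top-level `dependencies` mapping of named groups.
    """
    deps = target.get("dependencies", [])
    if isinstance(deps, str):
        # A string refers to a named shared dependency group.
        return shared[deps]
    return deps


shared = {
    "shared_by_others": [{"kind": "pip", "file": "requirements.layer.txt"}],
}
target = {"kind": "layer", "name": "spam-prod", "dependencies": "shared_by_others"}
print(resolve_dependencies(target, shared))
# → [{'kind': 'pip', 'file': 'requirements.layer.txt'}]
```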

Local Execution

When running reviser in your current environment instead of launching the shell within a new container, use the reviser-shell command. This is the local version of the CLI, meant to be run within a suitable container that mimics the Lambda runtime environment. It is merely a change in entrypoint and has all the shell functionality described for the reviser command above.

Also, to run reviser-shell successfully, you must install the extra shell dependencies:

$ pip install reviser[shell]

Without the shell extras installed, reviser-shell will fail. This is also how you would use reviser in a containerized CI environment.
