
An orchestration tool for Terraform

Project description

terraform-worker

terraform-worker is a command line tool for pipelining terraform operations while sharing state between them. The worker consumes a YAML configuration file, which is broken up into two sections: definitions (which are really just top-level modules) and sub-modules. The definitions are put into a worker config in order, along with the terraform variables and remote state variables. Following is a sample configuration file and command:

./worker.yaml

terraform:
  providers:
    aws:
      vars:
        region: {{ aws-region }}
        version: "~> 2.61"

  # global level variables
  terraform_vars:
    region: {{ aws-region }}
    environment: dev

  definitions:
    # Either setup a VPC and resources, or deploy into an existing one
    network:
      path: /definitions/aws/network-existing

    database:
      path: /definitions/aws/rds
% worker --aws-profile default --backend s3 terraform --show-output example1

NOTE: When adding a provider from a non-HashiCorp source, use a source field, as follows (the source field is only valid for Terraform 0.13+ and is not emitted when using 0.12):

providers:
...
  kubectl:
    vars:
      version: "~> 1.9"
    source: "gavinbunney/kubectl"

Development

 # virtualenv setup stuff... and then:
 % pip install poetry && make init
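
If you need a virtual environment first, a minimal sketch using the standard library venv module (one option, not a project requirement):

 % python3 -m venv .venv
 % source .venv/bin/activate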

Releasing

Publishing a release to PyPI is done locally through Poetry. Instructions on how to configure credentials for Poetry can be found in the Poetry documentation.
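
For example, if you publish with an API token, the credential setup is a single Poetry command (the token value is a placeholder):

 % poetry config pypi-token.pypi <your-pypi-token>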

Bump the version of the worker and commit the change:

 % poetry version <semver_version_number>

Build and publish the package to PyPI:

 % poetry publish --build

Configuration

A project is configured through a worker config, a YAML file that specifies the definitions, inputs, outputs, providers, and all other necessary configuration. The worker config is what specifies how state is shared among your definitions. The config supports Jinja templating, which can be used to conditionally pass state or to pass in environment variables through the command line via the --config-var option.

./worker.yaml

terraform:
  providers:
    aws:
      vars:
        region: {{ aws-region }}
        version: "~> 2.61"

  # global level variables
  terraform_vars:
    region: {{ aws-region }}
    environment: dev

  definitions:
    # Either setup a VPC and resources, or deploy into an existing one
    network:
      path: /definitions/aws/network-existing

    database:
      path: /definitions/aws/rds
      remote_vars:
        subnet: network.outputs.subnet_id

In this config, the worker manages two separate terraform modules, a network and a database definition, and shares an output from the network definition with the database definition. This is made available inside the database definition through the local.subnet value.
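
As a rough sketch, the database definition could then reference that value like any other local. The aws_db_subnet_group resource below is purely illustrative; only the local.subnet reference comes from the worker:

resource "aws_db_subnet_group" "example" {
  name       = "example-db-subnets"
  subnet_ids = [local.subnet]  # populated by the worker from network.outputs.subnet_id
}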

At runtime, aws-region is replaced with the value of --aws-region passed on the command line.
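
For example, the earlier command could supply the region like this (us-west-2 is only an illustrative value; check worker --help for the exact options and their placement):

 % worker --aws-profile default --aws-region us-west-2 --backend s3 terraform example1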

Troubleshooting

Running the worker with the --no-clean option will keep the terraform files that the worker generates. You can use these generated files to run terraform commands directly for that definition. This is useful when you need to do things like troubleshoot or delete items from the remote state. After running the worker with --no-clean, cd into the definition directory where terraform-worker generates its tf files. The worker output tells you where it puts these files, for example:

...
building deployment mfaitest
using temporary Directory: /tmp/tmpew44uopp
...

In order to troubleshoot this definition, you would cd /tmp/tmpew44uopp/definitions/my_definition/ and then run any terraform commands from there.
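
For instance, to inspect or remove items from the remote state using the generated files (the resource address is a placeholder; these are ordinary terraform commands, not worker ones):

 % cd /tmp/tmpew44uopp/definitions/my_definition/
 % terraform state list
 % terraform state rm <address-of-unwanted-resource>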

Background

The terraform worker started as a weekend project to run terraform against a series of definitions (modules). The idea was that the configuration vars, provider configuration, remote state, and variables from remote state would all be dynamically generated. The purpose was building kubernetes deployments, and allowing all of the configuration information to be stored either as yaml files in GitHub, or to have the worker configuration generated by an API which stored all of the deployment configurations in a database.

