
An orchestration tool for Terraform


terraform-worker

terraform-worker is a command-line tool for pipelining Terraform operations while sharing state between them. The worker consumes a YAML configuration file, which is broken up into two sections: definitions (really just top-level modules) and sub-modules. The definitions are put into a worker config in order, along with the Terraform variables and remote state variables. Following is a sample configuration file and command:

./worker.yaml

terraform:
  providers:
    aws:
      vars:
        region: {{ aws_region }}
        version: "~> 2.61.0"

  # global level variables
  terraform_vars:
    region: {{ aws_region }}
    environment: dev

  definitions:
    # Either setup a VPC and resources, or deploy into an existing one
    network:
      path: /definitions/aws/network-existing

    database:
      path: /definitions/aws/rds

 % worker --aws-profile default --backend s3 terraform example1

NOTE: When adding a provider from a non-HashiCorp source, use a source field, as follows (the source field is only valid for Terraform 0.13 and later, and is not emitted when using 0.12):

providers:
...
  kubectl:
    vars:
      version: "~> 1.9"
    source: "gavinbunney/kubectl"

In addition to using command-line options, worker options can be specified in a worker_options section of the worker configuration file.

terraform:
  worker_options:
    backend: s3
    backend_prefix: tfstate
    terraform_bin: /home/user/bin/terraform

  providers:
...

terraform-worker requires a configuration file. By default, it looks for a file named "worker.yaml" in the current working directory. Together with the worker_options listed above, it is possible to specify all options either in the environment or in the configuration file and simply call the worker command by itself.

 % env | grep AWS
 AWS_ACCESS_KEY_ID=somekey
 AWS_SECRET_ACCESS_KEY=somesecret
 % head ./worker.yaml
terraform:
  worker_options:
    backend: s3
    backend_prefix: tfstate
    terraform_bin: /home/user/bin/terraform
 % worker terraform my-deploy

Development

 # virtualenv setup stuff... and then:
 % pip install poetry && make init

Releasing

Publishing a release to PyPI is done locally through Poetry. Instructions on how to configure credentials for Poetry can be found in the Poetry documentation.

Bump the version of the worker and commit the change:

 % poetry version <semver_version_number>

Build and publish the package to PyPI:

 % poetry publish --build

Configuration

A project is configured through a worker config: a YAML, JSON, or HCL2 file that specifies the definitions, inputs, outputs, providers, and all other necessary configuration. The worker config is what specifies how state is shared among your definitions. The config supports Jinja templating, which can be used to conditionally pass state or to pass in variables from the command line via the --config-var option.
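For example, Jinja templating can toggle an entire definition on or off from the command line. A minimal sketch, assuming --config-var accepts key=value pairs; the create_database variable name is hypothetical, not part of the tool:

terraform:
  definitions:
    network:
      path: /definitions/aws/network-existing
{% if create_database == "true" %}
    database:
      path: /definitions/aws/rds
{% endif %}

 % worker --config-var create_database=true --backend s3 terraform example1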

./worker.yaml

terraform:
  providers:
    aws:
      vars:
        region: {{ aws_region }}
        version: "~> 2.61.1"

  # global level variables
  terraform_vars:
    region: {{ aws_region }}
    environment: dev

  definitions:
    # Either setup a VPC and resources, or deploy into an existing one
    network:
      path: /definitions/aws/network-existing

    database:
      path: /definitions/aws/rds
      remote_vars:
        subnet: network.outputs.subnet_id

The same configuration expressed as JSON:

{
    "terraform": {
        "providers": {
            "aws": {
                "vars": {
                    "region": "{{ aws_region }}",
                    "version": "~> 2.61"
                }
            }
        },
        "terraform_vars": {
            "region": "{{ aws_region }}",
            "environment": "dev"
        },
        "definitions": {
            "network": {
                "path": "/definitions/aws/network-existing"
            },
            "database": {
                "path": "/definitions/aws/rds",
                "remote_vars": {
                    "subnet": "network.outputs.subnet_id"
                }
            }
        }
    }
}

The same configuration expressed as HCL2:

terraform {
  providers {
    aws = {
      vars = {
        region = "{{ aws_region }}"
        version = "2.63.0"
      }
    }
  }

  terraform_vars {
    environment = "dev"
    region = "{{ aws_region }}"
  }

  definitions {
    network = {
      path = "/definitions/aws/network-existing"
    }

    database = {
      path = "/definitions/aws/rds"

      remote_vars = {
        subnet = "network.outputs.subnet_id"
      }
    }
  }
}

In this config, the worker manages two separate Terraform modules, a network and a database definition, and shares an output from the network definition with the database definition. The output is made available inside the database definition through the local.subnet value.

aws_region is substituted at runtime with the value of --aws-region passed on the command line.
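Inside the database definition's Terraform code, the shared value can then be referenced like any other local. A minimal sketch (the aws_db_subnet_group resource here is illustrative only, not taken from the project's definitions):

resource "aws_db_subnet_group" "database" {
  name       = "database"
  subnet_ids = [local.subnet]
}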

Troubleshooting

Running the worker with the --no-clean option keeps the Terraform files that the worker generates. You can use these generated files to run terraform commands directly for that definition. This is useful when you need to do things like troubleshoot or delete items from the remote state. After running the worker with --no-clean, cd into the definition directory where terraform-worker generates its tf files. The worker reports where it puts these, for example:

...
building deployment mfaitest
using temporary Directory: /tmp/tmpew44uopp
...

In order to troubleshoot this definition, you would cd /tmp/tmpew44uopp/definitions/my_definition/ and then run any terraform commands from there.
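The steps above can be sketched as a session. my_definition stands in for your actual definition name, and the state commands are ordinary Terraform CLI commands, not part of the worker:

 % cd /tmp/tmpew44uopp/definitions/my_definition/
 % terraform init
 % terraform state list
 % terraform state rm <resource_address>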

Background

The terraform worker started as a weekend project to run Terraform against a series of definitions (modules). The idea was that the configuration vars, provider configuration, remote state, and variables from remote state would all be dynamically generated. The purpose was building Kubernetes deployments, allowing all of the configuration information to be stored either as YAML files in GitHub, or to have the worker configuration generated by an API that stored all of the deployment configurations in a database.

Documentation

Documentation uses the Sphinx documentation framework.

To build HTML documentation:

% cd docs
% make clean && make html

The documentation can be viewed locally by opening ./docs/build/index.html in a browser.
