Application deployment on CoreOS clusters using fleetd and Consul
Houston installs as a command-line application and is meant to be used for automated deployment of Dockerized application stacks.
In a single run, a Houston deployment can place files onto the host OS, deploy dependency containers, confirm a container's startup using Consul, and tear down previous container versions.
Houston may be installed via the Python package index with the tool of your choice:
pip install houston
Documentation is available on ReadTheDocs.
There is also an example configuration directory.
- Global deployments place a single list of units intended to be shared across all or a majority of CoreOS instances.
- Standalone deployments are like global deployments but allow for more targeted deployments, with file archives deployed first.
- Service deployments allow for the deployment of a single unit along with the shared units it depends on.
Example of deploying a full stack application:
$ houston -c config -e test-us-east-1 example 7b7d061b
INFO Deploying <shared unit>
INFO Deploying <shared unit>
INFO Deploying <shared unit>
INFO Deploying <service unit>
INFO <service unit> has started
INFO Validated service is running with Consul
INFO Destroying <previous version unit>
INFO Deployment of example 7b7d061b and its dependencies successful.
INFO Eagle, looking great. You're Go.
When executed, houston creates a tarball of files from the service’s file manifest and uploads it to Consul’s KV database. It then deploys a dynamically created systemd unit to fleet, which pulls the tarball from Consul and extracts the files to the CoreOS filesystem.
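The tarball step can be sketched as follows. `build_archive` is a hypothetical helper, not houston's actual code: it packs a file manifest into an in-memory gzipped tarball, the kind of payload that would then be written to a Consul KV key for each host to fetch.

```python
import io
import tarfile


def build_archive(manifest):
    """Pack {archive_path: local_path} entries into an in-memory
    gzipped tarball, standing in for houston's file-manifest step."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        for archive_path, local_path in sorted(manifest.items()):
            tar.add(local_path, arcname=archive_path)
    # In houston, bytes like these end up in Consul's KV store, and a
    # generated systemd unit pulls and extracts them on each host.
    return buf.getvalue()
```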
Next, it iterates through the dependency containers specified in the manifest, submitting and starting each unit and waiting until it is listed as active in systemd on all nodes before moving on to the next.
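That deploy-and-wait loop can be sketched as below. `is_active` is a stand-in for whatever fleet/systemd state query houston actually performs; the polling structure is the point.

```python
import time


def wait_until_active(unit, is_active, timeout=300, interval=2,
                      sleep=time.sleep):
    """Poll until is_active(unit) reports the unit as active on all
    nodes, or raise once the timeout is exceeded."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_active(unit):
            return True
        sleep(interval)
    raise TimeoutError(f"{unit} did not become active within {timeout}s")
```

Deploying the next dependency only after this call returns preserves the ordering the manifest implies.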
Once the dependency containers have started, it starts the example service, waiting for systemd to report it as active. It then queries Consul for the version of the service that has started, ensuring that it is running on all the nodes that fleet reports having deployed it to.
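The validation step amounts to a set comparison between the nodes fleet scheduled and the versions those nodes registered in Consul. A minimal sketch, with hypothetical names (houston's internals are not shown here):

```python
def validate_deployment(expected_nodes, consul_versions, version):
    """Check that every node fleet scheduled the service on reports
    the expected version via Consul.

    consul_versions maps node name -> service version that node
    registered. Returns (ok, set of nodes missing or mismatched).
    """
    missing = {node for node in expected_nodes
               if consul_versions.get(node) != version}
    return (not missing), missing
```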
Once a deployment has been confirmed, it looks at all units submitted to fleet, checking whether container versions other than the ones it just deployed are still running. If so, it destroys those containers with fleet.
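Selecting which units to destroy can be sketched as filtering the running unit list against the freshly deployed set; `stale_units` is a hypothetical helper assuming fleet unit names of the `name@version` form the document describes.

```python
def stale_units(running, deployed):
    """Return running fleet units that share a name with a freshly
    deployed unit (same 'name@' prefix) but are not in the deployed
    set, i.e. previous versions that should be destroyed."""
    deployed_prefixes = {u.partition("@")[0] for u in deployed}
    return [u for u in running
            if u.partition("@")[0] in deployed_prefixes
            and u not in deployed]
```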
Finally, it checks whether any other versions of the service's file archive exist in Consul's KV store, removing them if so.
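Pruning old archives is again a filtering problem. The key layout below is an assumption for illustration only (houston's actual KV naming scheme is not documented here):

```python
def archive_keys_to_prune(kv_keys, service, current_version):
    """Given Consul KV keys of the (assumed) form
    'houston/files/<service>/<version>', return the keys holding
    file archives for this service other than the current version."""
    prefix = f"houston/files/{service}/"
    return [k for k in kv_keys
            if k.startswith(prefix)
            and k[len(prefix):] != current_version]
```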
One of the more interesting parts for managing stack deployment is the namespacing of the shared stack elements in fleet, so that updating one stack does not impact another. For example, in the configuration, a service may be referred to as only pgbouncer:f20fb494, but when deployed it will be prefixed and versioned appropriately as example-pgbouncer@f20fb494 if the service name is example.
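The transform described above can be sketched directly from the example in the text; the function name is hypothetical, but the input and output forms are taken from the document.

```python
def namespace_unit(service, unit_ref):
    """Turn a shared unit reference like 'pgbouncer:f20fb494' into a
    service-scoped fleet unit name like 'example-pgbouncer@f20fb494',
    so one stack's shared units never collide with another stack's."""
    name, _, version = unit_ref.partition(":")
    return f"{service}-{name}@{version}"
```

Because the service name is baked into the unit name, two stacks can run the same shared container at different versions side by side.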
Note: Global file deployments happen after the unit files are deployed so that Consul can be up and running prior to the placement of the global files.