Create a Swarm Cluster on DigitalOcean Using Terraform Wrapped in Python

Swarm Terraform using Python

This is a Python package that wraps the Terraform configuration needed to create a Docker Swarm cluster on DigitalOcean. You do not need to know the Terraform configuration language (a.k.a. HCL), just Python.

However, you still delegate the creation of the resources to Terraform. The best of both worlds :)

Get Started

1 - Add swarm_tf to your project:

pip install swarm_tf==0.2.0

2 - Create your Cluster:

import sys

from terraobject import Terraobject
from terrascript import provider, function, output
from terrascript.digitalocean.d import digitalocean_ssh_key as data_digitalocean_ssh_key
from terrascript.digitalocean.r import *  # digitalocean_tag and the other DO resources

from swarm_tf.common import VolumeClaim, get_user_data_script
from swarm_tf.managers import Manager, ManagerVariables
from swarm_tf.workers import Worker, WorkerVariables

# Setup
do_token = "DIGITAL OCEAN TOKEN"  # replace with your DigitalOcean API token

# Common
domain = "swarm.example.com"
region = "nyc3"
ssh_key = "~/.ssh/id_rsa"  # private key used to SSH into the droplets during provisioning
user_data = get_user_data_script()

o = Terraobject()  # bundles the terrascript config with a shared dict used to pass objects between steps

o.terrascript.add(provider("digitalocean", token=do_token))

# ---------------------------------------------
# Get Existing SSH Key from DigitalOcean
# ---------------------------------------------
# Look up an SSH public key named "id_rsa" that is already registered in your account
sshkey = data_digitalocean_ssh_key("mysshkey", name="id_rsa")
o.terrascript.add(sshkey)
o.shared['sshkey'] = sshkey

# ---------------------------------------------
# Creating Tags
# ---------------------------------------------
cluster_tag = digitalocean_tag("cluster", name="cluster")
manager_tag = digitalocean_tag("manager", name="manager")
worker_tag = digitalocean_tag("worker", name="worker")
o.terrascript.add(cluster_tag)
o.terrascript.add(manager_tag)
o.terrascript.add(worker_tag)

# ---------------------------------------------
# Creating Swarm Manager
# ---------------------------------------------
managerVar = ManagerVariables()
managerVar.image = "ubuntu-18-04-x64"
managerVar.size = "s-1vcpu-1gb"
managerVar.name = "manager"
managerVar.region = region
managerVar.domain = domain
managerVar.total_instances = 1
managerVar.user_data = user_data
managerVar.tags = [cluster_tag.id, manager_tag.id]
managerVar.remote_api_ca = None
managerVar.remote_api_key = None
managerVar.remote_api_certificate = None
managerVar.ssh_keys = [sshkey.id]
managerVar.provision_ssh_key = ssh_key
managerVar.provision_user = "root"
managerVar.connection_timeout = "2m"

manager = Manager(o, managerVar)
manager.create_managers()

# ---------------------------------------------
# Creating Worker Nodes
# ---------------------------------------------
workerVar = WorkerVariables()
workerVar.image = "ubuntu-18-04-x64"
workerVar.size = "s-1vcpu-1gb"
workerVar.name = "worker"
workerVar.region = region
workerVar.domain = domain
workerVar.total_instances = 2
workerVar.user_data = user_data
workerVar.tags = [cluster_tag.id, worker_tag.id]
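# create_managers() above stored the manager droplets and the swarm join
# tokens in o.shared; the workers use them to find the manager and join the swarm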
workerVar.manager_private_ip = o.shared["manager_nodes"][0].ipv4_address_private
workerVar.join_token = function.lookup(o.shared["swarm_tokens"].result, "worker", "")
workerVar.ssh_keys = [sshkey.id]
workerVar.provision_ssh_key = ssh_key
workerVar.provision_user = "root"
workerVar.persistent_volumes = None
workerVar.connection_timeout = "2m"

worker = Worker(o, workerVar)
worker.create_workers()

# ---------------------------------------------
# Creating Persistent Nodes
# ---------------------------------------------
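# Reuse the worker variables above, overriding only the fields that differ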
workerVar.name = "persistent"
workerVar.persistent_volumes = [VolumeClaim(o, region, "volume-nyc3-01")]
workerVar.total_instances = 1
persistent_worker = Worker(o, workerVar)
persistent_worker.create_workers()


# ---------------------------------------------
# Outputs
# ---------------------------------------------
o.terrascript.add(output("manager_ips",
                         value=[value.ipv4_address for value in o.shared["manager_nodes"]],
                         description="The manager nodes public ipv4 addresses"))

o.terrascript.add(output("manager_ips_private",
                         value=[value.ipv4_address_private for value in o.shared["manager_nodes"]],
                         description="The manager nodes private ipv4 addresses"))

o.terrascript.add(output("worker_ips",
                         value=[value.ipv4_address for value in o.shared["worker_nodes"]],
                         description="The worker nodes public ipv4 addresses"))

o.terrascript.add(output("worker_ips_private",
                         value=[value.ipv4_address_private for value in o.shared["worker_nodes"]],
                         description="The worker nodes private ipv4 addresses"))

o.terrascript.add(output("manager_token",
                         value=function.lookup(o.shared["swarm_tokens"].result, "manager", ""),
                         description="The Docker Swarm manager join token",
                         sensitive=True))

o.terrascript.add(output("worker_token",
                         value=function.lookup(o.shared["swarm_tokens"].result, "worker", ""),
                         description="The Docker Swarm worker join token",
                         sensitive=True))

o.terrascript.add(output("worker_ids",
                         value=[value.id for value in o.shared["worker_nodes"]]))

o.terrascript.add(output("manager_ids",
                         value=[value.id for value in o.shared["manager_nodes"]]))

if len(sys.argv) == 2 and sys.argv[1] == "label":
    for obj in o.shared["__variables"]:
        for i in range(1, obj["instances"]+1):
            print("docker node update --label-add type={0} {0}_{1:02d}".format(obj["type"], i))
else:
    print(o.terrascript.dump())
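
Run the script to emit the Terraform JSON, or pass label to print the docker node update commands that tag each node with its type:

python main.py
python main.py label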

Volumes

It is possible to use the VolumeClaim class to attach an existing volume to a droplet, or to create a new one. The volume is mounted at /data on the host, so you can map your stack or service to that folder.
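
For example, once the nodes are labeled (see the label helper above), a service can bind-mount the volume through /data. The service name, image, and target path below are illustrative:

docker service create --name db \
  --constraint 'node.labels.type == persistent' \
  --mount type=bind,source=/data,target=/var/lib/postgresql/data \
  postgres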

Terraform Plan & Apply

Instead of running terraform directly, you can use the terrascript wrapper, which runs your Python script, saves the resulting Terraform JSON, and then executes the terraform action you want.

For example, to run terraform plan:

terrascript plan -out my.tfplan

and to apply it:

terrascript apply "my.tfplan"

Note: your main script must be named main.py and must be in the folder from which you run terrascript.

Deploying Services and Stacks

Deploys can only be executed on a manager node. We provide a script that connects to the manager, so you can deploy your stacks and services from your local machine. Execute these commands:

connect_to_manager -c
export DOCKER_HOST=tcp://localhost:2377
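
While connected, deploy as usual; for example, with a hypothetical docker-compose.yml:

docker stack deploy -c docker-compose.yml mystack
docker service ls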

To disconnect just execute:

connect_to_manager -d
unset DOCKER_HOST

References:

swarm_tf uses python-terrascript under the hood. Refer to that project for more information about it.
