
Ṣeto

A Docker Swarm Deployment Manager

Ṣeto is a command-line tool designed to assist with setting up and managing shared storage volumes using NFS or GlusterFS drivers. It simplifies the process of configuring stack-based deployments, setting up manager and replica nodes, creating and syncing shared volumes, and mounting and unmounting these volumes.

Features

  • Compose Command: Resolves Docker Compose files.
  • Setup Command: Sets up manager and replica nodes.
  • Create Volumes Command: Creates and syncs shared volumes across nodes.
  • Mount Volumes Command: Mounts shared volumes on specified nodes.
  • Unmount Volumes Command: Unmounts shared volumes from specified nodes.

Usage

The main entry point for Ṣeto is the seto command. Below is a detailed description of each subcommand and its options.

Global Options

These options are applicable to all subcommands:

  • --stack: Required. Specifies the stack name.
  • --driver: Required. Specifies the driver URI to use. Can be nfs://username:password@hostname or gluster://username:password@hostname.
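
For example, the same stack can target either storage backend simply by switching the URI scheme (the hostnames and credentials below are placeholders):

```
seto --stack my-stack --driver nfs://admin:secret@storage.example.com compose
seto --stack my-stack --driver gluster://admin:secret@storage.example.com compose
```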

Subcommands

1. Compose Command

Resolves Docker Compose files.

seto --stack <stack-name> --driver <driver-uri> compose

Example:

seto --stack my-stack --driver nfs://user:pass@host compose

2. Setup Command

Sets up the manager and replica nodes.

seto --stack <stack-name> --driver <driver-uri> setup --replica <replica-connection-strings>

  • --replica: Required. One or more nodes to set up, each in the format username:password@hostname.

Example:

seto --stack my-stack --driver nfs://user:pass@host setup --replica user:pass@replica1 user:pass@replica2

3. Create Volumes Command

Creates and syncs shared volumes across nodes.

seto --stack <stack-name> --driver <driver-uri> create-volumes --replica <replica-connection-strings> [--force]

  • --replica: Required. Specifies the nodes where volumes will be created.
  • --force: Optional. Forces volume data synchronization.

Example:

seto --stack my-stack --driver nfs://user:pass@host create-volumes --replica user:pass@replica1 user:pass@replica2 --force

4. Mount Volumes Command

Mounts shared volumes on specified nodes.

seto --stack <stack-name> --driver <driver-uri> mount-volumes --replica <replica-connection-strings>

  • --replica: Required. Specifies the nodes where volumes will be mounted.

Example:

seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1 user:pass@replica2

5. Unmount Volumes Command

Unmounts shared volumes from specified nodes.

seto --stack <stack-name> --driver <driver-uri> unmount-volumes --replica <replica-connection-strings>

  • --replica: Required. Specifies the nodes where volumes will be unmounted.

Example:

seto --stack my-stack --driver nfs://user:pass@host unmount-volumes --replica user:pass@replica1 user:pass@replica2

Example Workflow

  1. Setup Manager and Replica Nodes

     seto --stack my-stack --driver nfs://user:pass@host setup --replica user:pass@replica1 user:pass@replica2

  2. Create Volumes

     seto --stack my-stack --driver nfs://user:pass@host create-volumes --replica user:pass@replica1 user:pass@replica2 --force

  3. Mount Volumes

     seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1 user:pass@replica2

  4. Unmount Volumes

     seto --stack my-stack --driver nfs://user:pass@host unmount-volumes --replica user:pass@replica1 user:pass@replica2

  5. Deploy Stack

     seto --stack my-stack --manager nfs://user@manager-host deploy
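
Because every step repeats the same --stack and --driver options, the workflow above can be wrapped in a small helper script. This is a hypothetical convenience sketch, not part of Ṣeto itself; the helper echoes the assembled command so it can be inspected (drop the echo to actually execute it):

```shell
# Shared options defined once (placeholder values).
STACK=my-stack
DRIVER=nfs://user:pass@host
REPLICAS="user:pass@replica1 user:pass@replica2"

# Build (and here just print) a full seto invocation.
seto_cmd() {
  echo seto --stack "$STACK" --driver "$DRIVER" "$@"
}

seto_cmd setup --replica $REPLICAS
seto_cmd create-volumes --replica $REPLICAS --force
seto_cmd mount-volumes --replica $REPLICAS
```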

Error Handling

The tool includes basic error handling to catch and report errors related to argument parsing and execution. If an error occurs, a message will be printed, and the tool will exit with a non-zero status code.
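
Since the tool exits with a non-zero status code on failure, calling scripts can branch on that status. Below is a minimal retry wrapper sketched around the exit-status logic only; the seto invocation in the comment is illustrative:

```shell
# Run a command; if it fails, report the failure and try exactly once more.
# Returns the exit status of the last attempt.
run_with_retry() {
  "$@" && return 0
  echo "first attempt failed, retrying..." >&2
  "$@"
}

# Illustrative usage:
#   run_with_retry seto --stack my-stack --driver nfs://user:pass@host \
#     mount-volumes --replica user:pass@replica1
```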

Environment Setup

  1. See the cloud-init.yaml file for the prerequisites to install.

  2. Install Devbox

  3. Install direnv with your OS package manager

  4. Hook direnv into your shell

  5. Load environment

    At the top-level of your project run:

    direnv allow
    

    The next time you launch your terminal and enter the top-level of your project, direnv will check for changes and automatically load the Devbox environment.

  6. Install dependencies

    make install
    
  7. Start environment

    make shell
    

    This starts a preconfigured tmux session; see the .tmuxinator.yml file.
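
Steps 4–5 assume the project root contains an .envrc file for direnv to approve. For a Devbox-based project this is typically a one-liner; the exact contents depend on your Devbox version, so regenerate it with `devbox generate direnv` rather than copying this sketch blindly:

```shell
# .envrc (sketch) — lets direnv load the Devbox environment on cd
eval "$(devbox generate direnv --print-envrc)"
```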

Makefile Targets

Please see the Makefile for the full list of targets.

Docker Swarm Setup

To set up Docker Swarm, you'll first need to ensure you have Docker installed on your machines. Then, you can initialize Docker Swarm on one of your machines to act as the manager node, and join other machines as worker nodes. Below are the general steps to set up Docker Swarm:

  1. Install Docker

    Make sure Docker is installed on all machines that will participate in the Swarm cluster. You can follow the official Docker installation guide for your operating system.

  2. Choose Manager Node

    Select one of your machines to act as the manager node. This machine will be responsible for managing the Swarm cluster.

  3. Initialize Swarm

    SSH into the chosen manager node and run the following command to initialize Docker Swarm:

    docker swarm init --advertise-addr <MANAGER_IP>
    

    Replace <MANAGER_IP> with the IP address of the manager node. This command initializes a new Docker Swarm cluster with the manager node.

  4. Join Worker Nodes

    After initializing the Swarm, Docker will output a command to join other nodes to the cluster as worker nodes. Run this command on each machine you want to join as a worker node.

    docker swarm join --token <TOKEN> <MANAGER_IP>:<PORT>
    

    Replace <TOKEN> with the token generated by the docker swarm init command and <MANAGER_IP>:<PORT> with the IP address and port of the manager node.

  5. Verify Swarm Status

    Once all nodes have joined the Swarm, you can verify the status of the Swarm by running the following command on the manager node:

    docker node ls
    

    This command will list all nodes in the Swarm along with their status.
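
If the join command from step 4 is misplaced, the manager can reprint it at any time with `docker swarm join-token worker`, and step 5 then confirms membership. An illustrative session (the IDs, hostnames, addresses, and versions below are made up):

```
$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-... 192.0.2.10:2377

$ docker node ls
ID               HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
abc123xyz *      manager1   Ready    Active         Leader           24.0.7
def456uvw        replica1   Ready    Active                          24.0.7
```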

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at LICENSE.

