A Docker Swarm Deployment Manager

Project description

Ṣeto

Ṣeto is a command-line tool designed to assist with setting up and managing shared storage volumes using NFS or GlusterFS drivers. It simplifies the process of configuring stack-based deployments, setting up manager and replica nodes, creating and syncing shared volumes, and mounting and unmounting these volumes.

Features

  • Compose Command: Resolves Docker Compose files.
  • Setup Command: Sets up manager and replica nodes.
  • Create Volumes Command: Creates and syncs shared volumes across nodes.
  • Mount Volumes Command: Mounts shared volumes on specified nodes.
  • Unmount Volumes Command: Unmounts shared volumes from specified nodes.

Usage

The main entry point for Ṣeto is the seto command. Below is a detailed description of each subcommand and its options.

Global Options

These options are applicable to all subcommands:

  • --stack: Required. Specifies the stack name.
  • --driver: Required. Specifies the driver URI to use. Can be nfs://username:password@hostname or gluster://username:password@hostname.

Subcommands

1. Compose Command

Resolves Docker Compose files.

seto --stack <stack-name> --driver <driver-uri> compose

Example:

seto --stack my-stack --driver nfs://user:pass@host compose

2. Setup Command

Sets up the manager and replica nodes.

seto --stack <stack-name> --driver <driver-uri> setup --replica <replica-connection-strings>
  • --replica: Required. Specifies the nodes to set up in the format username:password@hostname.

Example:

seto --stack my-stack --driver nfs://user:pass@host setup --replica user:pass@replica1 user:pass@replica2

3. Create Volumes Command

Creates and syncs shared volumes across nodes.

seto --stack <stack-name> --driver <driver-uri> create-volumes --replica <replica-connection-strings> [--force]
  • --replica: Required. Specifies the nodes where volumes will be created.
  • --force: Optional. Forces volume data synchronization.

Example:

seto --stack my-stack --driver nfs://user:pass@host create-volumes --replica user:pass@replica1 user:pass@replica2 --force

4. Mount Volumes Command

Mounts shared volumes on specified nodes.

seto --stack <stack-name> --driver <driver-uri> mount-volumes --replica <replica-connection-strings>
  • --replica: Required. Specifies the nodes where volumes will be mounted.

Example:

seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1 user:pass@replica2

5. Unmount Volumes Command

Unmounts shared volumes from specified nodes.

seto --stack <stack-name> --driver <driver-uri> unmount-volumes --replica <replica-connection-strings>
  • --replica: Required. Specifies the nodes where volumes will be unmounted.

Example:

seto --stack my-stack --driver nfs://user:pass@host unmount-volumes --replica user:pass@replica1 user:pass@replica2

Example Workflow

  1. Setup Manager and Replica Nodes

     seto --stack my-stack --driver nfs://user:pass@host setup --replica user:pass@replica1 user:pass@replica2

  2. Create Volumes

     seto --stack my-stack --driver nfs://user:pass@host create-volumes --replica user:pass@replica1 user:pass@replica2 --force

  3. Mount Volumes

     seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1 user:pass@replica2

  4. Unmount Volumes

     seto --stack my-stack --driver nfs://user:pass@host unmount-volumes --replica user:pass@replica1 user:pass@replica2

  5. Deploy Stack

     seto --stack my-stack --manager nfs://user@manager-host deploy
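
The same workflow can also be scripted. The sketch below simply chains the documented commands; STACK, DRIVER, and REPLICAS are illustrative placeholder variables, not seto options.

#!/usr/bin/env bash
set -euo pipefail

STACK=my-stack
DRIVER=nfs://user:pass@host
REPLICAS="user:pass@replica1 user:pass@replica2"

# REPLICAS is intentionally unquoted below so that each connection string
# is passed to --replica as a separate argument.
seto --stack "$STACK" --driver "$DRIVER" setup --replica $REPLICAS
seto --stack "$STACK" --driver "$DRIVER" create-volumes --replica $REPLICAS --force
seto --stack "$STACK" --driver "$DRIVER" mount-volumes --replica $REPLICAS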

Error Handling

The tool includes basic error handling for argument parsing and command execution. If an error occurs, a message is printed and the tool exits with a non-zero status code.
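
Because failures are reported through the exit status, seto can be used in scripts that check for errors. For example (a generic shell pattern, not a seto-specific feature):

if ! seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1; then
    echo "seto mount-volumes failed; see the message above" >&2
    exit 1
fi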

Environment Setup

  1. See the cloud-init.yaml file for the prerequisites to install.

  2. Install Devbox

  3. Install direnv with your OS package manager

  4. Hook direnv into your shell
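
    For example, to hook direnv into Bash, add the following line to your ~/.bashrc (see the direnv documentation for other shells):

    eval "$(direnv hook bash)"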

  5. Load environment

    At the top level of your project, run:

    direnv allow
    

    The next time you launch your terminal and enter the top level of your project, direnv will check for changes and automatically load the Devbox environment.

  6. Install dependencies

    make install
    
  7. Start environment

    make shell
    

    This starts a preconfigured tmux session. Please see the .tmuxinator.yml file.

Makefile Targets

Please see the Makefile for the full list of targets.

Docker Swarm Setup

To set up Docker Swarm, you'll first need to ensure you have Docker installed on your machines. Then, you can initialize Docker Swarm on one of your machines to act as the manager node, and join other machines as worker nodes. Below are the general steps to set up Docker Swarm:

  1. Install Docker

    Make sure Docker is installed on all machines that will participate in the Swarm cluster. You can follow the official Docker installation guide for your operating system.

  2. Choose Manager Node

    Select one of your machines to act as the manager node. This machine will be responsible for managing the Swarm cluster.

  3. Initialize Swarm

    SSH into the chosen manager node and run the following command to initialize Docker Swarm:

    docker swarm init --advertise-addr <MANAGER_IP>
    

    Replace <MANAGER_IP> with the IP address of the manager node. This command initializes a new Swarm cluster with this machine as its manager.

  4. Join Worker Nodes

    After initializing the Swarm, Docker will output a command to join other nodes to the cluster as worker nodes. Run this command on each machine you want to join as a worker node.

    docker swarm join --token <TOKEN> <MANAGER_IP>:<PORT>
    

    Replace <TOKEN> with the token generated by the docker swarm init command and <MANAGER_IP>:<PORT> with the IP address and port of the manager node.
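
    If you need the join command again later, you can regenerate it on the manager node with:

    docker swarm join-token worker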

  5. Verify Swarm Status

    Once all nodes have joined the Swarm, you can verify the status of the Swarm by running the following command on the manager node:

    docker node ls
    

    This command will list all nodes in the Swarm along with their status.
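
    For reference, the output looks roughly like this (IDs, hostnames, and exact columns vary by Docker version; the asterisk marks the node you are connected to):

    ID                         HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
    abcd1234manager0000000 *   manager1   Ready    Active         Leader
    efgh5678worker00000000     worker1    Ready    Active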

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at LICENSE.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

seto-2.2.2.tar.gz (25.5 kB)

Uploaded Source

Built Distribution

seto-2.2.2-py3-none-any.whl (40.0 kB)

Uploaded Python 3

File details

Details for the file seto-2.2.2.tar.gz.

File metadata

  • Download URL: seto-2.2.2.tar.gz
  • Upload date:
  • Size: 25.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.11.9

File hashes

Hashes for seto-2.2.2.tar.gz

  • SHA256: f0d230cdaff73dce44a1a1bbf338a2661c225852a122cf5bf0b76d3c79f5147f
  • MD5: 8935e0dbc9134a5cbc9e4dbceabc8d45
  • BLAKE2b-256: 5a3a3c80ead935d89f2185c999b5c7bc90e42104fb561438d3c196a727230f09
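
For example, a downloaded sdist can be checked against the SHA256 digest above with sha256sum (note the two spaces between digest and filename):

echo "f0d230cdaff73dce44a1a1bbf338a2661c225852a122cf5bf0b76d3c79f5147f  seto-2.2.2.tar.gz" | sha256sum --check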


File details

Details for the file seto-2.2.2-py3-none-any.whl.

File metadata

  • Download URL: seto-2.2.2-py3-none-any.whl
  • Upload date:
  • Size: 40.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.11.9

File hashes

Hashes for seto-2.2.2-py3-none-any.whl

  • SHA256: d9e92317c3257907511e1407ab5a914308ad5496ff556d12d27831b1f02030a8
  • MD5: 22e8370d3ad4c8266778d132f245e781
  • BLAKE2b-256: e4d1bf088c1eee86f64e99bc068be7f7cc3a5091d317a9ac6649c6a77adaed8a

