A Docker Swarm Deployment Manager

Ṣeto

Ṣeto is a command-line tool designed to assist with setting up and managing shared storage volumes using the NFS driver. It simplifies configuring stack-based deployments, setting up manager and replica nodes, creating and syncing shared volumes, and mounting and unmounting those volumes.

Features

  • Compose Command: Resolves Docker Compose files.
  • Setup Command: Sets up manager and replica nodes.
  • Create Volumes Command: Creates and syncs shared volumes across nodes.
  • Mount Volumes Command: Mounts shared volumes on specified nodes.
  • Unmount Volumes Command: Unmounts shared volumes from specified nodes.

Usage

The main entry point for Ṣeto is the seto command. Below is a detailed description of each subcommand and its options.

Global Options

These options are applicable to all subcommands:

  • --stack: Required. Specifies the stack name.
  • --driver: Required. Specifies the driver URI to use. Example: nfs://username:password@hostname
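Since the driver URI follows the standard URL form, it can be inspected with stdlib tools. A minimal sketch in Python (the helper name and returned fields are illustrative, not part of Ṣeto's API):

```python
from urllib.parse import urlsplit

def parse_driver_uri(uri: str) -> dict:
    """Split a driver URI like nfs://user:pass@host into its parts.

    Hypothetical helper for illustration; Ṣeto's internal parsing may differ.
    """
    parts = urlsplit(uri)
    return {
        "scheme": parts.scheme,      # storage driver, e.g. "nfs"
        "username": parts.username,  # login user on the storage host
        "password": parts.password,  # login password
        "hostname": parts.hostname,  # storage server address
    }

print(parse_driver_uri("nfs://user:pass@host"))
```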

Subcommands

1. Compose Command

Resolves Docker Compose files.

seto --stack <stack-name> --driver <driver-uri> compose

Example:

seto --stack my-stack --driver nfs://user:pass@host compose

2. Setup Command

Sets up the manager and replica nodes.

seto --stack <stack-name> --driver <driver-uri> setup --replica <replica-connection-strings>

  • --replica: Required. Specifies the nodes to set up, in the format username:password@hostname.

Example:

seto --stack my-stack --driver nfs://user:pass@host setup --replica user:pass@replica1 user:pass@replica2

3. Create Volumes Command

Creates and syncs shared volumes across nodes.

seto --stack <stack-name> --driver <driver-uri> create-volumes --replica <replica-connection-strings> [--force]

  • --replica: Required. Specifies the nodes where volumes will be created.
  • --force: Optional. Forces volume data synchronization.

Example:

seto --stack my-stack --driver nfs://user:pass@host create-volumes --replica user:pass@replica1 user:pass@replica2 --force

4. Mount Volumes Command

Mounts shared volumes on specified nodes.

seto --stack <stack-name> --driver <driver-uri> mount-volumes --replica <replica-connection-strings>

  • --replica: Required. Specifies the nodes where volumes will be mounted.

Example:

seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1 user:pass@replica2

5. Unmount Volumes Command

Unmounts shared volumes from specified nodes.

seto --stack <stack-name> --driver <driver-uri> unmount-volumes --replica <replica-connection-strings>

  • --replica: Required. Specifies the nodes where volumes will be unmounted.

Example:

seto --stack my-stack --driver nfs://user:pass@host unmount-volumes --replica user:pass@replica1 user:pass@replica2

Example Workflow

  1. Setup Manager and Replica Nodes

seto --stack my-stack --driver nfs://user:pass@host setup --replica user:pass@replica1 user:pass@replica2

  2. Create Volumes

seto --stack my-stack --driver nfs://user:pass@host create-volumes --replica user:pass@replica1 user:pass@replica2 --force

  3. Mount Volumes

seto --stack my-stack --driver nfs://user:pass@host mount-volumes --replica user:pass@replica1 user:pass@replica2

  4. Unmount Volumes

seto --stack my-stack --driver nfs://user:pass@host unmount-volumes --replica user:pass@replica1 user:pass@replica2

  5. Deploy Stack

seto --stack my-stack --manager nfs://user@manager-host deploy
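The workflow above can be sketched as a small shell script. The stack name, driver URI, and replica connection strings are placeholders, and the run helper only echoes each command (a dry run), so the sequence can be reviewed before executing it for real:

```shell
#!/usr/bin/env sh
# Dry-run sketch of the Ṣeto workflow above.
# STACK, DRIVER, and REPLICAS are placeholder values -- substitute your own.
STACK=my-stack
DRIVER="nfs://user:pass@host"
REPLICAS="user:pass@replica1 user:pass@replica2"

# Print each command instead of executing it; drop the echo to run for real.
run() { echo "+ $*"; }

run seto --stack "$STACK" --driver "$DRIVER" setup --replica $REPLICAS
run seto --stack "$STACK" --driver "$DRIVER" create-volumes --replica $REPLICAS --force
run seto --stack "$STACK" --driver "$DRIVER" mount-volumes --replica $REPLICAS
```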

Error Handling

The tool includes basic error handling to catch and report errors related to argument parsing and execution. If an error occurs, a message will be printed, and the tool will exit with a non-zero status code.
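This pattern can be sketched with argparse; the flags mirror the global options above, but the function name and dispatch are illustrative, not Ṣeto's actual code (argparse itself handles parse errors by printing a usage message and exiting with a non-zero status):

```python
import argparse
import sys

def main(argv=None) -> int:
    """Hypothetical entry-point sketch: report errors, return non-zero on failure."""
    parser = argparse.ArgumentParser(prog="seto")
    parser.add_argument("--stack", required=True, help="stack name")
    parser.add_argument("--driver", required=True, help="driver URI")
    # argparse exits non-zero on its own if parsing fails.
    args = parser.parse_args(argv)
    try:
        # ... dispatch to the selected subcommand using `args` here ...
        return 0
    except Exception as exc:
        # Report the error and signal failure to the caller.
        print(f"error: {exc}", file=sys.stderr)
        return 1

print(main(["--stack", "my-stack", "--driver", "nfs://user:pass@host"]))  # → 0
```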

Environment Setup

  1. See the cloud-init.yaml file for the prerequisites to install.

  2. Install Devbox

  3. Install direnv with your OS package manager

  4. Hook direnv into your shell

  5. Load environment

    At the top-level of your project run:

    direnv allow
    

    The next time you launch your terminal and enter the top level of your project, direnv will check for changes and automatically load the Devbox environment.

  6. Install dependencies

    make install
    
  7. Start environment

    make shell
    

    This starts a preconfigured tmux session. Please see the .tmuxinator.yml file.

Makefile Targets

Please see the Makefile for the full list of targets.

Docker Swarm Setup

To set up Docker Swarm, you'll first need to ensure you have Docker installed on your machines. Then, you can initialize Docker Swarm on one of your machines to act as the manager node, and join other machines as worker nodes. Below are the general steps to set up Docker Swarm:

  1. Install Docker

    Make sure Docker is installed on all machines that will participate in the Swarm cluster. You can follow the official Docker installation guide for your operating system.

  2. Choose Manager Node

    Select one of your machines to act as the manager node. This machine will be responsible for managing the Swarm cluster.

  3. Initialize Swarm

    SSH into the chosen manager node and run the following command to initialize Docker Swarm:

    docker swarm init --advertise-addr <MANAGER_IP>
    

    Replace <MANAGER_IP> with the IP address of the manager node. This command initializes a new Docker Swarm cluster with the manager node.

  4. Join Worker Nodes

    After initializing the Swarm, Docker will output a command to join other nodes to the cluster as worker nodes. Run this command on each machine you want to join as a worker node.

    docker swarm join --token <TOKEN> <MANAGER_IP>:<PORT>
    

    Replace <TOKEN> with the token generated by the docker swarm init command and <MANAGER_IP>:<PORT> with the IP address and port of the manager node.

  5. Verify Swarm Status

    Once all nodes have joined the Swarm, you can verify the status of the Swarm by running the following command on the manager node:

    docker node ls
    

    This command will list all nodes in the Swarm along with their status.

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at LICENSE.
