
Hecaton distributed compute framework

Project description

Hecaton

Hecaton is a solution for self-hosted GPU access from machines where your code isn't deployed. It lets you distribute compute jobs across multiple workers (GPUs).


Installation

Hecaton can be installed as a Python package.

pip install hecaton

Note: For GPU workers, you must run the installation and the worker process as root, because the worker uses the Docker Python client, which requires root access to interact with the Docker daemon.

Quick Start

1. Server Setup

Start the server. On the first run, you should provide an admin password via environment variable to create the initial admin user.

# Create initial admin user (runs on port 8181 by default)
HECATON_ADMIN_PASS=mysecurepassword hecaton-server --port 8181

Once the admin is created, you can simply run:

hecaton-server --port 8181

SSL/TLS (Recommended):

hecaton-server --port 8181 --ssl-keyfile key.pem --ssl-certfile cert.pem


2. Client Setup

Configure the client to point to your server using the CLI:

# Register the server locally
hecaton server new --ip <server_ip>:8181 --name my-server

# Log in (get the token)
hecaton server login my-server --username admin --password mysecurepassword


3. Worker Setup

To add a GPU worker to the cluster:

Requirements:

  • Docker must be installed and running.
  • Must run as root.

# Install as root
sudo pip install hecaton

# Connect the worker (as root)
sudo hecaton-gpu install <server_ip>:8181

# You will be prompted for a username and password (e.g., admin credentials or a dedicated worker user)

The worker will automatically register with the server and start processing jobs.

4. Job Submission

Submit a job with a payload file and a target image:

hecaton job new <path_to_payload_file> <docker_image_name>
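The payload file is what your handler eventually receives as its input. As an illustration, assuming a JSON payload (the actual schema is whatever your handler expects; the keys below are made up):

```python
import json

# Hypothetical payload; the schema is defined by your handler,
# not by Hecaton itself.
payload = {"prompt": "hello", "steps": 20}

with open("payload.json", "w") as f:
    json.dump(payload, f)
```

You would then submit it with `hecaton job new payload.json my-custom-image:latest`.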

5. Programmatic Job Submission

You can also submit and monitor jobs using the Python client library:

import time
from hecaton.client.managers.api import HecatonServer

server_ip = "http://localhost:8181"
token = "YOUR_JWT_TOKEN"

# Submit the job (payload.json is read and sent to the worker)
job_id = HecatonServer.new_job(
    ip=server_ip, secret=token, 
    file_path="payload.json", image="my-custom-image:latest"
)
print(f"Submitted Job ID: {job_id}")

# Poll for completion
while True:
    job_info = HecatonServer.get_job(server_ip, token, job_id)
    if job_info[1] in ("COMPLETED", "FAILED"):
        print("Result:", job_info[2])
        break
    time.sleep(2)
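For longer jobs, the polling loop above can be wrapped in a small helper with a timeout. This is a sketch, not part of Hecaton's API; `wait_for_job` and the `(id, status, result)` tuple shape simply mirror the example above:

```python
import time

def wait_for_job(fetch, job_id, timeout=300.0, interval=2.0):
    """Poll `fetch(job_id)` until it reports a terminal status.

    `fetch` is any callable returning (job_id, status, result) tuples,
    e.g. lambda jid: HecatonServer.get_job(server_ip, token, jid).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        info = fetch(job_id)
        if info[1] in ("COMPLETED", "FAILED"):
            return info
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

A `TimeoutError` then distinguishes a stuck job from one that genuinely failed.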

6. Writing a Payload and Custom Docker Image

Hecaton uses a long-running container model to process jobs. When you submit a job to a specific Docker image, Hecaton ensures a container of that image is running and passes the job payload to it.

To write the Python code for your custom Docker image, use Hecaton's built-in serverless module.

Example handler.py (Entrypoint for your Docker image):

from hecaton.serverless import start, ServerLessInput

def my_handler(data: ServerLessInput):
    # 'data.input' contains the payload you submitted via the CLI or API (e.g., payload.json)
    job_payload = data.input
    
    print(f"Processing job with data: {job_payload}")
    
    # Perform your GPU compute or other tasks here
    result = {"status": "success", "processed_data": job_payload}
    
    # Return the result dict; it will be uploaded back to the server
    return result

if __name__ == "__main__":
    # 'start()' keeps the container running and polls the worker for jobs
    start(my_handler)
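You can exercise a handler locally before baking it into an image by faking the input wrapper. `FakeInput` below is a stand-in that only mimics the `.input` attribute used above; it is not part of Hecaton:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class FakeInput:
    # Stand-in for ServerLessInput: only the .input attribute is mimicked.
    input: Any

def my_handler(data):
    job_payload = data.input
    return {"status": "success", "processed_data": job_payload}

result = my_handler(FakeInput(input={"x": 1}))
print(result)  # {'status': 'success', 'processed_data': {'x': 1}}
```

This keeps the build/push/register cycle out of the inner development loop.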

Dockerfile Example:

FROM python:3.9-slim
RUN pip install hecaton
COPY handler.py /app/handler.py
CMD ["python", "/app/handler.py"]

Once this image is built, pushed, and registered on the Hecaton server, you can submit payloads to it.

Command Reference

Client CLI (hecaton)

The hecaton command is your main interface for interacting with the server.

Server Management (hecaton server)

Manage your local server connections.

  • new: Register a new server.
    • --ip: Server IP/URL (e.g., localhost:8181).
    • --name: Local alias for the server.
  • login: Authenticate with a server.
    • server_name: The alias of the server.
    • --username, --password: Credentials.
  • list: List registered servers.
  • connect: Switch the active server context.
  • disconnect: Unset the active server.
  • show: View details of a server.
  • update: Update server configuration (e.g., IP).

User Management (hecaton user)

Manage users (Requires Admin privileges).

  • create: Create a new user.
    • --username: New user's name.
    • --password: New user's password.
    • --role: Role (default: user).

Worker Management (hecaton worker)

Monitor connected workers.

  • list: List all workers, their status, and last active time.

Image Management (hecaton image)

Manage Docker images available for jobs.

  • new: Register a new Docker image from Docker Hub.
    • image_name: e.g., python:3.9-slim.
  • list: List registered images.
  • update: Update image metadata.
    • --fp: Path to a file containing environment variables (KEY=VALUE per line).
    • --desc: Update description.
  • show: Show image details including environment variables.
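The environment file passed to --fp is plain text with one KEY=VALUE pair per line. A minimal parser for that shape (an illustration of the format, not Hecaton's own code; skipping blank lines and # comments is an assumption):

```python
def parse_env_file(text: str) -> dict:
    """Parse KEY=VALUE lines into a dict (illustrative only)."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # assumed: blank lines and comments are ignored
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

print(parse_env_file("CUDA_VISIBLE_DEVICES=0\nHF_TOKEN=abc123"))
# {'CUDA_VISIBLE_DEVICES': '0', 'HF_TOKEN': 'abc123'}
```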

Job Management (hecaton job)

Submit and track jobs.

  • new: Submit a new job.
    • file_path: Path to the script/payload to run.
    • image_name: The Docker image to use.
  • list: List recent jobs and their statuses.
  • show: View full details of a specific job (ID, status, payload, index).

GPU Worker CLI (hecaton-gpu)

The hecaton-gpu command runs on the worker machine to manage the background service. Must be run as root.

  • install <server_ip>: Install the worker as a systemd service.
    • Prompts for username/password authentication.
  • uninstall: Remove the systemd service.
  • start: Start the worker service.
  • stop: Stop the worker service.
  • restart: Restart the worker service.
  • status: Check if the service is running.
  • logs: View service logs.
    • -f: Follow logs.
    • -n: Number of lines.
  • run <server_ip>: Run the worker in the foreground (useful for debugging).

Project details


Download files

Download the file for your platform.

Source Distribution

hecaton-0.6.10.tar.gz (34.8 kB)

Uploaded Source

Built Distribution


hecaton-0.6.10-py3-none-any.whl (37.6 kB)

Uploaded Python 3

File details

Details for the file hecaton-0.6.10.tar.gz.

File metadata

  • Download URL: hecaton-0.6.10.tar.gz
  • Size: 34.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hecaton-0.6.10.tar.gz
Algorithm Hash digest
SHA256 e9d0816be458da3745b665715140d243a2bdf1226cb3267846f7bdd6474a976f
MD5 1c79133e15d49a26835846dcfc96d444
BLAKE2b-256 f6c9aef1b1bb5e3fc63dbc7568652af0b97a572b56237ae49ead28a069fea88d


Provenance

The following attestation bundles were made for hecaton-0.6.10.tar.gz:

Publisher: publish.yml on Just1truc/hecaton

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file hecaton-0.6.10-py3-none-any.whl.

File metadata

  • Download URL: hecaton-0.6.10-py3-none-any.whl
  • Size: 37.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hecaton-0.6.10-py3-none-any.whl
Algorithm Hash digest
SHA256 1d141085a20a9d64025b13bf5c0fd28a9ddacaaef0cdc1d23a03f1ce92ca54c7
MD5 8bdc02231f13e606710ad0b19afb4d20
BLAKE2b-256 6b4605f805c912a3bbe6dbce3b919785607a50330228adb97086f864bce1c1cf


Provenance

The following attestation bundles were made for hecaton-0.6.10-py3-none-any.whl:

Publisher: publish.yml on Just1truc/hecaton

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
