
Momahub - Hub-and-spoke distributed AI inference network

Users submit requests to a Hub; the Hub dispatches them to Agent nodes running Ollama.

Requirements

  • Python >= 3.11
  • Ollama on every agent node
  • GPU recommended (CPU-only works but will be slow)

Quick start

# Install Ollama on all GPU nodes
curl -fsSL https://ollama.com/install.sh | sh
# Pull models on each node (the script lives in the repo cloned below)
bash scripts/pull_ollama.sh

# Create a conda environment
conda create -n moma python=3.11
conda activate moma

# Clone the repository
git clone https://github.com/digital-duck/momahub.py.git
cd momahub.py

# Install from source
pip install -e .

# (Re)Start the hub on a GPU node
moma hub up

# Get the hub node's IP address; agent nodes use it to join the hub
hostname -I
# prints <hub-ip-address>

# (Re)Start an agent on every GPU node, including the hub node
moma join http://<hub-ip-address>:8000

# Submit a task from any GPU node (use any model pulled earlier with Ollama)
moma submit "Explain distributed inference in two sentences" --model <model-name>

# Monitor Momahub status
moma status
moma agents
moma tasks
moma rewards

# Get Help
moma --help
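
For scripted use, the moma CLI can also be driven from Python. The sketch below only wraps the documented commands; the model tag shown is an example, substitute any model you pulled with Ollama.

import subprocess

def moma_submit(prompt: str, model: str) -> str:
    """Submit a task via the `moma` CLI and return its stdout."""
    result = subprocess.run(
        ["moma", "submit", prompt, "--model", model],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # "llama3.1:8b" is an example tag; use any model pulled earlier
    print(moma_submit("Explain distributed inference in two sentences", "llama3.1:8b"))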

Features

Momahub provides a robust suite of tools for distributed AI inference. For a detailed list of current capabilities and the development roadmap (including a Go re-implementation and cryptographic security), see FEATURES.md.

  • Hub-and-spoke dispatch — automatic agent selection by compute tier, VRAM, and model availability (sketched after this list)
  • Multi-hub clustering — peer hubs share capabilities and forward tasks across the network
  • Compute tiers — agents ranked PLATINUM / GOLD / SILVER / BRONZE by measured tokens-per-second
  • Reward ledger — tracks operator contributions (tasks completed, tokens generated, credits earned)
  • SPL integration — run structured prompt programs on the grid with ON GRID / WITH VRAM syntax
  • Streamlit dashboard — real-time overview, grid monitor, rewards, SPL runner, Text2SPL, and Paper Digest
  • CLI (moma) — full grid management from the terminal
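
For intuition, the dispatch rule in the first bullet can be read as: keep agents that have the requested model and enough free VRAM, then prefer the highest compute tier. The sketch below is illustrative only; the field names, tier ordering, and tie-breaking are assumptions, not the hub dispatcher's actual logic.

from dataclasses import dataclass

TIER_RANK = {"PLATINUM": 3, "GOLD": 2, "SILVER": 1, "BRONZE": 0}

@dataclass
class Agent:
    name: str
    tier: str                # PLATINUM / GOLD / SILVER / BRONZE
    free_vram_gb: float
    models: set[str]         # model tags available via Ollama on this agent

def select_agent(agents: list[Agent], model: str, vram_needed_gb: float) -> Agent | None:
    # Filter by model availability and free VRAM, then rank by tier,
    # breaking ties in favor of the agent with the most free VRAM.
    candidates = [a for a in agents
                  if model in a.models and a.free_vram_gb >= vram_needed_gb]
    if not candidates:
        return None  # the hub could queue the task or forward it to a peer hub
    return max(candidates, key=lambda a: (TIER_RANK[a.tier], a.free_vram_gb))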

Proven Performance

Momahub has been validated in real-world LAN environments:

  • 3-GPU Milestone (2026-03-08): Successfully deployed across 3 GPU nodes using two NVIDIA GTX 1080 Ti (11 GB VRAM) and one NVIDIA GTX 1050 Ti (4 GB VRAM). Achieved a 100% completion rate on burst stress tests with automated agent-side queueing and hub-level load balancing.
  • Tiers: agents measured between 50 and 100 tokens per second (TPS) on benchmarked models (see the sketch below).
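
As an illustration of how a measured benchmark might map to a tier, consider the sketch below; the tokens-per-second thresholds are invented for this example and are not Momahub's actual cutoffs.

def compute_tier(tokens_per_second: float) -> str:
    """Map a benchmarked throughput to a compute tier (hypothetical thresholds)."""
    if tokens_per_second >= 100:
        return "PLATINUM"
    if tokens_per_second >= 75:
        return "GOLD"
    if tokens_per_second >= 50:
        return "SILVER"
    return "BRONZE"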

Codebase layout

igrid/
  schema/      Pydantic models (enums, handshake, pulse, task, reward, cluster)
  hub/         Hub FastAPI app (db, state, dispatcher, cluster, monitor)
  agent/       Agent FastAPI app (hardware detection, LLM runner, telemetry)
  spl/         SPL adapter and runner
  cli/         moma CLI
  ui/          Streamlit app
docs/          User-Guide, SPL arXiv paper
cookbook/      Ready-to-run recipes
scripts/       Utility scripts
tests/         Unit and integration tests
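
The schema/ package defines the wire formats as Pydantic models. A task model might look roughly like the sketch below; the field names, enum values, and defaults here are assumptions for illustration, while the real definitions live in igrid/schema/.

from enum import Enum
from pydantic import BaseModel

class TaskStatus(str, Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"
    FAILED = "failed"

class Task(BaseModel):
    task_id: str
    prompt: str
    model: str                          # Ollama model tag, e.g. "llama3.1:8b"
    status: TaskStatus = TaskStatus.PENDING
    assigned_agent: str | None = None   # filled in by the hub dispatcher
    tokens_generated: int = 0           # feeds the reward ledger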

Documentation

See docs/ for the User Guide and the SPL arXiv paper.

Research

Momahub is the Python implementation accompanying this upcoming arXiv paper (in preparation):

Momahub: A Prompt Compiler and Decentralized LLM Inference Network. Wen G. Gong (2026).

The paper introduces two key ideas:

  1. The Prompt Compiler — reframing Text2SPL as a full compiler pipeline (front-end NL→SPL, mid-end CTE DAG optimisation, back-end model/VRAM mapping), with SPL as the intermediate representation between human intent and GPU execution. The compiler is self-hosting: it runs on the Momahub it compiles for. A sketch of the three stages follows this list.

  2. The Distributed Inference Runtime — Momahub as the runtime layer that abstracts distributed consumer GPUs into a programmable compute surface, analogous to the JVM or the Linux kernel for traditional computing.
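
The three compiler stages can be pictured as a simple function pipeline. Everything in this sketch (types, function bodies, the example model tag) is a placeholder to convey the shape of the pipeline, not the paper's actual implementation.

from dataclasses import dataclass

@dataclass
class SPLProgram:
    source: str              # SPL: the IR between human intent and GPU execution

@dataclass
class ExecutionPlan:
    model: str
    vram_gb: float
    program: SPLProgram

def front_end(natural_language: str) -> SPLProgram:
    """Text2SPL: translate a natural-language request into SPL."""
    return SPLProgram(source=f"-- compiled from: {natural_language}")

def mid_end(program: SPLProgram) -> SPLProgram:
    """Optimise the program's CTE DAG (e.g. deduplicate shared subexpressions)."""
    return program  # placeholder: a real pass would rewrite the DAG

def back_end(program: SPLProgram) -> ExecutionPlan:
    """Map the optimised program onto a concrete model and VRAM budget."""
    return ExecutionPlan(model="llama3.1:8b", vram_gb=8.0, program=program)

def compile_prompt(natural_language: str) -> ExecutionPlan:
    return back_end(mid_end(front_end(natural_language)))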

Related work

  • SPL - Structured Prompt Language: arXiv:2602.21257

    Wen G. Gong. (2026). Structured Prompt Language: Declarative Context Management for LLMs. arXiv preprint arXiv:2602.21257.

    @article{gong2026spl,
      title={Structured Prompt Language: Declarative Context Management for LLMs},
      author={Gong, Wen G.},
      journal={arXiv preprint arXiv:2602.21257},
      year={2026}
    }
    
  • Geodesic Reranking: arXiv:2602.15860

    Wen G. Gong. (2026). Reranker Optimization via Geodesic Distances on k-NN Manifolds. arXiv preprint arXiv:2602.15860.

    @article{gong2026geodesic,
      title={Reranker Optimization via Geodesic Distances on k-NN Manifolds},
      author={Gong, Wen G.},
      journal={arXiv preprint arXiv:2602.15860},
      year={2026}
    }
    

Contributing

We welcome contributions! Please see CONTRIBUTING.md for details on our development workflow and coding standards.

License

Apache 2.0
