
Orchestrate distributed swarms of AI agents that collaboratively solve complex tasks.

This project has been archived by its maintainers. No new releases are expected.

Project description

Hivemind


Distributed AI Swarm Runtime

PyPI version · License: GPL v3 · Python 3.12+

Hivemind is a distributed AI swarm runtime for coordinating large numbers of AI agents across complex tasks. Orchestrate multi-agent systems with a swarm execution model: tasks are decomposed into subtasks, executed in parallel, and coordinated through a scheduler and dependency graph.

Install: PyPI package is hivemind-ai; CLI command is hivemind.


Features

  • Planner → Scheduler → Executor → Agents — DAG-based task execution with configurable parallelism
  • Strategy-based planning — Auto-selected strategies (research, code analysis, data science, document, experiment) output DAGs; fallback to LLM planning
  • 120+ tools — Research, coding, data science, documents, experiments, memory, filesystem; smart tool selection (top-k by similarity to task)
  • TOML configuration — hivemind.toml / workflow.hivemind.toml with Pydantic validation; precedence env > project > user > defaults
  • Memory & knowledge graph — Episodic, semantic, research, artifact memory; summarization, namespaces, scoring; entity search and relationship traversal
  • Map-reduce runtime — swarm.map_reduce(dataset, map_fn, reduce_fn) using the worker pool
  • Workflows — Define steps in workflow.hivemind.toml; run with hivemind workflow <name>
  • Plugin ecosystem — Discover tools via entry_points (hivemind.plugins)
  • Provider routing — OpenAI, Anthropic, Azure, Gemini, GitHub Models (Copilot) (model name or provider:model spec)
  • Automatic model routing — Set planner = "auto" and worker = "auto" in config for cost/latency/quality-aware selection
  • EventLog, replay, telemetry — Structured events for debugging and metrics
  • CLI & TUI — hivemind init, hivemind doctor, hivemind run, hivemind research, hivemind analyze, hivemind memory, hivemind query, hivemind workflow, hivemind tui with dashboard (tasks, swarm graph, memory, activity feed, knowledge graph, logs)
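The map-reduce runtime above can be sketched as a plain-Python analogue. This is a minimal illustration of the pattern only, assuming a thread-pool worker model; the `map_reduce` function below is hypothetical and is not Hivemind's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def map_reduce(dataset, map_fn, reduce_fn, workers=4):
    """Illustrative map-reduce over a worker pool (not the Hivemind API)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        mapped = list(pool.map(map_fn, dataset))  # fan the map step out to workers
    return reduce(reduce_fn, mapped)              # fold the partial results together

# Sum of squares over a tiny dataset: 1 + 4 + 9 + 16 = 30
total = map_reduce([1, 2, 3, 4], lambda x: x * x, lambda a, b: a + b)
print(total)
```

In the real runtime the map step would be a per-item agent task and the reduce step a final aggregation task, but the fan-out/fold shape is the same.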

Architecture

    Planner
       ↓
    Scheduler
       ↓
    Executor
       ↓
    Agents  →  Tools  →  Memory  →  Knowledge Graph
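The Planner → Scheduler → Executor pipeline runs subtasks in dependency order. A minimal sketch of that scheduling step, using the standard library's topological sorter (names here are illustrative, not Hivemind's internal API):

```python
from graphlib import TopologicalSorter

def run_dag(tasks, deps):
    """Run task callables in dependency order; deps maps task -> set of prerequisites."""
    finished = []
    for name in TopologicalSorter(deps).static_order():
        # Each task runs only after all of its prerequisites; independent
        # tasks at the same depth could be dispatched to workers in parallel.
        finished.append(tasks[name]())
    return finished

tasks = {n: (lambda n=n: n) for n in ("plan", "a", "b", "merge")}
deps = {"a": {"plan"}, "b": {"plan"}, "merge": {"a", "b"}}
result = run_dag(tasks, deps)
print(result)  # "plan" runs first, "merge" runs last
```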

Quickstart

Install (Python 3.12+):

pip install hivemind-ai
# or: uv add hivemind-ai

Quick setup (new project):

export GITHUB_TOKEN=...   # or OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.
hivemind init
hivemind run "analyze this repository"

Run a task:

hivemind run "Summarize swarm intelligence in one paragraph."

Use in code (with config file):

from hivemind import Swarm

swarm = Swarm(config="hivemind.toml")
results = swarm.run("Analyze diffusion models and write a one-page summary.")

Or with explicit parameters:

from hivemind import Swarm

swarm = Swarm(worker_count=4, worker_model="gpt-4o-mini", planner_model="gpt-4o-mini", use_tools=True)
results = swarm.run("Analyze diffusion models and write a one-page summary.")

Set API keys via environment or config (see Configuration below).


CLI usage

Command — Description
hivemind init — Set up a new project (creates hivemind.toml, example workflow, dataset folder)
hivemind doctor — Verify environment (GITHUB_TOKEN, OpenAI keys, config file, tool registry)
hivemind run "task" — Run the swarm on the given task
hivemind tui — Launch the terminal UI (prompt + output + dashboard)
hivemind research papers/ — Literature review on a directory of papers
hivemind analyze repo/ — Analyze repository architecture
hivemind memory [--limit N] — List memory entries
hivemind query "diffusion models" — Query the knowledge graph (entity search, relationships)
hivemind workflow <name> — Run a workflow by name (from workflow.hivemind.toml)

TUI usage

hivemind tui
  • Prompt — Type a task and press Enter or r to run.
  • Output — Response and step status (e.g. “Planning…”, “Step 2 of 5…”).
  • Dashboard (d) — Tasks, swarm graph, memory, activity feed, knowledge graph viewer, logs.
  • Keys: r Run, d Dashboard, Esc Unfocus, o Output, q Quit.

Examples

Workflow — Command
Literature review — hivemind research papers/ or uv run python examples/research/literature_review.py [dir]
Repository analysis — hivemind analyze . or uv run python examples/coding/analyze_repository.py [path]
Dataset analysis — uv run python examples/data_science/dataset_analysis.py [path-to.csv]
Document intelligence — uv run python examples/documents/analyze_documents.py [dir]
Parameter sweep — uv run python examples/experiments/parameter_sweep.py --params '{"lr":[0.01,0.1]}'

Outputs go to examples/output/. Run from project root when using script paths.


Demo GIF

To create a demo GIF showing swarm execution and task progress:

  1. Start the TUI: hivemind tui
  2. Use a screen recorder (e.g. asciinema, LICEcap, or terminal GIF tools) to record:
    • Typing a task in the prompt (e.g. “Summarize swarm intelligence in one paragraph”)
    • Pressing Enter to run
    • The spinner and step status (Planning…, Executing step 1 of N…)
    • The final response appearing
    • Optionally pressing d to open the Dashboard (tasks, swarm graph, memory, logs)
  3. Export the recording as a GIF and add it to the README or docs (e.g. ![Demo](docs/demo.gif)).

Example with asciinema:

asciinema rec demo.cast
# run: hivemind tui, then run a task and optionally open dashboard
# exit TUI (q), then Ctrl-D to stop rec
# convert: agg demo.cast demo.gif  # asciinema's GIF generator, or play the cast back and capture with another tool

Configuration

Config order: env > project config > user ~/.config/hivemind/config.toml > defaults.

Config locations: ./hivemind.toml, ./workflow.hivemind.toml, ~/.config/hivemind/config.toml, or legacy .hivemind/config.toml.
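The precedence rule above can be sketched as a layered dict merge, later (higher-priority) layers overriding earlier ones. This is an illustration of the rule only, not Hivemind's actual loader, and the keys shown are examples:

```python
def resolve_config(defaults, user, project, env):
    """Merge config layers; later sources win: env > project > user > defaults."""
    merged = {}
    for layer in (defaults, user, project, env):  # lowest to highest priority
        merged.update(layer)
    return merged

cfg = resolve_config(
    defaults={"workers": 4, "worker_model": "gpt-4o-mini"},
    user={"workers": 6},                         # ~/.config/hivemind/config.toml
    project={"worker_model": "github:gpt-4o"},   # ./hivemind.toml
    env={"worker_model": "azure:gpt-4o"},        # e.g. HIVEMIND_WORKER_MODEL
)
print(cfg)  # env wins for worker_model; the user-level workers=6 survives
```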

GitHub Models (Copilot): Use provider:model and set GITHUB_TOKEN. Example: github:gpt-4o, github:claude-3.5-sonnet, github:phi-3.
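To make the provider:model spec format concrete, a spec string can be split like this. The helper and its default-provider fallback are hypothetical, shown only to illustrate the format:

```python
def parse_model_spec(spec, default_provider="openai"):
    """Split 'provider:model' into parts; a bare model name gets a default provider."""
    provider, sep, model = spec.partition(":")
    if not sep:                        # no colon: spec is just a model name
        return default_provider, spec
    return provider, model

print(parse_model_spec("github:gpt-4o"))   # ('github', 'gpt-4o')
print(parse_model_spec("gpt-4o-mini"))     # falls back to the default provider
```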

Automatic model routing: Set planner = "auto" and worker = "auto" in [models]; the router picks a model by task type (planning → high quality, fast → low cost).
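The "auto" rule might look like the following sketch. The routing table and model names are assumptions for illustration, not Hivemind's actual routing logic:

```python
# Hypothetical routing table: planning favors quality, fast tasks favor cost/latency.
ROUTES = {
    "planning": "gpt-4o",       # high-quality model for task decomposition
    "fast": "gpt-4o-mini",      # cheap, low-latency model for simple steps
}

def route_model(task_type, configured="auto"):
    """Return the configured model, or pick one by task type when set to 'auto'."""
    if configured != "auto":
        return configured       # an explicit model spec always wins
    return ROUTES.get(task_type, ROUTES["fast"])

print(route_model("planning"))              # routed by task type
print(route_model("fast", "github:phi-3"))  # explicit config bypasses routing
```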

Example hivemind.toml (v1 format):

[swarm]
workers = 6
adaptive_planning = true
max_iterations = 10

[models]
planner = "auto"
worker = "auto"
# Or explicit: planner = "azure:gpt-4o", worker = "github:gpt-4o"

[memory]
enabled = true
store_results = true
top_k = 5

[tools]
enabled = ["research", "coding", "documents"]
top_k = 12

[telemetry]
enabled = true
save_events = true

[providers.azure]
endpoint = ""
deployment = ""

Legacy format (still supported): use [default] with worker_model, planner_model, events_dir, data_dir.

Env overrides: HIVEMIND_WORKER_MODEL, HIVEMIND_PLANNER_MODEL, HIVEMIND_EVENTS_DIR, HIVEMIND_DATA_DIR, plus provider keys (OPENAI_API_KEY, AZURE_OPENAI_*, etc.). See docs/providers.md, docs/configuration.md, and docs/development.md.


Documentation

Doc — Description
Introduction — What Hivemind is, the problem it solves, core concepts
Architecture — Planner, Scheduler, Executor, Agents, Tools, Memory, strategies, config, map-reduce
Configuration — TOML config (v1), schema, locations, env overrides
Swarm runtime — Task lifecycle, flow, code snippets, map-reduce
Tools — Tool architecture, registry, runner, smart tool selection, plugins
Memory — Memory types, store, retrieval, summarization, namespaces, scoring
Providers — Provider routing, model spec, Azure
CLI — All CLI commands (run, tui, research, analyze, memory, query, workflow)
TUI — Layout, panels (activity feed, knowledge graph), keyboard shortcuts
Examples — Example workflows and commands
Development — Project structure, adding tools/plugins/workflows, setup
Contributing — Setup, testing, code style, PR guidelines
FAQ — Common questions

Contributing

Contributions are welcome. See CONTRIBUTING.md for setup, testing, and PR guidelines.


License

Hivemind is licensed under GPL-3.0-or-later. See LICENSE for the full text.

Project details


Download files

Download the file for your platform.

Source Distribution

hivemind_ai-1.1.0.tar.gz (2.5 MB)

Uploaded Source

Built Distribution


hivemind_ai-1.1.0-py3-none-any.whl (265.0 kB)

Uploaded Python 3

File details

Details for the file hivemind_ai-1.1.0.tar.gz.

File metadata

  • Download URL: hivemind_ai-1.1.0.tar.gz
  • Size: 2.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hivemind_ai-1.1.0.tar.gz
Algorithm — Hash digest
SHA256 — 3ae2745cec3add95cf633ed82b3234a9b9408d60cb0194d6d97782eaca6c741c
MD5 — 3cc7309d995335fd289e60bb674f1a5c
BLAKE2b-256 — 2a9379a061a68978d151d28772e6cde32329e61a161210578024374c5fcbae08


Provenance

The following attestation bundles were made for hivemind_ai-1.1.0.tar.gz:

Publisher: pypi-publish.yml on rithulkamesh/hivemind

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file hivemind_ai-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: hivemind_ai-1.1.0-py3-none-any.whl
  • Size: 265.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hivemind_ai-1.1.0-py3-none-any.whl
Algorithm — Hash digest
SHA256 — 01fc0dd35f115093d3a347eb19d1311c2e2171180ea17f7b82560438d41b30dc
MD5 — 59ce42c979b1f41cafa4893c81ff066c
BLAKE2b-256 — aba16dc6f4f209b95296501b0c099482e73a2af28a85e4237aa232be6589e23c


Provenance

The following attestation bundles were made for hivemind_ai-1.1.0-py3-none-any.whl:

Publisher: pypi-publish.yml on rithulkamesh/hivemind

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
