Distributed AI framework – grow your own cluster from whatever hardware you've got
mycoSwarm
Distributed AI for everyone. Turn forgotten hardware into a thinking network.
mycoSwarm connects your machines – old laptops, mini PCs, Raspberry Pis, GPU workstations – into a single AI swarm. No cloud. No API keys. No data leaves your network.
```bash
curl -fsSL https://raw.githubusercontent.com/msb-msb/mycoSwarm/main/scripts/install.sh | bash
mycoswarm chat
```
That's it. Two commands. You're running local AI.
Dashboard
Live swarm monitoring – 5 nodes, 86.6 GB RAM, all from rescued hardware under $1,100.
What It Does
One machine? Chat with local models instantly – no daemon, no config.
Multiple machines? They find each other automatically via mDNS, share capabilities, and route tasks to the right hardware. A $50 mini PC can chat with a 27B model running on a GPU across the room.
The weakest machine in the swarm gets access to the strongest model.
Real Example: 5-Node Swarm
| Node | Hardware | Cost | Role |
|---|---|---|---|
| Miu | RTX 3090, 64GB RAM | ~$850 (used) | GPU inference – runs 27B models |
| naru | Lenovo M710Q, 8GB RAM | $50 | Web search, file processing |
| uncho | Lenovo M710Q, 8GB RAM | $50 | Web search, coordination |
| boa | Lenovo M710Q, 8GB RAM | $50 | Web search, code execution |
| raspberrypi | Raspberry Pi 2, 1GB RAM | $35 | Search, lightweight tasks |
Total: ~$1,035. Zero monthly fees.
Features
- **Chat with memory** – Persistent facts and session history across conversations. Your AI remembers what you tell it.
- **Research** – Ask a question, and the swarm plans multiple searches, distributes them across CPU workers in parallel, and synthesizes a cited answer on the GPU. Faster than any single machine.
- **Document library (RAG)** – Drop files into `~/mycoswarm-docs/`. The swarm indexes them and answers questions about your documents with citations.
- **Agentic tool routing** – The model automatically decides when it needs web search or document lookup, shows you what it's doing, and uses the results. No manual tool selection.
- **Honest AI** – When it doesn't know something, it says so. No hallucinated weather forecasts or fabricated facts.
- **Identity** – Persistent self-model with first-run naming. Your AI remembers its own name across sessions.
- **Self-awareness (8 C's)** – Real-time vital signs after every response: Calm, Clarity, Curiosity, Compassion, Courage, Creativity, Connectedness, Confidence. Derived from pipeline signals, not simulated.
- **Wu Wei Timing Gate** – Contextual response calibration. Late night → shorter, warmer. Exploration mode → deeper, expansive. No LLM call, pure heuristics.
- **Procedural memory** – The swarm learns from experience. Wisdom procedures surface automatically when similar problems recur.
- **Intent classification** – Pre-inference routing decides tool, mode, and scope before the model runs.
- **Plugin system** – Drop a folder into `~/.config/mycoswarm/plugins/` and your node advertises a new capability. No core code changes.
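To illustrate what "pure heuristics" means for the timing gate, here is a minimal sketch. The PROCEED/GENTLE/DEEP states come from the architecture overview below; the thresholds, function name, and return shape are assumptions, not the actual logic in `timing.py`:

```python
from datetime import datetime

def timing_gate(now: datetime, exploration: bool = False) -> dict:
    """Sketch of a contextual timing gate: pure clock/mode heuristics,
    no LLM call. Thresholds and field names are illustrative."""
    if exploration:
        return {"state": "DEEP", "style": "deeper, expansive"}
    if now.hour >= 23 or now.hour < 6:  # late night
        return {"state": "GENTLE", "style": "shorter, warmer"}
    return {"state": "PROCEED", "style": "default"}

print(timing_gate(datetime(2025, 6, 1, 23, 30)))  # late night -> GENTLE
```

Because no model call is involved, the gate adds essentially zero latency to each response.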
Install
Quick Start (Linux or macOS)
```bash
curl -fsSL https://raw.githubusercontent.com/msb-msb/mycoSwarm/main/scripts/install.sh | bash
mycoswarm chat
```
The installer detects your OS, installs Python and Ollama if needed, pulls a model sized for your RAM, and runs hardware detection.
Manual Install
```bash
pip install mycoswarm
mycoswarm chat
```
Requires Ollama running with at least one model pulled.
macOS (Apple Silicon)
```bash
brew install ollama
ollama serve &
ollama pull gemma3:27b   # or gemma3:4b for 8GB Macs
pip install mycoswarm
mycoswarm chat
```
Apple Silicon unified memory is detected automatically – an M1 with 16GB can run 14B+ models.
Raspberry Pi
Works on Pi 2 and newer. pymupdf (PDF support) is optional – if it fails to build on ARM, PDF reading is disabled but everything else works.
```bash
sudo apt install -y python3-venv git
git clone https://github.com/msb-msb/mycoSwarm.git
cd mycoSwarm
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
mycoswarm detect
```
Pi nodes can't run inference (no GPU, limited RAM) but contribute as web search workers, file processors, and coordinators.
Growing the Swarm
Single-node mode works out of the box. When you're ready for more:
Start the Daemon
```bash
mycoswarm daemon
```
Or install as a service (Linux):
```bash
sudo cp scripts/mycoswarm.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now mycoswarm
```
Add Another Machine
Install mycoSwarm on the second machine and start the daemon. That's it. mDNS handles discovery – no IP addresses to configure, no config files to edit. Within seconds:
```bash
mycoswarm swarm
```
Shows both nodes, their capabilities, and available models.
How Routing Works
The orchestrator scores each node for each task type:
- Inference – GPU nodes (highest VRAM wins)
- Web search / file processing – CPU workers (distributed round-robin)
- Embeddings – Nodes running Ollama with embedding models
- Code execution – CPU workers (sandboxed subprocess)
Tasks go to the best available node. If that node fails, the orchestrator retries on the next candidate. Executive (GPU) nodes are reserved for inference – they won't waste cycles on web searches when CPU workers are available.
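The score-rank-retry pattern described above can be sketched as follows. Field names (`has_gpu`, `vram_gb`, `free_cpu`) and the scoring weights are assumptions for illustration, not the actual code in `orchestrator.py`:

```python
# Hypothetical sketch: score each node for a task type, return candidates
# best-first so a failed dispatch can fall through to the next node.

def score_node(node: dict, task_type: str) -> float:
    """Higher score = better fit; negative = ineligible."""
    if task_type == "inference":
        # GPU nodes only; highest VRAM wins.
        return node.get("vram_gb", 0) if node.get("has_gpu") else -1.0
    if task_type in ("web_search", "file_processing"):
        # Reserve GPU (executive) nodes for inference.
        return -1.0 if node.get("has_gpu") else node.get("free_cpu", 0.0)
    return 0.0

def route(nodes: list[dict], task_type: str) -> list[dict]:
    """Candidates in retry order: best score first, ineligible dropped."""
    ranked = sorted(nodes, key=lambda n: score_node(n, task_type), reverse=True)
    return [n for n in ranked if score_node(n, task_type) >= 0]

nodes = [
    {"name": "Miu", "has_gpu": True, "vram_gb": 24, "free_cpu": 0.2},
    {"name": "naru", "has_gpu": False, "free_cpu": 0.8},
    {"name": "boa", "has_gpu": False, "free_cpu": 0.5},
]
print([n["name"] for n in route(nodes, "inference")])   # GPU node only
print([n["name"] for n in route(nodes, "web_search")])  # CPU workers only
```

The key property: the GPU node never appears in the candidate list for CPU-worker tasks, matching the "executive nodes are reserved for inference" rule.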
CLI Commands
| Command | What It Does |
|---|---|
| `mycoswarm chat` | Interactive chat with memory, tools, and document search |
| `mycoswarm ask "prompt"` | Single question, streamed response |
| `mycoswarm research "topic"` | Parallel web search – synthesized answer with citations |
| `mycoswarm rag "question"` | Answer from your indexed documents |
| `mycoswarm search "query"` | Raw web search results |
| `mycoswarm library ingest [path]` | Index files for document search |
| `mycoswarm library list` | Show indexed documents |
| `mycoswarm detect` | Show hardware and capabilities |
| `mycoswarm swarm` | Swarm overview – all nodes and status |
| `mycoswarm models` | All models across the swarm |
| `mycoswarm plugins` | Installed plugins |
| `mycoswarm memory` | View and manage stored facts |
| `mycoswarm daemon` | Start the swarm daemon |
Chat Slash Commands
| Command | What It Does |
|---|---|
| `/remember <fact>` | Store a persistent fact |
| `/memories` | Show all stored facts |
| `/forget <n>` | Remove a fact by number |
| `/rag <question>` | Search documents and answer |
| `/library` | Show indexed documents |
| `/auto` | Toggle agentic tool routing on/off |
| `/identity` | View name, origin, substrate |
| `/name <n>` | Set or change AI name |
| `/vitals` | Detailed 8 C's breakdown with bar charts |
| `/timing` | Current timing gate state and reasons |
| `/stale` | Show facts approaching decay threshold |
| `/procedure` | View stored wisdom procedures |
| `/model` | Switch model |
| `/clear` | Reset conversation |
| `/quit` | Save session and exit |
Architecture
```
src/mycoswarm/
├── hardware.py      # GPU/CPU/RAM/disk/Ollama detection (Linux, macOS, ARM)
├── capabilities.py  # Node classification – tiers, capabilities, model limits
├── node.py          # Persistent node identity (UUID survives restarts)
├── discovery.py     # mDNS auto-discovery, peer health tracking
├── api.py           # FastAPI service – health, status, peers, tasks, SSE streaming
├── daemon.py        # Main daemon – detection + discovery + API + worker + orchestrator
├── worker.py        # Task handlers – inference, search, embedding, files, code, translate
├── orchestrator.py  # Task routing – scoring, retry, load balancing, inflight tracking
├── plugins.py       # Plugin loader – scan ~/.config/mycoswarm/plugins/
├── solo.py          # Single-node mode – direct Ollama, agentic classification
├── library.py       # Document library – chunking, embeddings, ChromaDB, RAG
├── memory.py        # Persistent memory – facts, session summaries, prompt injection
├── identity.py      # Persistent self-model – name, origin, development stage
├── timing.py        # Wu Wei Timing Gate – PROCEED/GENTLE/DEEP calibration
├── vitals.py        # 8 C's vital signs – self-awareness from pipeline signals
└── cli.py           # All CLI commands and interactive chat
```
Node Tiers
| Tier | Example Hardware | Role |
|---|---|---|
| EXECUTIVE | RTX 3090 workstation | GPU inference, orchestration |
| SPECIALIST | RTX 3060 desktop | GPU inference (smaller models) |
| LIGHT | Lenovo M710Q, Raspberry Pi | Web search, file processing, coordination |
| WORKER | Any CPU-only machine | Distributed task execution |
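A minimal sketch of how tier assignment might work (the real logic lives in `capabilities.py`; the cutoffs and function signature here are assumptions):

```python
def classify_tier(has_gpu: bool, vram_gb: float, ram_gb: float) -> str:
    """Illustrative tier assignment. Cutoffs are guesses for the sketch,
    not the actual rules in capabilities.py."""
    if has_gpu and vram_gb >= 16:
        return "EXECUTIVE"   # e.g. RTX 3090 workstation
    if has_gpu:
        return "SPECIALIST"  # e.g. RTX 3060 desktop
    if ram_gb >= 1:
        return "LIGHT"       # e.g. Lenovo M710Q, Raspberry Pi
    return "WORKER"

print(classify_tier(True, 24, 64))   # EXECUTIVE
print(classify_tier(False, 0, 8))    # LIGHT
```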
Discovery
Nodes broadcast via mDNS (`_mycoswarm._tcp.local.`). No central server, no configuration. Plug in a machine, start the daemon, and the swarm grows.
Task Flow
User asks question on Node A
โ Node A checks: can I handle this locally?
โ Yes: execute locally
โ No: orchestrator scores all peers
โ Dispatch to best peer
โ Stream response back to Node A
Plugins
Extend the swarm without touching core code. Drop a directory into ~/.config/mycoswarm/plugins/:
```
~/.config/mycoswarm/plugins/
└── my_summarizer/
    ├── plugin.yaml
    └── handler.py
```
plugin.yaml:

```yaml
name: my_summarizer
task_type: summarize
description: Summarize text by extracting key points
capabilities: cpu_worker
```
handler.py:

```python
async def handle(task):
    text = task.payload.get("text", "")
    # Your logic here – e.g. keep the first three sentences as a naive summary
    summary = ". ".join(text.split(". ")[:3])
    return {"summary": summary}
```
Restart the daemon. The node advertises the new capability. Other nodes can route summarize tasks to it.
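For illustration, a loader scan over that layout might look like this. The naive `key: value` parsing stands in for a real YAML parser, and the function name is an assumption; the actual loader is `plugins.py`:

```python
from pathlib import Path

def scan_plugins(root: Path) -> list[dict]:
    """Collect metadata for every plugin directory containing both
    plugin.yaml and handler.py. Naive line parsing, illustration only."""
    found = []
    for manifest in sorted(root.glob("*/plugin.yaml")):
        if not (manifest.parent / "handler.py").exists():
            continue  # a plugin needs both files
        meta = {}
        for line in manifest.read_text().splitlines():
            key, sep, value = line.partition(":")
            if sep:
                meta[key.strip()] = value.strip()
        found.append(meta)
    return found
```

Scanning at daemon startup is what lets a dropped-in folder become an advertised capability without touching core code.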
Document Library
Drop files into ~/mycoswarm-docs/ and index them:
```bash
mycoswarm library ingest
```
Supports: PDF, Markdown, TXT, HTML, CSV, JSON.
Files are chunked, embedded (via Ollama), and stored in ChromaDB. Ask questions:
```bash
mycoswarm rag "what does the architecture section describe?"
```
Or use /rag in chat for inline document search.
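Under the hood, ingestion is chunk → embed → store. The chunking step alone might look like this (chunk size and overlap here are illustrative, not `library.py`'s actual parameters):

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks before embedding.
    Overlap keeps context that straddles a chunk boundary retrievable."""
    chunks = []
    step = size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

doc = "word " * 300        # a toy 1500-character document
chunks = chunk_text(doc)
print(len(chunks))         # 5 overlapping chunks
```

Each chunk is then embedded via Ollama and written to ChromaDB, so a `rag` query only needs a nearest-neighbor lookup plus one inference call.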
The Manifesto
Named after mycelium – the underground network connecting a forest. It doesn't centralize. It finds what's available and connects it.
If a student in Lagos with two old laptops can't participate, the framework has failed.
No cloud dependencies. No API keys. No expensive hardware requirements. Every node counts.
What's Next
- Identity development – Monica grows through interaction, not just configuration
- Swarm identity sync – Consistent self-model across all nodes
- Agentic timing gate – SUPPRESS/DEFER/PROCEED for proactive actions
- Agentic planner – LLM generates multi-step plans and executes them across the swarm
- mTLS security – Encrypted, authenticated inter-node communication
- Config files – `~/.config/mycoswarm/config.toml` for persistent settings
- Mesh networking – Connect swarms across the internet via VPN
Contributing
mycoSwarm is MIT licensed. Contributions welcome.
```bash
git clone https://github.com/msb-msb/mycoSwarm.git
cd mycoSwarm
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
python -m pytest tests/ -v   # 398 tests, all offline
```
v0.2.9 | 398 tests | 5 nodes – Built with experience, not hype. InsiderLLM