A lightweight, memory-first, Nostr-primary personal AI assistant
🦀 HermitCrab
Your local, private AI companion that actually remembers, and gets better over time
🤝 Acknowledgments
HermitCrab is a fork of nanobot by HKUDS.
We stand on the shoulders of giants:
- Original nanobot architecture © HKUDS (MIT License)
- Inspired by OpenClaw
Thank you to the nanobot team for creating the foundation that made HermitCrab possible.
What is HermitCrab, really?
HermitCrab is a personal AI agent you run on your own machine.
It's not another cloud wrapper, not a bloated framework, and not yet another SaaS subscription trap.
It's small (about 7,000 lines of core code), readable, auditable, and built around one simple idea:
Your AI should remember what matters to you, forever, without turning into a black box.
Think of it as a second brain you can carry in your pocket (or copy to a new laptop/VPS in seconds).
Just move the workspace/ folder and you're back in business: same memories, same personality, same progress.
Why people may be drawn to it
- Supports fully offline operation with local models (Ollama via LiteLLM)
- Remembers things in plain, human-readable Markdown files (Obsidian compatible, git-friendly)
- Can distill conversations into facts, tasks, decisions, goals, and reflections when that optional background pass is enabled
- Reflects on itself: spots patterns, mistakes, and contradictions, and suggests improvements
- Talks via Nostr (primary), Telegram, email, or plain CLI: your choice
- Stays tiny, fast, and cheap: no 100k+ line monolith
Same crab, new shell.
Move your workspace anywhere. The agent picks up exactly where it left off.
Quick Start (3 commands)
1. Install:

   ```bash
   pip install hermitcrab-ai
   ```

2. Set up your workspace & config:

   ```bash
   hermitcrab onboard
   ```

   This creates `~/.hermitcrab/` with a config file and an empty workspace.

3. Pick a model & run:
Option A: Local Ollama (recommended: private and free)

a. Install Ollama:

```bash
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh

# Start Ollama (runs in the background)
ollama serve
```
b. Pull a model:

```bash
ollama pull lfm2.5-thinking:latest   # Fast thinking model
# Or: ollama pull llama3.1:8b        # General purpose
# Or: ollama pull qwen2.5-coder:7b   # Coding specialist
```
c. Edit `~/.hermitcrab/config.json`:

```json
{
  "providers": {
    "openai": { "apiKey": "ollama", "apiBase": "http://localhost:11434/v1" }
  },
  "models": {
    "main": { "model": "openai/lfm2.5-thinking:latest" },
    "localCoder": { "model": "ollama/qwen2.5-coder:7b" }
  },
  "agents": {
    "modelAliases": { "coder": "localCoder" },
    "defaults": {
      "model": "main",
      "jobModels": { "subagent": "localCoder" }
    }
  }
}
```
Advanced local Ollama example with named models, cloud-routed models, and optional shorthand aliases:
```json
{
  "providers": {
    "openai": { "apiKey": "ollama", "apiBase": "http://localhost:11434/v1" }
  },
  "models": {
    "main": { "model": "openai/kimi-k2.5:cloud" },
    "coder": { "model": "ollama/qwen3.5:4b" },
    "fast": { "model": "openai/lfm2.5-thinking:latest", "reasoningEffort": "medium" }
  },
  "agents": {
    "modelAliases": { "code": "coder" },
    "defaults": {
      "model": "main",
      "jobModels": {
        "subagent": "coder",
        "reflection": "fast",
        "reasoningEffort": "medium"
      }
    }
  }
}
```
Notes:

- For Ollama, prefer the `openai` provider pointed at `http://localhost:11434/v1`. In practice this has much better tool-calling reliability than LiteLLM's native `ollama` route.
- The `ollama` provider is still available, but it currently has weaker tool coverage and more provider-specific tool-call quirks.
- Keep the model name exactly as Ollama exposes it when using the OpenAI-compatible route.
- Prefer the top-level `models` section as the canonical place for model definitions.
- Ollama's OpenAI-compatible `/v1` route does not currently support per-request context-size overrides; use `OLLAMA_CONTEXT_LENGTH` or custom Modelfile-based models when you need a larger local context window.
- `agents.modelAliases` is optional shorthand for runtime ergonomics; it is not required if your named model keys are already concise.
- Subagents can use named models directly, or aliases when you want shorter operator-facing names.
Option B: Cloud model (OpenRouter)

Get an API key at https://openrouter.ai/keys, then edit `~/.hermitcrab/config.json`:

```json
{
  "providers": { "openrouter": { "apiKey": "sk-or-..." } },
  "agents": { "defaults": { "model": "anthropic/claude-sonnet-4" } }
}
```

Then run:

```bash
hermitcrab agent
```

Notes:

- OpenRouter should be configured under `providers.openrouter`, not `providers.custom`.
- Recommended model forms are `anthropic/...`, `openai/...`, `google/...`, and similar upstream model IDs. `openrouter/anthropic/...` also works if you want to be explicit.
- If OpenRouter is your only configured provider, HermitCrab will still route the default `anthropic/claude-opus-4-5` model through OpenRouter.
You're now talking to your own persistent, memory-aware agent.
How the agent actually thinks & remembers
HermitCrab is not a stateless chat loop.
Every session follows a clean lifecycle:
- You talk → agent responds → tools run if needed
- Session ends (you exit, or 30 min of silence)
- Journal synthesis → narrative summary of what happened (cheap model)
- Optional distillation → proposes fallback facts, tasks, goals, and decisions when enabled
- Reflection → looks for mistakes, contradictions, patterns (smarter model)
- Scratchpad archival → per-session transient notes are archived on session end
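Conceptually, the post-session steps above run as a fixed, Python-controlled sequence. A minimal sketch (the class and step names are illustrative, not HermitCrab's actual internals):

```python
# Illustrative sketch of the end-of-session pipeline; names are
# hypothetical, not HermitCrab's real API.
from dataclasses import dataclass, field

@dataclass
class SessionPipeline:
    distillation_enabled: bool = True
    ran: list = field(default_factory=list)

    def run(self) -> list:
        # Each step appends to `ran` so the order is auditable.
        self.ran.append("journal_synthesis")    # cheap model
        if self.distillation_enabled:
            self.ran.append("distillation")     # optional background pass
        self.ran.append("reflection")           # smarter model
        self.ran.append("scratchpad_archival")  # deterministic file move
        return self.ran
```

The point of the sketch: ordering and gating live in ordinary Python control flow, not in a prompt.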
All extracted knowledge lands as tiny, atomic Markdown notes in workspace/memory/:
```
workspace/
├── memory/
│   ├── facts/        # preferences, hard truths
│   ├── decisions/    # choices & reasoning (immutable)
│   ├── goals/        # long-term objectives
│   ├── tasks/        # things to do (with deadlines & status)
│   └── reflections/  # self-analysis, cleanups
├── knowledge/        # reference library (articles, docs, notes)
├── journal/          # narrative session summaries
├── scratchpads/      # per-session transient working notes
└── sessions/         # raw chat logs (for debugging)
```
Everything is:
- Human-readable & editable (open in Obsidian, Vim, Notepad)
- Structured with YAML frontmatter
- Wikilink-friendly
- Deterministic: Python, not the LLM, writes the files
No vector databases. No silent embeddings. No hidden state corruption.
Distillation is conservative and optional by design. Explicit memory writes remain authoritative.
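To make "deterministic, Python-written memory" concrete, here is a minimal sketch of writing one atomic fact note with YAML frontmatter. The helper, schema fields, and exact layout are hypothetical, not HermitCrab's actual code:

```python
# Hypothetical sketch: Python, not the LLM, serializes the note to disk.
from datetime import date
from pathlib import Path

def write_fact_note(root: Path, slug: str, body: str, tags: list[str]) -> Path:
    """Write one atomic Markdown fact note with YAML frontmatter."""
    frontmatter = "\n".join([
        "---",
        "type: fact",
        f"created: {date.today().isoformat()}",
        "tags: [" + ", ".join(tags) + "]",
        "---",
    ])
    path = root / "memory" / "facts" / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(frontmatter + "\n\n" + body + "\n", encoding="utf-8")
    return path
```

The LLM only proposes the slug, body, and tags; Python decides the exact bytes that land on disk.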
Scratchpad and channel prompts
- Every session has a dedicated scratchpad file at `workspace/scratchpads/<session>.md`.
- Scratchpads are transient by design: they are archived to `workspace/scratchpads/archive/` on session end.
- Scratchpad traces are excluded from distillation so transient reasoning doesn't pollute long-term memory.
- Optional per-channel prompt overlays:
  - `workspace/prompts/<channel>.md`
  - `workspace/prompts/<channel>/<chat_id>.md`
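One plausible way to resolve those overlays, most specific first (purely illustrative path logic, not HermitCrab's actual loader):

```python
from pathlib import Path

def prompt_overlay_candidates(workspace: Path, channel: str, chat_id: str) -> list[Path]:
    """Return overlay paths from most to least specific."""
    return [
        workspace / "prompts" / channel / f"{chat_id}.md",  # per-chat overlay
        workspace / "prompts" / f"{channel}.md",            # per-channel overlay
    ]

def load_prompt_overlay(workspace: Path, channel: str, chat_id: str):
    """Return the first overlay that exists, or None."""
    for path in prompt_overlay_candidates(workspace, channel, chat_id):
        if path.is_file():
            return path.read_text(encoding="utf-8")
    return None
```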
Channels: where you talk to your crab
- Nostr (default / primary): encrypted DMs (NIP-04, with NIP-17 groups coming)
- Telegram: classic bot
- Email: IMAP/SMTP
- CLI: quick local chats
All channels feed into the same memory & reflection engine.
Tools: what the agent can actually do
| Tool | What it does |
|---|---|
| read_file | Peek at files in workspace |
| write_file | Create / overwrite files |
| edit_file | Precise replacements |
| list_dir | Browse directories |
| exec | Run safe shell commands |
| web_search | DuckDuckGo search (no API key needed) |
| web_fetch | Fetch & extract URL content (sanitized) |
| knowledge_search | Search your knowledge library |
| knowledge_ingest | Save articles/docs to library |
| message | Reply to you on the active channel |
| spawn | Launch sub-agents |
| cron | Schedule recurring jobs |
Security: Web content is automatically sanitized to remove prompt injection attacks, hidden instructions, and encoded payloads.
Execution is always gated by Python; the LLM can only propose.
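That gating can be sketched as a toy allowlist dispatcher. The tool names come from the table above; the registry shape and `dispatch` function are hypothetical, not HermitCrab's real tool layer:

```python
# Toy sketch of Python-gated tool execution: the LLM proposes a call,
# Python validates it against an explicit registry before running anything.
ALLOWED_TOOLS = {
    "read_file": {"path"},
    "list_dir": {"path"},
    "web_search": {"query"},
}

def dispatch(proposal: dict):
    """Validate a proposed tool call; only valid calls reach execution."""
    name = proposal.get("tool")
    args = proposal.get("args", {})
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"unknown tool: {name!r}")
    unexpected = set(args) - ALLOWED_TOOLS[name]
    if unexpected:
        raise ValueError(f"unexpected arguments: {sorted(unexpected)}")
    # Only at this point would Python actually execute the tool.
    return name, args
```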
Self-Improvement: the part that actually matters
HermitCrab gets smarter over time by:
- Distilling conversations → new facts/tasks/goals/reflections
- Reflecting on patterns → mistakes, contradictions, model misbehavior
- Routing jobs to the right model:
  - Interactive replies → strong model (Claude, GPT-4o, etc.)
  - Journal + distillation → cheap local (Llama 3.2 3B, Phi-3-mini)
  - Reflection → medium model
This keeps costs low while letting the agent learn without constant supervision.
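Job-to-model routing amounts to a small lookup table. A sketch with illustrative model IDs and job labels (not HermitCrab's shipped defaults):

```python
# Illustrative job-to-model routing table; the IDs are examples only.
JOB_MODELS = {
    "interactive": "anthropic/claude-sonnet-4",  # strong, user-facing
    "journal": "ollama/llama3.2:3b",             # cheap local summarizer
    "distillation": "ollama/llama3.2:3b",
    "reflection": "openai/gpt-4o-mini",          # mid-tier analysis
}

def model_for(job: str, default: str = "anthropic/claude-sonnet-4") -> str:
    """Pick the configured model for a job, falling back to the default."""
    return JOB_MODELS.get(job, default)
```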
Subagents and models
HermitCrab can delegate longer-running or specialized work to subagents while the main agent stays responsive.
- Define reusable models in the top-level `models` section
- Set a dedicated subagent model in `agents.defaults.jobModels.subagent`
- Optionally add short aliases in `agents.modelAliases` for runtime convenience
- The agent can use either named models or aliases when spawning delegated work
Example use cases:
- "Build a simple website for X, use the coder subagent"
- "Investigate this bug in the background and report back"
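Resolving an alias or named model key to a concrete model ID can be thought of as a two-step lookup. An illustrative sketch, not the actual implementation:

```python
def resolve_model(name: str, models: dict, aliases: dict) -> str:
    """Resolve an alias or named model key to a concrete model ID."""
    key = aliases.get(name, name)  # alias -> named model key, if one exists
    if key not in models:
        raise KeyError(f"unknown model or alias: {name!r}")
    return models[key]["model"]

# Mirrors the config shown in the Quick Start section.
models = {"localCoder": {"model": "ollama/qwen2.5-coder:7b"}}
aliases = {"coder": "localCoder"}
print(resolve_model("coder", models, aliases))  # ollama/qwen2.5-coder:7b
```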
Architecture at a glance
Total core agent code: 6,927 lines (run ./core_agent_lines.sh to verify).
```
hermitcrab/
├── agent/      # loop, tools, memory handling
├── channels/   # Nostr, Telegram, email, CLI
├── providers/  # LLM abstraction (litellm + fallbacks)
├── config/     # typed config loading
├── cli/        # typer-based interface
└── utils/      # helpers
```
Design rules we live by:
- Python is the source of truth; the LLM is untrusted
- Memory is deterministic & auditable
- Local-first by default
- Small enough to read in a weekend
- Hackable, understandable
Runtime safety defaults
Production-minded defaults live in `hermitcrab/config/schema.py` and are written into `~/.hermitcrab/config.json` on `hermitcrab onboard`.
- LLM retries with exponential backoff
- Max response loop time cap
- Repeated tool-cycle detection (loop break)
- Bounded memory context injection
- Reflection auto-promotion disabled by default (safer file integrity)
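As a sketch of the first two defaults, retry with exponential backoff under a wall-clock cap might look like this (generic code, not HermitCrab's actual implementation; the parameter names are illustrative):

```python
import time

def call_with_backoff(fn, max_retries=3, base_delay=1.0, max_loop_seconds=120.0):
    """Retry fn with exponential backoff, bounded by a wall-clock budget."""
    start = time.monotonic()
    for attempt in range(max_retries + 1):
        if time.monotonic() - start > max_loop_seconds:
            raise TimeoutError("response loop time cap exceeded")
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the original error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```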
Comparison: why this feels different
| Aspect | HermitCrab | Typical AI Framework / Chatbot |
|---|---|---|
| Core code size | ~7k lines | 50k–300k+ lines |
| Memory | Atomic Markdown | Vector DB or forgotten |
| Portability | Copy workspace → works | Cloud account locked |
| Transparency | Fully auditable | Opaque internals |
| Cost | Local models cheap | API calls add up fast |
| Self-improvement | Built-in distillation & reflection | Rare or manual |
Roadmap (where we're going)
Done
- Atomic memory system
- Journal + distillation
- Reflection basics
- Nostr integration
- Local-first deployment
In progress
- Observability / metrics
- Full integration tests
Planned
- Journal search
- Backup & migration helpers
- Optional health-check endpoint
- Web chat companion (static HTML + Nostr)
Why I built this
Most AI tools today:
- Are tied to someone else's cloud
- Forget everything after 4k tokens
- Are impossible to truly understand or audit
- Are expensive to run 24/7
HermitCrab exists to prove a quieter truth:
A personal AI can be small, local, private, deterministic, and still grow with you, without turning into a 200k-line monster or a subscription bill.
Keep it yours. Keep it local. Keep it simple. 🦀
Get started
```bash
pip install hermitcrab-ai
hermitcrab onboard
hermitcrab gateway
```
Welcome to your own second brain. Let's make it remember everything that matters.
Docker
Dockerfile and docker-compose.yml build/run HermitCrab directly.
- Build: `docker compose build`
- Run gateway: `docker compose up -d hermitcrab-gateway`
- Persisted data lives at `~/.hermitcrab` (mounted into the container)