Skip to main content

A lightweight, memory-first, Nostr-primary personal AI assistant

Project description

๐Ÿฆ€ HermitCrab

Your local, private AI companion that actually remembers โ€” and gets better over time

PyPI version Python โ‰ฅ3.11 MIT License

๐Ÿค Acknowledgments

HermitCrab is a fork of nanobot by HKUDS.

We stand on the shoulders of giants:

  • Original nanobot architecture ยฉ HKUDS (MIT License)
  • Inspired by OpenClaw

Thank you to the nanobot team for creating the foundation that made HermitCrab possible.

What is HermitCrab, really?

HermitCrab is a personal AI agent you run on your own machine.
Itโ€™s not another cloud wrapper, not a bloated framework, not yet another SaaS subscription trap.

Itโ€™s small (under 6,000 lines of core code), readable, auditable, and built around one simple idea:
Your AI should remember what matters to you โ€” forever โ€” without turning into a black box.

Think of it as a second brain you can carry in your pocket (or copy to a new laptop/VPS in seconds).
Just move the workspace/ folder and youโ€™re back in business โ€” same memories, same personality, same progress.

Why people may be drawn to it

  • Supports fully offline operation with local models (Ollama via LiteLLM)
  • Remembers things in plain, human-readable Markdown files (Obsidian compatible, git-friendly)
  • Can distill conversations into facts, tasks, decisions, goals, and reflections when that optional background pass is enabled
  • Reflects on itself โ€” spots patterns, mistakes, contradictions, and suggests improvements
  • Talks via Nostr (primary), Telegram, email, or plain CLI โ€” your choice
  • Stays tiny, fast, and cheap โ€” no 100k+ line monolith

Same crab, new shell.
Move your workspace anywhere. The agent picks up exactly where it left off.

Quick Start (3 commands)

  1. Install

    pip install hermitcrab-ai
    
  2. Set up your workspace & config

    hermitcrab onboard
    

    (creates ~/.hermitcrab/ with config and empty workspace)

  3. Pick a model & run

    Option A: Local Ollama (recommended for privacy & free)

    a. Install Ollama:

    # macOS
    brew install ollama
    
    # Linux
    curl -fsSL https://ollama.com/install.sh | sh
    
    # Start Ollama (runs in background)
    ollama serve
    

    b. Pull a model:

    ollama pull lfm2.5-thinking:latest  # Fast thinking model
    # Or: ollama pull llama3.1:8b      # General purpose
    # Or: ollama pull qwen2.5-coder:7b # Coding specialist
    

    c. Edit ~/.hermitcrab/config.json:

    {
      "providers": {
        "openai": {
          "apiKey": "ollama",
          "apiBase": "http://localhost:11434/v1"
        }
      },
      "models": {
        "main": {
          "model": "openai/lfm2.5-thinking:latest"
        },
        "localCoder": {
          "model": "ollama/qwen2.5-coder:7b"
        }
      },
      "agents": {
        "modelAliases": {
          "coder": "localCoder"
        },
        "defaults": {
          "model": "main",
          "jobModels": {
            "subagent": "localCoder"
          }
        }
      }
    }
    

    Advanced local Ollama example with named models, cloud-routed models, and optional shorthand aliases:

    {
      "providers": {
        "openai": {
          "apiKey": "ollama",
          "apiBase": "http://localhost:11434/v1"
        }
      },
      "models": {
        "main": {
          "model": "openai/kimi-k2.5:cloud"
        },
        "coder": {
          "model": "ollama/qwen3.5:4b"
        },
        "fast": {
          "model": "openai/lfm2.5-thinking:latest",
          "reasoningEffort": "medium"
        }
      },
      "agents": {
        "modelAliases": {
          "code": "coder"
        },
        "defaults": {
          "model": "main",
          "jobModels": {
            "subagent": "coder",
            "reflection": "fast",
            "reasoningEffort": "medium"
          }
        }
      }
    }
    

    Notes:

    • For Ollama, prefer the openai provider pointed at http://localhost:11434/v1. In practice this has much better tool-calling reliability than LiteLLM's native ollama route.
    • The ollama provider is still available, but it currently has weaker tool coverage and more provider-specific tool-call quirks.
    • Keep the model name exactly as Ollama exposes it when using the OpenAI-compatible route.
    • Prefer the top-level models section as the canonical place for model definitions.
    • Ollama's OpenAI-compatible /v1 route does not currently support per-request context-size overrides; use OLLAMA_CONTEXT_LENGTH or custom Modelfile-based models when you need a larger local context window.
    • agents.modelAliases is optional shorthand for runtime ergonomics; it is not required if your named model keys are already concise.
    • Subagents can use named models directly, or aliases when you want shorter operator-facing names.

    Option B: Cloud model (OpenRouter)

    # Get API key at https://openrouter.ai/keys
    

    Edit ~/.hermitcrab/config.json:

    {
      "providers": {
        "openrouter": {
          "apiKey": "sk-or-..."
        }
      },
      "agents": {
        "defaults": {
          "model": "anthropic/claude-sonnet-4"
        }
      }
    }
    

    Then run:

    hermitcrab agent
    

    Notes:

    • OpenRouter should be configured under providers.openrouter, not providers.custom.
    • Recommended model forms are anthropic/..., openai/..., google/..., and similar upstream model IDs.
    • openrouter/anthropic/... also works if you want to be explicit.
    • If OpenRouter is your only configured provider, HermitCrab will still route the default anthropic/claude-opus-4-5 model through OpenRouter.

You're now talking to your own persistent, memory-aware agent.

How the agent actually thinks & remembers

HermitCrab is not a stateless chat loop.
Every session follows a clean lifecycle:

  1. You talk โ†’ agent responds โ†’ tools run if needed
  2. Session ends (you exit, or 30 min of silence)
  3. Journal synthesis โ€” narrative summary of what happened (cheap model)
  4. Optional distillation โ€” proposes fallback facts, tasks, goals, and decisions when enabled
  5. Reflection โ€” looks for mistakes, contradictions, patterns (smarter model)
  6. Scratchpad archival โ€” per-session transient notes are archived on session end

All extracted knowledge lands as tiny, atomic Markdown notes in workspace/memory/:

workspace/
โ”œโ”€โ”€ memory/
โ”‚   โ”œโ”€โ”€ facts/          # preferences, hard truths
โ”‚   โ”œโ”€โ”€ decisions/      # choices & reasoning (immutable)
โ”‚   โ”œโ”€โ”€ goals/          # long-term objectives
โ”‚   โ”œโ”€โ”€ tasks/          # things to do (with deadlines & status)
โ”‚   โ””โ”€โ”€ reflections/    # self-analysis, cleanups
โ”œโ”€โ”€ knowledge/          # reference library (articles, docs, notes)
โ”œโ”€โ”€ journal/            # narrative session summaries
โ”œโ”€โ”€ scratchpads/        # per-session transient working notes
โ””โ”€โ”€ sessions/           # raw chat logs (for debugging)

Everything is:

  • Human-readable & editable (open in Obsidian, Vim, Notepad)
  • Structured with YAML frontmatter
  • Wikilink-friendly
  • Deterministic โ€” Python, not the LLM, writes the files

No vector databases. No silent embeddings. No hidden state corruption.

Distillation is conservative and optional by design. Explicit memory writes remain authoritative.

Scratchpad and channel prompts

  • Every session has a dedicated scratchpad file at workspace/scratchpads/<session>.md.
  • Scratchpad is transient by design: it is archived to workspace/scratchpads/archive/ on session end.
  • Scratchpad traces are excluded from distillation so transient reasoning doesn't pollute long-term memory.
  • Optional per-channel prompt overlays:
    • workspace/prompts/<channel>.md
    • workspace/prompts/<channel>/<chat_id>.md

Channels โ€” where you talk to your crab

  • Nostr (default / primary) โ€” encrypted DMs (NIP-04 + NIP-17 groups coming)
  • Telegram โ€” classic bot
  • Email โ€” IMAP/SMTP
  • CLI โ€” quick local chats

All channels feed into the same memory & reflection engine.

Tools โ€” what the agent can actually do

Tool What it does
read_file Peek at files in workspace
write_file Create / overwrite files
edit_file Precise replacements
list_dir Browse directories
exec Run safe shell commands
web_search DuckDuckGo search (no API key needed)
web_fetch Fetch & extract URL content (sanitized)
knowledge_search Search your knowledge library
knowledge_ingest Save articles/docs to library
message Reply to you on the active channel
spawn Launch sub-agents
cron Schedule recurring jobs

Security: Web content is automatically sanitized to remove prompt injection attacks, hidden instructions, and encoded payloads.

Execution is always gated by Python โ€” the LLM can only propose.

Self-Improvement โ€” the part that actually matters

HermitCrab gets smarter over time by:

  • Distilling conversations โ†’ new facts/tasks/goals/reflections
  • Reflecting on patterns โ†’ mistakes, contradictions, model misbehavior
  • Routing jobs to the right model:
    • Interactive replies โ†’ strong model (Claude, GPT-4o, etc.)
    • Journal + distillation โ†’ cheap local (Llama 3.2 3B, Phi-3-mini)
    • Reflection โ†’ medium model

This keeps costs low while letting the agent learn without constant supervision.

Subagents and models

HermitCrab can delegate longer-running or specialized work to subagents while the main agent stays responsive.

  • Define reusable models in top-level models
  • Set a dedicated subagent model in agents.defaults.jobModels.subagent
  • Optionally add short aliases in agents.modelAliases for runtime convenience
  • The agent can use either named models or aliases when spawning delegated work

Example use cases:

  • "Build a simple website for X, use the coder subagent"
  • "Investigate this bug in the background and report back"

Architecture at a glance

Total core agent code: 6,927 lines (run ./core_agent_lines.sh to verify).

hermitcrab/
โ”œโ”€โ”€ agent/         # loop, tools, memory handling
โ”œโ”€โ”€ channels/      # Nostr, Telegram, email, CLI
โ”œโ”€โ”€ providers/     # LLM abstraction (litellm + fallbacks)
โ”œโ”€โ”€ config/        # typed config loading
โ”œโ”€โ”€ cli/           # typer-based interface
โ””โ”€โ”€ utils/         # helpers

Design rules we live by:

  • Python is the source of truth โ€” LLM is untrusted
  • Memory is deterministic & auditable
  • Local-first by default
  • Small enough to read in a weekend
  • Hackable, understandable

Runtime safety defaults

Production-minded defaults are in hermitcrab/config/schema.py and are written into ~/.hermitcrab/config.json on hermitcrab onboard.

  • LLM retries with exponential backoff
  • Max response loop time cap
  • Repeated tool-cycle detection (loop break)
  • Bounded memory context injection
  • Reflection auto-promotion disabled by default (safer file integrity)

Comparison โ€” why this feels different

Aspect HermitCrab Typical AI Framework / Chatbot
Core code size ~6k lines 50kโ€“300k+ lines
Memory Atomic Markdown Vector DB or forgotten
Portability Copy workspace โ†’ works Cloud account locked
Transparency Fully auditable Opaque internals
Cost Local models cheap API calls add up fast
Self-improvement Built-in distillation & reflection Rare or manual

Roadmap (where we're going)

Done

  • Atomic memory system
  • Journal + distillation
  • Reflection basics
  • Nostr integration
  • Local-first deployment

In progress

  • Observability / metrics
  • Full integration tests

Planned

  • Journal search
  • Backup & migration helpers
  • Optional health-check endpoint
  • Web chat companion (static HTML + Nostr)

Why I built this

Most AI tools today are:

  • Tied to someone elseโ€™s cloud
  • Forget everything after 4k tokens
  • Impossible to truly understand or audit
  • Expensive to run 24/7

HermitCrab exists to prove a quieter truth:

A personal AI can be small, local, private, deterministic, and still grow with you โ€” without turning into a 200k-line monster or a subscription bill.

Keep it yours. Keep it local. Keep it simple. ๐Ÿฆ€

Get started

pip install hermitcrab-ai
hermitcrab onboard
hermitcrab gateway

Welcome to your own second brain. Let's make it remember everything that matters.

Docker

Dockerfile and docker-compose.yml build/run HermitCrab directly.

  • Build: docker compose build
  • Run gateway: docker compose up -d hermitcrab-gateway
  • Persisted data lives at ~/.hermitcrab (mounted into container).

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release.See tutorial on generating distribution archives.

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

hermitcrab_ai-0.1.0a3-py3-none-any.whl (181.6 kB view details)

Uploaded Python 3

File details

Details for the file hermitcrab_ai-0.1.0a3-py3-none-any.whl.

File metadata

File hashes

Hashes for hermitcrab_ai-0.1.0a3-py3-none-any.whl
Algorithm Hash digest
SHA256 32c7a58861a52e5b77fd2bc76fae1142383220405fca055ba3868aca7a5b9bf8
MD5 095b8934c673e912938b6a7bff54714e
BLAKE2b-256 dd7177313c240b5dc0fd79049b88b5ce8cbf5a3f49ee2ce40912519a5b96b8bb

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page