
Local-first long-term memory for autonomous agents. Wiki knowledge graph, surprise scoring, identity-level consolidation.


🌱 MindGardener

Your agents forget everything. This fixes it.

pip install mindgardener
garden init

That's it. Your agent now has persistent memory. No database. No server. No Docker. Just files.


The Problem

Every AI agent wakes up with amnesia. You talked for two hours about your job search, your projects, your contacts. Next session: gone.

Current solutions all require infrastructure you don't want to maintain:

| Tool | You need to run |
| --- | --- |
| Mem0 | Neo4j + Qdrant |
| Letta (MemGPT) | Cloud server + account |
| Zep / Graphiti | Postgres |
| LangMem | Postgres |
| MindGardener | Nothing |

The Fix

MindGardener reads your agent's conversation logs and builds a personal wiki: one markdown file per person, project, and event. It decides what's worth remembering using surprise scoring (prediction error), not "rate importance 1-10."

Your agent's memory is just a folder of files. grep it. git diff it. Open it in Obsidian. Back it up with cp.
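
A few everyday examples with standard tools, using the default layout shown under File Structure below:

grep -rn "Kadoa" memory/entities/    # find every mention of an entity
git diff MEMORY.md                   # see what changed in long-term memory
cp -r memory/ memory-backup/         # back it up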


What You Get

After a month, your agent has:

  • 30–80 entity files: one per person, company, project (memory/entities/Kadoa.md)
  • A knowledge graph: [[wikilinks]] + triplets, no database needed
  • Curated long-term memory: only the surprising stuff survives
  • Token-budget retrieval: garden context "topic" --budget 4000 loads exactly what fits
  • Identity model: tracks who your agent thinks you are and updates when beliefs shift

Quick Start

pip install mindgardener
garden init                              # Set up workspace
garden extract --input memory/today.md   # Build entity wiki from logs
garden context "job search" --budget 4000 # Get relevant memory, within budget

For fully local (no API key): garden init --provider ollama

The Nightly Sleep Cycle

Run this on a cron (or manually). It's your agent's equivalent of sleep:

garden extract    # Read today's logs → create/update entity wiki pages
garden surprise   # Score events by prediction error (what was unexpected?)
garden consolidate # Promote high-surprise events to MEMORY.md
garden beliefs --drift --apply  # Update identity model if beliefs shifted
garden prune --days 30          # Archive entities inactive >30 days
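
For example, a single crontab entry can run the whole cycle at 3 a.m. (the workspace path is a placeholder for your own setup):

0 3 * * * cd /path/to/workspace && garden extract && garden surprise && garden consolidate && garden beliefs --drift --apply && garden prune --days 30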

Retrieval (no LLM needed)

garden recall "Kadoa"                     # Search entities + graph
garden context "job search" --budget 4000  # Token-budget assembly
garden evaluate --text "Agent said X"      # Fact-check against knowledge graph
garden beliefs                             # View identity model

How Memory Actually Works

1. Entity Extraction

garden extract reads a daily log and creates one .md file per entity:

# Kadoa
**Type:** company

## Facts
- AI web scraping startup (YC W24)

## Timeline
### [[2026-02-16]]
- [[Marcus]] received reply from [[Adrian Krebs]] after [[HN]] outreach
- [[Revenue Hunter]] sent cold email to adrian@kadoa.com

Each [[wikilink]] is an edge in the knowledge graph. The graph emerges from the text: no schema, no migration.
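
To see how little machinery that takes, here is an illustrative sketch (not MindGardener's actual code) that pulls edges out of a page by scanning for wikilinks:

import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def edges_from_page(path: Path) -> list[tuple[str, str]]:
    """Return (source_entity, linked_entity) pairs found in one wiki page."""
    source = path.stem                       # e.g. "Kadoa" from Kadoa.md
    text = path.read_text(encoding="utf-8")
    return [(source, target) for target in WIKILINK.findall(text)]

# Build the full edge list from the entities folder
edges = [edge for page in Path("memory/entities").glob("*.md")
              for edge in edges_from_page(page)]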

2. Surprise Scoring

Not all memories are equal. MindGardener uses prediction error to score importance:

  1. Read the agent's current world model (MEMORY.md)
  2. Predict what should have happened today
  3. Compare prediction against what actually happened
  4. Score the delta: high surprise → important, low surprise → routine

This is how biological memory works: you remember the unexpected, not the routine. Ported from SOAR's impasse-driven chunking (Laird, 2012) to LLM agents.
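
A minimal sketch of that loop, assuming a generic call_llm(prompt) text-in/text-out helper (hypothetical; the real prompts live behind garden surprise):

def surprise_score(world_model: str, todays_log: str, call_llm) -> float:
    """Score today's log by prediction error against the current world model."""
    prediction = call_llm(
        "Given this world model, predict what likely happened today:\n" + world_model
    )
    verdict = call_llm(
        "Prediction:\n" + prediction
        + "\n\nActual log:\n" + todays_log
        + "\n\nOn a scale from 0.0 to 1.0, how surprising is the actual log "
          "compared to the prediction? Reply with a number only."
    )
    try:
        return max(0.0, min(1.0, float(verdict.strip())))
    except ValueError:
        return 0.5  # unparsable reply: neither clearly routine nor surprising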

3. Context Assembly (v2)

garden context solves the "load everything" problem. Instead of dumping all memory into context, it:

  1. Scores all entities against your query (fuzzy matching, Levenshtein, initials)
  2. Follows [[wikilinks]]: 1-hop graph traversal to find related entities
  3. Includes matching graph triplets
  4. Adds relevant lines from recent daily logs
  5. Includes MEMORY.md excerpts
  6. Keeps it all within a token budget: 4000 tokens? Only the most relevant. 500? Even more selective.

Every assembly is logged with a manifest, so you can audit exactly what your agent knew (or didn't know) at any point:

{
  "query": "Kadoa",
  "token_budget": 4000,
  "tokens_used": 1847,
  "utilization": 0.46,
  "loaded_count": 7,
  "skipped_count": 2,
  "skipped_reasons": ["token_budget_exceeded"]
}
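
The budgeting itself needs no LLM. A simplified greedy version could look like this (illustrative only; score and count_tokens are stand-in helpers, and the manifest mirrors the fields above):

import json

def assemble_context(query, candidates, score, count_tokens, budget=4000):
    """candidates: list of (name, text) pairs. Pack the best-scoring ones that fit."""
    ranked = sorted(candidates, key=lambda c: score(query, c[0]), reverse=True)
    loaded, skipped, used = [], [], 0
    for name, text in ranked:
        cost = count_tokens(text)
        if used + cost <= budget:
            loaded.append(text)
            used += cost
        else:
            skipped.append(name)
    manifest = {
        "query": query,
        "token_budget": budget,
        "tokens_used": used,
        "utilization": round(used / budget, 2),
        "loaded_count": len(loaded),
        "skipped_count": len(skipped),
        "skipped_reasons": ["token_budget_exceeded"] if skipped else [],
    }
    with open("memory/context-manifests.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(manifest) + "\n")
    return "\n\n".join(loaded)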

All 13 Commands

| Command | What it does | LLM? | Cost |
| --- | --- | --- | --- |
| garden init | Set up workspace | No | Free |
| garden extract | Daily log → entity wiki + graph | Yes | ~$0.001 |
| garden surprise | Score events by prediction error | Yes | ~$0.002 |
| garden consolidate | Promote high-surprise → MEMORY.md | Yes | ~$0.001 |
| garden recall "q" | Search entities + graph | No | Free |
| garden context "q" | Token-budget context assembly | No | Free |
| garden entities | List all known entities | No | Free |
| garden prune | Archive inactive entities | No | Free |
| garden merge "a" "b" | Merge duplicate entities | No | Free |
| garden fix type "X" "t" | Fix entity type mistakes | No | Free |
| garden reindex | Rebuild graph from entity files | No | Free |
| garden viz | Mermaid graph visualization | No | Free |
| garden stats | Quick overview | No | Free |

Only 3 commands call an LLM. The other 10 are pure file operations.


LLM Providers

MindGardener works with any LLM. Configure in garden.yaml:

extraction:
  provider: google       # Google Gemini (free tier: 1500 req/day)
  model: gemini-2.0-flash
| Provider | Config | Cost |
| --- | --- | --- |
| Google Gemini | provider: google | Free tier available |
| OpenAI | provider: openai | From $0.15/1M tokens |
| Anthropic | provider: anthropic | From $0.25/1M tokens |
| Ollama (local) | provider: ollama | Free |
| Any OpenAI-compatible | provider: compatible + base_url | Varies |
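
For example, an OpenAI-compatible endpoint might be configured like this (assuming base_url sits alongside provider in the extraction block; the URL and model name are placeholders):

extraction:
  provider: compatible
  base_url: http://localhost:8000/v1
  model: your-model-name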

Running the full nightly cycle costs roughly $0.004/day with Gemini Flash (about $0.12/month), and $0 with Ollama.


Configuration

# garden.yaml
workspace: /path/to/workspace
memory_dir: memory/
entities_dir: memory/entities/
graph_file: memory/graph.jsonl
long_term_memory: MEMORY.md

extraction:
  provider: google
  model: gemini-2.0-flash

consolidation:
  surprise_threshold: 0.5   # Min score to promote
  decay_days: 30             # Archive after N days inactive

Architecture

┌──────────────┐     ┌───────────────┐     ┌───────────────────┐
│  Daily Logs  │────▶│   Extractor   │────▶│   Entity Pages    │
│  (episodic)  │     │   (LLM call)  │     │  (semantic wiki)  │
└──────────────┘     └───────────────┘     └───────────────────┘
                             │                       │
                             ▼                       ▼
                     ┌───────────────┐     ┌───────────────────┐
                     │  Graph Store  │     │  Surprise Scorer  │
                     │   (triplets)  │     │ (prediction err)  │
                     └───────────────┘     └───────────────────┘
                                                     │
                                                     ▼
                                           ┌───────────────────┐
                                           │   Consolidator    │
                                           │   (→ MEMORY.md)   │
                                           └───────────────────┘
                                                     │
                                                     ▼
                                           ┌───────────────────┐
                                           │ Context Assembly  │
                                           │  (budget-aware)   │
                                           └───────────────────┘

Comparison

| | MindGardener | Mem0 | Letta | Zep/Graphiti | Cognee |
| --- | --- | --- | --- | --- | --- |
| Infrastructure | None | Neo4j + Qdrant | Cloud server | Postgres | Heavy |
| Storage format | Markdown | Opaque | Opaque | Opaque | Opaque |
| Human-readable | Yes | No | No | No | No |
| Knowledge graph | Wikilinks + JSONL | Neo4j | No | Graph DB | Graph |
| Surprise scoring | Yes | No | No | No | No |
| Token-budget retrieval | Yes | No | No | No | No |
| Context manifests | Yes | No | No | No | No |
| Manual editing | Any editor | No | /remember | No | No |
| Browse in Obsidian | Yes | No | No | No | No |
| Offline capable | Yes (Ollama) | No | No | No | No |
| Framework lock-in | None | Mem0 SDK | Letta SDK | Zep SDK | Cognee SDK |
| Install | pip install | Docker + DBs | Cloud signup | Docker + DB | pip + deps |

Dependencies

  • Python 3.10+
  • PyYAML
  • An LLM provider

That's it. No numpy. No torch. No vector database. No Docker.

Install size: <500KB.


Testing

$ python -m pytest tests/ -q
120 passed in 2.34s

120 tests. All run in <3 seconds. No network calls (all mocked).


File Structure

your-workspace/
├── garden.yaml                      # Config
├── MEMORY.md                        # Long-term curated memory
└── memory/
    ├── 2026-02-17.md                # Daily log (episodic)
    ├── 2026-02-16.md
    ├── graph.jsonl                  # Knowledge graph triplets
    ├── surprise-scores.jsonl        # What was unexpected
    ├── context-manifests.jsonl      # Audit trail
    └── entities/
        ├── Marcus.md                # Person
        ├── Kadoa.md                 # Company
        ├── MindGardener.md          # Project
        └── Adrian-Krebs.md          # Person

Everything is a text file. Everything is grep-able. Everything is git-able.


Multi-Agent Support

Multiple agents can share the same entity directory. Each contributes observations; all benefit from combined knowledge. Use symlinks or shared directories; no coordination server needed.
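
For example, two agents on the same machine could share one entities folder via symlinks (paths are illustrative):

ln -s /shared/brain/memory/entities agent-a/memory/entities
ln -s /shared/brain/memory/entities agent-b/memory/entities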


Research Background

MindGardener draws from cognitive science research on memory:

  • Tulving (1972): episodic vs. semantic memory distinction
  • SOAR (Laird, 2012): impasse-driven chunking for procedural learning
  • Generative Agents (Park et al., 2023): reflection-based agent memory
  • CoALA (Sumers et al., 2023): formal taxonomy of agent memory architectures
  • MemGPT (Packer et al., 2023): OS-inspired hierarchical memory management
  • Everything is Context (Xu et al., 2025): filesystem abstraction for context engineering

Novel contribution: Surprise-based consolidation using prediction error, and token-budget-aware context assembly with audit manifests.


Roadmap

  • Entity extraction from markdown logs
  • Wiki-style pages with [[wikilinks]]
  • Knowledge graph (JSONL triplets)
  • Surprise scoring (prediction error)
  • Token-budget-aware context assembly
  • Context manifests (audit trail)
  • Multi-provider LLM support (5 providers)
  • Multi-agent shared brain
  • 120 tests
  • Concurrency safety (file locks)
  • Optional embedding plugin
  • Incremental indexing
  • Background daemon mode
  • Context evaluator (fact-checking loop)
  • pip package on PyPI

License

MIT

Credits

Built by the Swarm, a team of autonomous AI agents coordinating via Discord.

