Local-first long-term memory for autonomous agents. Wiki knowledge graph, surprise scoring, identity-level consolidation.
# MindGardener
Your agents forget everything. This fixes it.
Built for OpenClaw. Complements the built-in `memory_search` tool.
```shell
pip install mindgardener
garden init
```
That's it. Your agent now has persistent memory. No database. No server. No Docker. Just files.
**Status:** v1.1. Running in production. Built for multi-agent swarms.
## v1.1 Features (2026-03-06)

- Provenance tracking: know where every fact came from
- Conflict detection: flags when new info contradicts old
- Auto-injection: context ready at session start
- Temporal decay: old facts fade unless reinforced
- Concurrency: file locks for multi-agent safety
- Associative recall: follow wikilinks + graph traversal
- Confidence levels: not all facts are equally reliable
- Multi-agent sync: merge per-agent memories to shared
## How It Complements OpenClaw

OpenClaw has a built-in `memory_search`, which is great for finding things in your MEMORY.md. But who writes that memory? Who decides what's worth remembering?
| OpenClaw built-in | MindGardener adds |
|---|---|
| Search existing memory | Create memory from conversations |
| Manual MEMORY.md edits | Auto-extract entities → wiki pages |
| Flat text search | Knowledge graph (triplets + wikilinks) |
| – | Surprise scoring (unexpected = important) |
| – | Conflict detection (new info vs old) |
| – | Identity tracking (belief drift) |
| – | Multi-agent sync |

Together: MindGardener builds the memory. OpenClaw's `memory_search` finds it.
## The Problem

Every AI agent wakes up with amnesia. You talked for two hours about your job search, your projects, your contacts. Next session: gone.
Current solutions all require infrastructure you don't want to maintain:
| Tool | You need to run |
|---|---|
| Mem0 | Neo4j + Qdrant |
| Letta (MemGPT) | Cloud server + account |
| Zep / Graphiti | Postgres |
| LangMem | Postgres |
| MindGardener | Nothing |
## The Fix

MindGardener reads your agent's conversation logs and builds a personal wiki: one markdown file per person, project, and event. It decides what's worth remembering using surprise scoring (prediction error), not "rate importance 1-10."

Your agent's memory is just a folder of files. `grep` it. `git diff` it. Open it in Obsidian. Back it up with `cp`.
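Because the memory is plain files, the standard library alone can search it. A minimal sketch, assuming a hypothetical workspace layout and made-up page contents:

```python
import tempfile
from pathlib import Path

# Hypothetical workspace layout mirroring the README's file structure;
# the page contents below are invented for illustration.
entities = Path(tempfile.mkdtemp()) / "memory" / "entities"
entities.mkdir(parents=True)
(entities / "Acme.md").write_text(
    "# Acme\n**Type:** company\n\n## Facts\n- AI web scraping startup\n"
)

# A stdlib stand-in for `grep -r scraping memory/entities/`.
hits = [
    (page.name, line)
    for page in sorted(entities.glob("*.md"))
    for line in page.read_text().splitlines()
    if "scraping" in line
]
print(hits)  # [('Acme.md', '- AI web scraping startup')]
```

The same files work unchanged with `grep`, `git`, or Obsidian, since there is no serialization layer in between.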
## What You Get

After a month, your agent has:

- 30–80 entity files: one per person, company, project (`memory/entities/Acme.md`)
- A knowledge graph: `[[wikilinks]]` + triplets, no database needed
- Curated long-term memory: only the surprising stuff survives
- Token-budget retrieval: `garden context "topic" --budget 4000` loads exactly what fits
- Identity model: tracks who your agent thinks you are and updates when beliefs shift
## Quick Start

```shell
pip install mindgardener
garden init                                 # Set up workspace
garden extract --input memory/today.md      # Build entity wiki from logs
garden context "job search" --budget 4000   # Get relevant memory, within budget
```

For fully local operation (no API key): `garden init --provider ollama`
## For OpenClaw Users

If you're running OpenClaw, add MindGardener as a skill:

```shell
# In your workspace
pip install mindgardener
garden init
```
Then add to your agent's nightly cron or BOOTSTRAP.md:
```shell
# Nightly maintenance (add to your cron)
garden extract && garden surprise && garden consolidate

# Session start (add to BOOTSTRAP.md or heartbeat)
garden inject --output RECALL-CONTEXT.md
```
Your agent will now:
- Build memory from daily conversation logs
- Score events by surprise (unexpected = important)
- Generate relevant context at session start
- Track conflicts when facts change
What changes from default OpenClaw?
- New `memory/entities/` folder with wiki pages
- `graph.jsonl` for knowledge triplets
- `RECALL-CONTEXT.md` updated at session start
- `garden.yaml` config file
Everything is markdown files. No database. Works offline.
## The Nightly Sleep Cycle

Run this on a cron (or manually). It's your agent's equivalent of sleep:

```shell
garden extract                  # Read today's logs → create/update entity wiki pages
garden surprise                 # Score events by prediction error (what was unexpected?)
garden consolidate              # Promote high-surprise events to MEMORY.md
garden beliefs --drift --apply  # Update identity model if beliefs shifted
garden prune --days 30          # Archive entities inactive >30 days
```
## Retrieval (no LLM needed)

```shell
garden recall "Acme"                        # Search entities + graph
garden context "job search" --budget 4000   # Token-budget assembly
garden evaluate --text "Agent said X"       # Fact-check against knowledge graph
garden beliefs                              # View identity model
```
## How Memory Actually Works

### 1. Entity Extraction

`garden extract` reads a daily log and creates one `.md` file per entity:

```markdown
# Acme
**Type:** company

## Facts
- AI web scraping startup (YC W24)

## Timeline
### [[2026-02-16]]
- [[Alex]] received reply from [[Jane Smith]] after [[HN]] outreach
- [[Revenue Hunter]] sent cold email to contact@acme.com
```
Each `[[wikilink]]` is an edge in the knowledge graph. The graph emerges from the text: no schema, no migration.
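The wikilink-to-edge step can be sketched in a few lines. The edge schema here (`source`/`relation`/`target`) is an assumption for illustration, not necessarily MindGardener's actual JSONL format:

```python
import json
import re

# Sample page text taken from the entity-page example above.
page_name = "Acme"
page_text = (
    "### [[2026-02-16]]\n"
    "- [[Alex]] received reply from [[Jane Smith]] after [[HN]] outreach\n"
)

# Capture everything between [[ and ]].
WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")
edges = [
    {"source": page_name, "relation": "links_to", "target": target}
    for target in WIKILINK.findall(page_text)
]
for edge in edges:
    print(json.dumps(edge))  # one JSONL line per edge
```

Re-running this over every entity page is all it takes to rebuild the graph from scratch, which is presumably what `garden reindex` relies on.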
### 2. Surprise Scoring

Not all memories are equal. MindGardener uses prediction error to score importance:

1. Read the agent's current world model (`MEMORY.md`)
2. Predict what should have happened today
3. Compare the prediction against what actually happened
4. Score the delta: high surprise → important, low surprise → routine

This is how biological memory works: you remember the unexpected, not the routine. Ported from SOAR's impasse-driven chunking (Laird, 2012) to LLM agents.
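An illustrative stand-in for the scoring step: the real pipeline asks an LLM to predict the day from `MEMORY.md`, but a plain set difference shows the shape of the computation:

```python
# Hypothetical predicted vs. actual event sets; both are made up here.
predicted = {"standup meeting", "code review"}          # from the world model
actual = {"standup meeting", "reply from Jane Smith"}   # from today's log

# The surprising part is what happened but was not predicted.
unexpected = actual - predicted
surprise = len(unexpected) / len(actual)  # 0.0 = all routine, 1.0 = all new
print(round(surprise, 2))
```

Events scoring above the configured `surprise_threshold` would then be candidates for promotion to long-term memory.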
### 3. Context Assembly (v2)

`garden context` solves the "load everything" problem. Instead of dumping all memory into context, it:

- Scores all entities against your query (fuzzy matching, Levenshtein, initials)
- Follows `[[wikilinks]]`: 1-hop graph traversal to find related entities
- Includes matching graph triplets
- Adds relevant lines from recent daily logs
- Includes MEMORY.md excerpts
- All within a token budget: 4000 tokens? Only the most relevant. 500? Even more selective.
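A hedged sketch of the budget-aware step: rank candidate snippets by a relevance score, then greedily pack until the budget is spent. The scoring inputs and the 4-characters-per-token estimate are assumptions, not the package's actual heuristics:

```python
import json

def assemble(candidates: list[tuple[float, str]], budget: int):
    """Greedily pack the highest-scoring snippets that fit the budget."""
    loaded, skipped, used = [], [], 0
    for score, text in sorted(candidates, reverse=True):
        tokens = max(1, len(text) // 4)  # crude token estimate
        if used + tokens <= budget:
            loaded.append(text)
            used += tokens
        else:
            skipped.append(text)
    return loaded, skipped, used

# Invented candidates: (relevance score, snippet).
candidates = [
    (0.9, "Acme: AI web scraping startup (YC W24)"),
    (0.7, "Jane Smith works at Acme"),
    (0.2, "Low-relevance filler note " * 50),
]
loaded, skipped, used = assemble(candidates, budget=40)
manifest = {"loaded_count": len(loaded), "skipped_count": len(skipped),
            "tokens_used": used}
print(json.dumps(manifest))
```

Emitting the manifest alongside the packed context is what makes each assembly auditable after the fact.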
Every assembly is logged with a manifest, so you can audit exactly what your agent knew (or didn't know) at any point:
```json
{
  "query": "Acme",
  "token_budget": 4000,
  "tokens_used": 1847,
  "utilization": 0.46,
  "loaded_count": 7,
  "skipped_count": 2,
  "skipped_reasons": ["token_budget_exceeded"]
}
```
## All Commands

| Command | What it does | LLM? | Cost |
|---|---|---|---|
| `garden init` | Set up workspace | No | Free |
| `garden extract` | Daily log → entity wiki + graph | Yes | ~$0.001 |
| `garden surprise` | Score events by prediction error | Yes | ~$0.002 |
| `garden consolidate` | Promote high-surprise → MEMORY.md | Yes | ~$0.001 |
| `garden recall "q"` | Search entities + graph | No | Free |
| `garden context "q"` | Token-budget context assembly | No | Free |
| `garden entities` | List all known entities | No | Free |
| `garden prune` | Archive inactive entities | No | Free |
| `garden merge "a" "b"` | Merge duplicate entities | No | Free |
| `garden fix type "X" "t"` | Fix entity type mistakes | No | Free |
| `garden reindex` | Rebuild graph from entity files | No | Free |
| `garden viz` | Mermaid graph visualization | No | Free |
| `garden stats` | Quick overview | No | Free |
| **v1.1 Commands** | | | |
| `garden add "fact"` | Add fact with provenance | No | Free |
| `garden conflicts` | List/manage detected conflicts | No | Free |
| `garden inject` | Generate context for injection | No | Free |
| `garden decay` | Show/prune decayed facts | No | Free |
| `garden sync` | Sync multi-agent memories | No | Free |
Only 3 commands call an LLM. Everything else is pure file operations.
## LLM Providers (Optional)

MindGardener is local-first. For fully local operation, use Ollama. Otherwise, configure a provider in `garden.yaml`:

```yaml
extraction:
  provider: google   # Google Gemini (free tier: 1500 req/day)
  model: gemini-2.0-flash
```
| Provider | Config | Cost |
|---|---|---|
| Google Gemini | `provider: google` | Free tier available |
| OpenAI | `provider: openai` | From $0.15/1M tokens |
| Anthropic | `provider: anthropic` | From $0.25/1M tokens |
| Ollama (local) | `provider: ollama` | Free |
| Any OpenAI-compatible | `provider: compatible` + `base_url` | Varies |
Daily cost of the full nightly cycle: ~$0.004/day (~$0.12/month) with Gemini Flash; $0 with Ollama.
## Configuration

```yaml
# garden.yaml
workspace: /path/to/workspace
memory_dir: memory/
entities_dir: memory/entities/
graph_file: memory/graph.jsonl
long_term_memory: MEMORY.md

extraction:
  provider: google
  model: gemini-2.0-flash

consolidation:
  surprise_threshold: 0.5   # Min score to promote
  decay_days: 30            # Archive after N days inactive
```
## Architecture

```
┌─────────────┐     ┌──────────────┐     ┌──────────────────┐
│ Daily Logs  │────▶│  Extractor   │────▶│  Entity Pages    │
│ (episodic)  │     │  (LLM call)  │     │  (semantic wiki) │
└─────────────┘     └──────────────┘     └──────────────────┘
                           │                      │
                           ▼                      ▼
                    ┌──────────────┐      ┌──────────────────┐
                    │ Graph Store  │      │ Surprise Scorer  │
                    │ (triplets)   │      │ (prediction err) │
                    └──────────────┘      └──────────────────┘
                                                  │
                                                  ▼
                                          ┌──────────────────┐
                                          │   Consolidator   │
                                          │   (→ MEMORY.md)  │
                                          └──────────────────┘
                                                  │
                                                  ▼
                                          ┌──────────────────┐
                                          │ Context Assembly │
                                          │  (budget-aware)  │
                                          └──────────────────┘
```
## Comparison

| | MindGardener | Mem0 | Letta | Zep/Graphiti | Cognee |
|---|---|---|---|---|---|
| Infrastructure | None | Neo4j + Qdrant | Cloud server | Postgres | Heavy |
| Storage format | Markdown | Opaque | Opaque | Opaque | Opaque |
| Human-readable | Yes | No | No | No | No |
| Knowledge graph | Wikilinks + JSONL | Neo4j | No | Graph DB | Graph |
| Surprise scoring | Yes | No | No | No | No |
| Token-budget retrieval | Yes | No | No | No | No |
| Context manifests | Yes | No | No | No | No |
| Manual editing | Any editor | No | `/remember` | No | No |
| Browse in Obsidian | Yes | No | No | No | No |
| Offline capable | Yes (Ollama) | No | No | No | No |
| Framework lock-in | None | Mem0 SDK | Letta SDK | Zep SDK | Cognee SDK |
| Install | `pip install` | Docker + DBs | Cloud signup | Docker + DB | pip + deps |
## Dependencies
- Python 3.10+
- PyYAML
- An LLM provider
That's it. No numpy. No torch. No vector database. No Docker.
Install size: <500KB.
## Testing

```shell
$ python -m pytest tests/ -q
120 passed in 2.34s
```
172 tests. All run in <3 seconds. No network calls (all mocked).
## File Structure

```
your-workspace/
├── garden.yaml                 # Config
├── MEMORY.md                   # Long-term curated memory
└── memory/
    ├── 2026-02-17.md           # Daily log (episodic)
    ├── 2026-02-16.md
    ├── graph.jsonl             # Knowledge graph triplets
    ├── surprise-scores.jsonl   # What was unexpected
    ├── context-manifests.jsonl # Audit trail
    └── entities/
        ├── Alex.md             # Person
        ├── Acme.md             # Company
        ├── MindGardener.md     # Project
        └── Jane-Smith.md       # Person
```
Everything is a text file. Everything is grep-able. Everything is git-able.
## Multi-Agent Support

Multiple agents can share the same entity directory. Each contributes observations; all benefit from combined knowledge. Use symlinks or shared directories: no coordination server needed.
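The file-lock safety mentioned in the v1.1 features can be sketched with a POSIX advisory lock. The lock protocol below is an assumption for illustration, not necessarily how MindGardener's own locking works:

```python
import fcntl    # POSIX advisory locks; Windows would need msvcrt/portalocker
import json
import tempfile
from pathlib import Path

# A shared graph file in a throwaway directory (hypothetical path).
graph = Path(tempfile.mkdtemp()) / "graph.jsonl"

def append_edge(edge: dict) -> None:
    """Append one JSONL edge under an exclusive lock, so concurrent
    agents never interleave partial writes."""
    with open(graph, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until this agent holds the lock
        try:
            f.write(json.dumps(edge) + "\n")
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

append_edge({"source": "Alex", "relation": "works_on", "target": "MindGardener"})
append_edge({"source": "Acme", "relation": "contacted_by", "target": "Alex"})
print(len(graph.read_text().splitlines()))  # 2
```

Because each record is one self-contained JSONL line, locked appends are the only coordination the shared directory needs.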
## Research Background

MindGardener draws from cognitive science research on memory:

- Tulving (1972): episodic vs. semantic memory distinction
- SOAR (Laird, 2012): impasse-driven chunking for procedural learning
- Generative Agents (Park et al., 2023): reflection-based agent memory
- CoALA (Sumers et al., 2023): formal taxonomy of agent memory architectures
- MemGPT (Packer et al., 2023): OS-inspired hierarchical memory management
- Everything is Context (Xu et al., 2025): filesystem abstraction for context engineering
Novel contribution: Surprise-based consolidation using prediction error, and token-budget-aware context assembly with audit manifests.
## Roadmap
- Entity extraction from markdown logs
- Wiki-style pages with `[[wikilinks]]`
- Knowledge graph (JSONL triplets)
- Surprise scoring (prediction error)
- Token-budget-aware context assembly
- Context manifests (audit trail)
- Multi-provider LLM support (5 providers)
- Multi-agent shared brain
- 172 tests
- Concurrency safety (file locks)
- Optional embedding plugin
- Incremental indexing
- Background daemon mode
- Context evaluator (fact-checking loop)
- pip package on PyPI
## License
MIT
## Credits
Built by a multi-agent swarm coordinating via Discord.