Give your AI a memory — mine projects and conversations into a searchable palace. No API key required.
MemPalace
The highest-scoring AI memory system ever benchmarked. And it's free.
Every conversation you have with an AI — every decision, every debugging session, every architecture debate — disappears when the session ends. Six months of work, gone. You start over every time.
Other memory systems try to fix this by letting an AI decide what's worth remembering. They extract "user prefers Postgres" and throw away the conversation where you explained why. MemPalace takes a different approach: store everything, then make it findable.
The Palace — Ancient Greek orators memorized entire speeches by placing ideas in rooms of an imaginary building. Walk through the building, find the idea. MemPalace applies the same principle to AI memory: your conversations are organized into wings (people and projects), halls (types of memory), and rooms (specific ideas). No AI decides what matters — you keep every word, and the structure makes it searchable. That structure alone improves retrieval by 34%.
AAAK — To make all that data usable, MemPalace compresses it with AAAK — a lossless shorthand dialect designed for AI agents. Not meant to be read by humans — meant to be read by your AI, fast. 30x compression, zero information loss. Your AI loads months of context in ~120 tokens. Nothing else like it exists.
Local, open, adaptable — MemPalace runs entirely on your machine, on any data you have locally, with no external APIs or services. It has been tested on conversations, but it can be adapted to other kinds of data stores. That's why we're open-sourcing it.
Quick Start · The Palace · AAAK Dialect · Benchmarks · MCP Tools
Highest LongMemEval score ever published — free or paid.
| Metric | What |
|---|---|
| 96.6% | LongMemEval R@5, zero API calls |
| 100% | LongMemEval R@5 with Haiku rerank |
| +34% | Retrieval boost from palace structure |
| $0 | No subscription. No cloud. Local only. |
Reproducible — runners in benchmarks/. Full results.
Quick Start
pip install mempalace
# Set up your world — who you work with, what your projects are
mempalace init ~/projects/myapp
# Mine your data
mempalace mine ~/projects/myapp # projects — code, docs, notes
mempalace mine ~/chats/ --mode convos # convos — Claude, ChatGPT, Slack exports
mempalace mine ~/chats/ --mode convos --extract general # general — classifies into decisions, milestones, problems
# Search anything you've ever discussed
mempalace search "why did we switch to GraphQL"
# Your AI remembers
mempalace status
Three mining modes: projects (code and docs), convos (conversation exports), and general (auto-classifies into decisions, preferences, milestones, problems, and emotional context). Everything stays on your machine.
The Problem
Decisions happen in conversations now. Not in docs. Not in Jira. In conversations with Claude, ChatGPT, Copilot. The reasoning, the tradeoffs, the "we tried X and it failed because Y" — all trapped in chat windows that evaporate when the session ends.
Six months of daily AI use = 19.5 million tokens. That's every decision, every debugging session, every architecture debate. Gone.
| Approach | Tokens loaded | Annual cost |
|---|---|---|
| Paste everything | 19.5M — doesn't fit any context window | Impossible |
| LLM summaries | ~650K | ~$507/yr |
| MemPalace wake-up | ~170 tokens | ~$0.70/yr |
| MemPalace + 5 searches | ~13,500 tokens | ~$10/yr |
MemPalace loads 170 tokens of critical facts on wake-up — your team, your projects, your preferences. Then searches only when needed. $10/year to remember everything vs $507/year for summaries that lose context.
How It Works
The Palace
┌─────────────────────────────────────────────────────────────┐
│ WING: Person │
│ │
│ ┌──────────┐ ──hall── ┌──────────┐ │
│ │ Room A │ │ Room B │ │
│ └────┬─────┘ └──────────┘ │
│ │ │
│ ▼ │
│ ┌──────────┐ ┌──────────┐ │
│ │ Closet │ ───▶ │ Drawer │ │
│ └──────────┘ └──────────┘ │
└─────────┼──────────────────────────────────────────────────┘
│
tunnel
│
┌─────────┼──────────────────────────────────────────────────┐
│ WING: Project │
│ │ │
│ ┌────┴─────┐ ──hall── ┌──────────┐ │
│ │ Room A │ │ Room C │ │
│ └────┬─────┘ └──────────┘ │
│ │ │
│ ▼ │
│ ┌──────────┐ ┌──────────┐ │
│ │ Closet │ ───▶ │ Drawer │ │
│ └──────────┘ └──────────┘ │
└─────────────────────────────────────────────────────────────┘
- **Wings** — a person or project. As many as you need.
- **Rooms** — specific topics within a wing. Auth, billing, deploy — endless rooms.
- **Halls** — connections between related rooms within the same wing. If Room A (auth) and Room B (security) are related, a hall links them.
- **Tunnels** — connections between wings. When Person A and a Project both have a room about "auth," a tunnel cross-references them automatically.
- **Closets** — compressed memories stored in AAAK. Fast for AI to read.
- **Drawers** — the original verbatim transcripts. The exact words, never summarized.
Halls are memory types — the same in every wing, acting as corridors:
- `hall_facts` — decisions made, choices locked in
- `hall_events` — sessions, milestones, debugging
- `hall_discoveries` — breakthroughs, new insights
- `hall_preferences` — habits, likes, opinions
- `hall_advice` — recommendations and solutions
Rooms are named ideas — auth-migration, graphql-switch, ci-pipeline. When the same room appears in different wings, it creates a tunnel — connecting the same topic across domains:
wing_kai / hall_events / auth-migration → "Kai debugged the OAuth token refresh"
wing_driftwood / hall_facts / auth-migration → "team decided to migrate auth to Clerk"
wing_priya / hall_advice / auth-migration → "Priya approved Clerk over Auth0"
Same room. Three wings. The tunnel connects them.
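The tunnel mechanic above can be sketched in a few lines of plain Python. This is an illustration of the idea, not MemPalace's actual implementation: group memories by room name, and any room that appears in more than one wing forms a tunnel. The sample memories are hypothetical.

```python
from collections import defaultdict

# Toy memories: (wing, hall, room, content) — hypothetical data for illustration
memories = [
    ("wing_kai", "hall_events", "auth-migration", "Kai debugged the OAuth token refresh"),
    ("wing_driftwood", "hall_facts", "auth-migration", "team decided to migrate auth to Clerk"),
    ("wing_priya", "hall_advice", "auth-migration", "Priya approved Clerk over Auth0"),
    ("wing_driftwood", "hall_facts", "ci-pipeline", "moved CI to GitHub Actions"),
]

def find_tunnels(memories):
    """A room that appears in two or more wings forms a tunnel."""
    wings_by_room = defaultdict(set)
    for wing, _hall, room, _content in memories:
        wings_by_room[room].add(wing)
    return {room: sorted(wings) for room, wings in wings_by_room.items() if len(wings) > 1}

tunnels = find_tunnels(memories)
# "auth-migration" spans three wings, so it forms a tunnel; "ci-pipeline" does not
```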
Why Structure Matters
Tested on 22,000+ real conversation memories:
Search all closets: 60.9% R@10
Search within wing: 73.1% (+12%)
Search wing + hall: 84.8% (+24%)
Search wing + room: 94.8% (+34%)
Wings and rooms aren't cosmetic. They're a 34% retrieval improvement. The palace structure is the product.
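Why filtering helps is easy to see in miniature. Here is a minimal sketch — not the real search path, which ranks semantically via ChromaDB — showing how wing and room metadata filters shrink the candidate pool before ranking even starts. The corpus is made up.

```python
# Toy corpus: (wing, room, text) — hypothetical data for illustration
closets = [
    ("wing_orion", "database", "chose Postgres for concurrent writes"),
    ("wing_orion", "auth", "JWT refresh tokens rotate hourly"),
    ("wing_nova", "database", "SQLite is fine, single writer"),
    ("wing_nova", "deploy", "blue/green deploys on the staging cluster"),
]

def candidates(closets, wing=None, room=None):
    """Apply wing/room metadata filters before any semantic ranking."""
    return [
        text for w, r, text in closets
        if (wing is None or w == wing) and (room is None or r == room)
    ]

# Unfiltered search ranks 4 candidates; wing+room narrows it to exactly 1.
```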
The Memory Stack
| Layer | What | Size | When |
|---|---|---|---|
| L0 | Identity — who is this AI? | ~50 tokens | Always loaded |
| L1 | Critical facts — team, projects, preferences | ~120 tokens (AAAK) | Always loaded |
| L2 | Room recall — recent sessions, current project | On demand | When topic comes up |
| L3 | Deep search — semantic query across all closets | On demand | When explicitly asked |
Your AI wakes up with L0 + L1 (~170 tokens) and knows your world. Searches only fire when needed.
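The stack can be pictured as a tiny loader. This sketch uses made-up layer contents and helper names — it only illustrates when each layer's token cost is paid:

```python
LAYERS = {
    "L0": "identity: you are Kai's assistant",   # always loaded
    "L1": "TEAM: PRI(lead)|KAI(backend)...",     # always loaded (AAAK critical facts)
}

def wake_up():
    """Wake-up context is only L0 + L1; deeper layers stay on disk."""
    return [LAYERS["L0"], LAYERS["L1"]]

def on_topic(topic, room_index):
    """L2 room recall fires only when a known topic comes up."""
    return room_index.get(topic, [])

context = wake_up()                       # two small strings, every session
room_index = {"auth": ["last session: fixed token refresh"]}
context += on_topic("auth", room_index)   # L2: loaded on demand
```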
AAAK Compression
AAAK is a lossless dialect — 30x compression, readable by any LLM without a decoder.
English (~1000 tokens):
Priya manages the Driftwood team: Kai (backend, 3 years), Soren (frontend),
Maya (infrastructure), and Leo (junior, started last month). They're building
a SaaS analytics platform. Current sprint: auth migration to Clerk.
Kai recommended Clerk over Auth0 based on pricing and DX.
AAAK (~120 tokens):
TEAM: PRI(lead) | KAI(backend,3yr) SOR(frontend) MAY(infra) LEO(junior,new)
PROJ: DRIFTWOOD(saas.analytics) | SPRINT: auth.migration→clerk
DECISION: KAI.rec:clerk>auth0(pricing+dx) | ★★★★
Same information. 8x fewer tokens. Your AI learns AAAK automatically from the MCP server — no manual setup.
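The flavor of the dialect — short entity codes plus delimiter-packed fields — can be imitated with a toy encoder. This is not the real AAAK grammar, just an illustration of why dense codes and separators save tokens:

```python
# Hypothetical entity-code table, mirroring the example above
CODES = {"Priya": "PRI", "Kai": "KAI", "Soren": "SOR", "Maya": "MAY", "Leo": "LEO"}

def encode_team(lead, members):
    """Pack a team roster into one pipe-delimited line using short entity codes."""
    packed = " ".join(f"{CODES[name]}({role})" for name, role in members)
    return f"TEAM: {CODES[lead]}(lead) | {packed}"

line = encode_team("Priya", [("Kai", "backend,3yr"), ("Soren", "frontend")])
# → "TEAM: PRI(lead) | KAI(backend,3yr) SOR(frontend)"
```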
Contradiction Detection
MemPalace catches mistakes before they reach you:
Input: "Soren finished the auth migration"
Output: 🔴 AUTH-MIGRATION: attribution conflict — Maya was assigned, not Soren
Input: "Kai has been here 2 years"
Output: 🟡 KAI: wrong_tenure — records show 3 years (started 2023-04)
Input: "The sprint ends Friday"
Output: 🟡 SPRINT: stale_date — current sprint ends Thursday (updated 2 days ago)
Facts checked against the knowledge graph. Ages, dates, and tenures calculated dynamically — not hardcoded.
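The check itself amounts to comparing an incoming claim against stored facts. A minimal sketch with hypothetical data — not the shipped checker, which reads the knowledge graph:

```python
# Ground truth pulled from the knowledge graph (hypothetical facts)
graph = {
    ("auth-migration", "assigned_to"): "Maya",
    ("Kai", "tenure_started"): 2023,
}

def check_attribution(task, claimed_person):
    """Flag a claim that credits the wrong person for a tracked task."""
    actual = graph.get((task, "assigned_to"))
    if actual and actual != claimed_person:
        return f"🔴 {task.upper()}: attribution conflict — {actual} was assigned, not {claimed_person}"
    return None

def check_tenure(person, claimed_years, current_year=2026):
    """Compute tenure from the start date — never from a hardcoded number."""
    started = graph.get((person, "tenure_started"))
    if started is not None:
        actual_years = current_year - started
        if actual_years != claimed_years:
            return f"🟡 {person.upper()}: wrong_tenure — records show {actual_years} years (started {started})"
    return None
```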
Real-World Examples
Solo developer across multiple projects
# Mine each project's conversations
mempalace mine ~/chats/orion/ --mode convos --wing orion
mempalace mine ~/chats/nova/ --mode convos --wing nova
mempalace mine ~/chats/helios/ --mode convos --wing helios
# Six months later: "why did I use Postgres here?"
mempalace search "database decision" --wing orion
# → "Chose Postgres over SQLite because Orion needs concurrent writes
# and the dataset will exceed 10GB. Decided 2025-11-03."
# Cross-project search
mempalace search "rate limiting approach"
# → finds your approach in Orion AND Nova, shows the differences
Team lead managing a product
# Mine Slack exports and AI conversations
mempalace mine ~/exports/slack/ --mode convos --wing driftwood
mempalace mine ~/.claude/projects/ --mode convos
# "What did Soren work on last sprint?"
mempalace search "Soren sprint" --wing driftwood
# → 14 closets: OAuth refactor, dark mode, component library migration
# "Who decided to use Clerk?"
mempalace search "Clerk decision" --wing driftwood
# → "Kai recommended Clerk over Auth0 — pricing + developer experience.
# Team agreed 2026-01-15. Maya handling the migration."
Before mining: split mega-files
Some transcript exports concatenate multiple sessions into one huge file:
mempalace split ~/chats/ # split into per-session files
mempalace split ~/chats/ --dry-run # preview first
mempalace split ~/chats/ --min-sessions 3 # only split files with 3+ sessions
Knowledge Graph
Temporal entity-relationship triples — like Zep's Graphiti, but SQLite instead of Neo4j. Local and free.
from mempalace.knowledge_graph import KnowledgeGraph
kg = KnowledgeGraph()
kg.add_triple("Kai", "works_on", "Orion", valid_from="2025-06-01")
kg.add_triple("Maya", "assigned_to", "auth-migration", valid_from="2026-01-15")
kg.add_triple("Maya", "completed", "auth-migration", valid_from="2026-02-01")
# What's Kai working on?
kg.query_entity("Kai")
# → [Kai → works_on → Orion (current), Kai → recommended → Clerk (2026-01)]
# What was true in January?
kg.query_entity("Maya", as_of="2026-01-20")
# → [Maya → assigned_to → auth-migration (active)]
# Timeline
kg.timeline("Orion")
# → chronological story of the project
Facts have validity windows. When something stops being true, invalidate it:
kg.invalidate("Kai", "works_on", "Orion", ended="2026-03-01")
Now queries for Kai's current work won't return Orion. Historical queries still will.
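Validity windows reduce to a simple filter over (valid_from, valid_to) pairs. A pure-Python sketch of those semantics — the real store is SQLite, and the triples here are the hypothetical ones from the example above:

```python
from datetime import date

# Each triple carries a validity window; valid_to=None means "still true"
triples = [
    ("Maya", "assigned_to", "auth-migration", date(2026, 1, 15), date(2026, 2, 1)),
    ("Maya", "completed", "auth-migration", date(2026, 2, 1), None),
    ("Kai", "works_on", "Orion", date(2025, 6, 1), None),
]

def query(entity, as_of=None):
    """Return facts about an entity that were valid on the given date (default: today)."""
    as_of = as_of or date.today()
    return [
        (s, p, o) for s, p, o, start, end in triples
        if s == entity and start <= as_of and (end is None or as_of < end)
    ]

def invalidate(subject, predicate, obj, ended):
    """Close the validity window instead of deleting — history survives."""
    for i, (s, p, o, start, end) in enumerate(triples):
        if (s, p, o) == (subject, predicate, obj) and end is None:
            triples[i] = (s, p, o, start, ended)
```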
| Feature | MemPalace | Zep (Graphiti) |
|---|---|---|
| Storage | SQLite (local) | Neo4j (cloud) |
| Cost | Free | $25/mo+ |
| Temporal validity | Yes | Yes |
| Self-hosted | Always | Enterprise only |
| Privacy | Everything local | SOC 2, HIPAA |
Agent Diary
Every AI agent gets a personal journal — written in AAAK, persists across sessions.
mempalace_diary_write("Kai-assistant",
"SESSION:2026-04-04|debugged.orion.timeout|root.cause:connection.pool.exhaustion|fix:pgbouncer|★★★")
mempalace_diary_read("Kai-assistant", last_n=5)
# → last 5 diary entries from this agent, compressed in AAAK
Not a shared scratchpad — a personal journal with history. Each agent records what it worked on, what it learned, what matters. The next session reads the diary and picks up where it left off.
Letta charges $20–200/mo for agent-managed memory. MemPalace does it with a wing.
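Conceptually the diary is just an append-only log per agent. A sketch of the read/write semantics (not the shipped MCP tool implementations; entries are hypothetical):

```python
from collections import defaultdict

diaries = defaultdict(list)  # agent name → ordered AAAK entries

def diary_write(agent, entry):
    """Append one AAAK-formatted entry to the agent's personal log."""
    diaries[agent].append(entry)

def diary_read(agent, last_n=5):
    """Return the most recent entries — the next session reads these to resume."""
    return diaries[agent][-last_n:]

diary_write("Kai-assistant", "SESSION:2026-04-03|reviewed.auth.pr|★★")
diary_write("Kai-assistant", "SESSION:2026-04-04|debugged.orion.timeout|fix:pgbouncer|★★★")
```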
MCP Server
claude mcp add mempalace -- python -m mempalace.mcp_server
19 Tools
Palace (read)
| Tool | What |
|---|---|
| `mempalace_status` | Palace overview + AAAK spec + memory protocol |
| `mempalace_list_wings` | Wings with counts |
| `mempalace_list_rooms` | Rooms within a wing |
| `mempalace_get_taxonomy` | Full wing → room → count tree |
| `mempalace_search` | Semantic search with wing/room filters |
| `mempalace_check_duplicate` | Check before filing |
| `mempalace_get_aaak_spec` | AAAK dialect reference |
Palace (write)
| Tool | What |
|---|---|
| `mempalace_add_drawer` | File verbatim content |
| `mempalace_delete_drawer` | Remove by ID |
Knowledge Graph
| Tool | What |
|---|---|
| `mempalace_kg_query` | Entity relationships with time filtering |
| `mempalace_kg_add` | Add facts |
| `mempalace_kg_invalidate` | Mark facts as ended |
| `mempalace_kg_timeline` | Chronological entity story |
| `mempalace_kg_stats` | Graph overview |
Navigation
| Tool | What |
|---|---|
| `mempalace_traverse` | Walk the graph from a room across wings |
| `mempalace_find_tunnels` | Find rooms bridging two wings |
| `mempalace_graph_stats` | Graph connectivity overview |
Agent Diary
| Tool | What |
|---|---|
| `mempalace_diary_write` | Write AAAK diary entry |
| `mempalace_diary_read` | Read recent diary entries |
The AI learns AAAK and the memory protocol automatically from the mempalace_status response. No manual configuration.
Auto-Save Hooks
Two hooks for Claude Code that automatically save memories during work:
Save Hook — every 15 messages, triggers a structured save. Topics, decisions, quotes, code changes. Also regenerates the critical facts layer.
PreCompact Hook — fires before context compression. Emergency save before the window shrinks.
{
"hooks": {
"Stop": [{"matcher": "", "hooks": [{"type": "command", "command": "/path/to/mempalace/hooks/mempal_save_hook.sh"}]}],
"PreCompact": [{"matcher": "", "hooks": [{"type": "command", "command": "/path/to/mempalace/hooks/mempal_precompact_hook.sh"}]}]
}
}
Benchmarks
Tested on standard academic benchmarks — reproducible, published datasets.
| Benchmark | Mode | Score | API Calls |
|---|---|---|---|
| LongMemEval R@5 | Raw (ChromaDB only) | 96.6% | Zero |
| LongMemEval R@5 | Hybrid + Haiku rerank | 100% (500/500) | ~500 |
| LoCoMo R@10 | Raw, session level | 60.3% | Zero |
| Personal palace R@10 | Heuristic bench | 85% | Zero |
| Palace structure impact | Wing+room filtering | +34% R@10 | Zero |
The 96.6% raw score is the highest published LongMemEval result requiring no API key, no cloud, and no LLM at any stage.
vs Published Systems
| System | LongMemEval R@5 | API Required | Cost |
|---|---|---|---|
| MemPalace (hybrid) | 100% | Optional | Free |
| Supermemory ASMR | ~99% | Yes | — |
| MemPalace (raw) | 96.6% | None | Free |
| Mastra | 94.87% | Yes (GPT) | API costs |
| Mem0 | ~85% | Yes | $19–249/mo |
| Zep | ~85% | Yes | $25/mo+ |
All Commands
# Setup
mempalace init <dir> # guided onboarding + AAAK bootstrap
# Mining
mempalace mine <dir> # mine project files
mempalace mine <dir> --mode convos # mine conversation exports
mempalace mine <dir> --mode convos --wing myapp # tag with a wing name
# Splitting
mempalace split <dir> # split concatenated transcripts
mempalace split <dir> --dry-run # preview
# Search
mempalace search "query" # search everything
mempalace search "query" --wing myapp # within a wing
mempalace search "query" --room auth-migration # within a room
# Memory stack
mempalace wake-up # load L0 + L1 context
mempalace wake-up --wing driftwood # project-specific
# Compression
mempalace compress --wing myapp # AAAK compress
# Status
mempalace status # palace overview
All commands accept --palace <path> to override the default location.
Configuration
Global (~/.mempalace/config.json)
{
"palace_path": "/custom/path/to/palace",
"collection_name": "mempalace_drawers",
"people_map": {"Kai": "KAI", "Priya": "PRI"}
}
Wing config (~/.mempalace/wing_config.json)
Generated by mempalace init. Maps your people and projects to wings:
{
"default_wing": "wing_general",
"wings": {
"wing_kai": {"type": "person", "keywords": ["kai", "kai's"]},
"wing_driftwood": {"type": "project", "keywords": ["driftwood", "analytics", "saas"]}
}
}
Identity (~/.mempalace/identity.txt)
Plain text. Becomes Layer 0 — loaded every session.
File Reference
| File | What |
|---|---|
| `cli.py` | CLI entry point |
| `config.py` | Configuration loading and defaults |
| `normalize.py` | Converts 5 chat formats to a standard transcript |
| `mcp_server.py` | MCP server — 19 tools, AAAK auto-teach, memory protocol |
| `miner.py` | Project file ingest |
| `convo_miner.py` | Conversation ingest — chunks by exchange pair |
| `searcher.py` | Semantic search via ChromaDB |
| `layers.py` | 4-layer memory stack |
| `dialect.py` | AAAK compression — 30x lossless |
| `knowledge_graph.py` | Temporal entity-relationship graph (SQLite) |
| `palace_graph.py` | Room-based navigation graph |
| `onboarding.py` | Guided setup — generates AAAK bootstrap + wing config |
| `entity_registry.py` | Entity code registry |
| `entity_detector.py` | Auto-detect people and projects from content |
| `split_mega_files.py` | Split concatenated transcripts into per-session files |
| `hooks/mempal_save_hook.sh` | Auto-save every N messages |
| `hooks/mempal_precompact_hook.sh` | Emergency save before compaction |
Project Structure
mempalace/
├── README.md ← you are here
├── mempalace/ ← core package (README)
│ ├── cli.py ← CLI entry point
│ ├── mcp_server.py ← MCP server (19 tools)
│ ├── knowledge_graph.py ← temporal entity graph
│ ├── palace_graph.py ← room navigation graph
│ ├── dialect.py ← AAAK compression
│ ├── miner.py ← project file ingest
│ ├── convo_miner.py ← conversation ingest
│ ├── searcher.py ← semantic search
│ ├── onboarding.py ← guided setup
│ └── ... ← see mempalace/README.md
├── benchmarks/ ← reproducible benchmark runners
│ ├── README.md ← reproduction guide
│ ├── BENCHMARKS.md ← full results + methodology
│ ├── longmemeval_bench.py ← LongMemEval runner
│ ├── locomo_bench.py ← LoCoMo runner
│ └── membench_bench.py ← MemBench runner
├── hooks/ ← Claude Code auto-save hooks
│ ├── README.md ← hook setup guide
│ ├── mempal_save_hook.sh ← save every N messages
│ └── mempal_precompact_hook.sh ← emergency save
├── examples/ ← usage examples
│ ├── basic_mining.py
│ ├── convo_import.py
│ └── mcp_setup.md
├── tests/ ← test suite (README)
├── assets/ ← logo + brand assets
└── pyproject.toml ← package config (v3.0.0)
Requirements
- Python 3.9+
- `chromadb>=0.4.0`
- `pyyaml>=6.0`
No API key. No internet after install. Everything local.
pip install mempalace
Contributing
PRs welcome. See CONTRIBUTING.md for setup and guidelines.
License
MIT — see LICENSE.