# consolidation-memory

Local-first persistent memory for coding agents: store, recall, and consolidate knowledge across sessions using FAISS, SQLite, and any LLM.
consolidation-memory stores raw episodes, consolidates them into structured knowledge, and now tracks claim-level provenance and contradiction history. It runs on SQLite + FAISS and can be used through MCP, Python, REST, or OpenAI-style function calling.
## What It Does

- Stores episodes (`exchange`, `fact`, `solution`, `preference`) with vector embeddings.
- Recalls by semantic + keyword ranking with metadata boosts.
- Consolidates episodes into knowledge topics and structured records: `fact`, `solution`, `preference`, `procedure`.
- Tracks temporal validity (`valid_from`, `valid_until`) and supports `as_of` recall queries.
- Logs contradictions and supports contradiction-aware merge behavior.
- Maintains a claim graph:
  - claims
  - claim edges (for example, `contradicts`)
  - claim sources
  - claim events
- Extracts and persists episode anchors (paths, commits, tool references).
- Detects code drift from git changes and challenges impacted claims with audit events.
- Uses an adaptive consolidation scheduler (utility score + interval fallback).
- Returns claim results in `recall()` alongside episodes, topics, and records.
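The "semantic + keyword ranking with metadata boosts" idea can be sketched as a blended score. This is an illustrative sketch only; the weights, boost values, and function names here are assumptions, not the library's actual ranking parameters:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_overlap(query, text):
    """Fraction of query terms that appear in the episode text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_score(query, query_vec, episode, sem_weight=0.7, kw_weight=0.3):
    """Blend semantic and keyword scores, then apply a metadata boost."""
    score = (sem_weight * cosine(query_vec, episode["embedding"])
             + kw_weight * keyword_overlap(query, episode["text"]))
    if episode.get("content_type") == "preference":
        score += 0.05  # illustrative metadata boost, not the real value
    return score
```

A recall then ranks candidates by this blended score rather than by vector distance alone.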
## Quick Start

```bash
pip install consolidation-memory[fastembed]
consolidation-memory init
consolidation-memory test
consolidation-memory serve
```

Notes:

- `fastembed` is local and does not require API keys.
- Consolidation requires an LLM backend (`lmstudio`, `ollama`, `openai`) unless explicitly disabled.
## MCP Server

Add to your MCP client config:

```json
{
  "mcpServers": {
    "consolidation_memory": {
      "command": "consolidation-memory",
      "args": ["--project", "universal", "serve"]
    }
  }
}
```

Use one shared project name (for example, `universal`) across all clients to keep a single knowledge set.

Available tools:

- `memory_store`
- `memory_store_batch`
- `memory_recall`
- `memory_search`
- `memory_claim_browse`
- `memory_claim_search`
- `memory_detect_drift`
- `memory_status`
- `memory_forget`
- `memory_export`
- `memory_correct`
- `memory_compact`
- `memory_consolidate`
- `memory_protect`
- `memory_timeline`
- `memory_contradictions`
- `memory_browse`
- `memory_read_topic`
- `memory_decay_report`
- `memory_consolidation_log`
## Python API

```python
from consolidation_memory import MemoryClient

with MemoryClient(auto_consolidate=False) as mem:
    mem.store(
        "User prefers dark mode in terminal tools.",
        content_type="preference",
        tags=["ui", "terminal"],
    )
    result = mem.recall(
        "terminal preferences",
        n_results=5,
        include_knowledge=True,
        as_of="2026-03-01T00:00:00+00:00",
    )
    print("episodes:", len(result.episodes))
    print("knowledge topics:", len(result.knowledge))
    print("records:", len(result.records))
    print("claims:", len(result.claims))
    print("warnings:", result.warnings)
```
`RecallResult` is backward compatible and includes:

- `episodes`
- `knowledge`
- `records`
- `claims`
- `warnings`
## Consolidation Model

```
episodes -> SQLite + FAISS
         -> recall (semantic + keyword)

background consolidation:
episodes -> cluster -> extract/merge records -> knowledge topics
         -> contradiction detection -> temporal expiration + audit log
         -> claim emission (claims/sources/events/edges)
```
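The adaptive scheduler (utility score + interval fallback) can be pictured as the decision below. This is a sketch of the idea only; the utility formula, threshold, and defaults are assumptions, not the library's actual heuristics:

```python
def should_consolidate(pending_episodes, hours_since_last,
                       interval_hours=6, utility_threshold=1.0):
    """Run consolidation when accumulated utility is high enough,
    or fall back to a fixed interval regardless of utility."""
    # Illustrative utility: more unconsolidated episodes -> more value in running now.
    utility = 0.1 * pending_episodes
    if utility >= utility_threshold:
        return True
    # Interval fallback: never wait longer than interval_hours.
    return hours_since_last >= interval_hours
```

The fallback guarantees consolidation eventually runs even when individual episodes score low.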
## Claims And Anchors

### Claim graph

Consolidation emits deterministic claims for merged records and writes:

- `claims`: normalized claim payload and lifecycle state
- `claim_sources`: links to episodes/topics/records
- `claim_events`: `create`, `update`, `expire`, `contradiction`, etc.
- `claim_edges`: relationship graph (for example, `contradicts`)

Claim retrieval is exposed through:

- Python: `MemoryClient.browse_claims(...)` and `MemoryClient.search_claims(...)`
- MCP/OpenAI tools: `memory_claim_browse` and `memory_claim_search`
- REST: `POST /memory/claims/browse` and `POST /memory/claims/search`
- Temporal claim-state queries: pass `as_of` to claim browse/search interfaces
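The four tables above can be illustrated with a tiny in-memory model. This is a sketch of the data shape only; the field names are illustrative, not the actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    text: str
    state: str = "active"                         # lifecycle state
    sources: list = field(default_factory=list)   # claim_sources: episode/topic/record ids
    events: list = field(default_factory=list)    # claim_events: (event_type, detail)

def add_edge(edges, src, dst, kind="contradicts"):
    """Record a typed claim_edges relationship between two claims."""
    edges.append((src, dst, kind))

# Two claims that contradict each other, each traced to its source episode.
a = Claim("c1", "API uses port 8080", sources=["ep-12"])
b = Claim("c2", "API uses port 9090", sources=["ep-34"])
edges = []
add_edge(edges, a.claim_id, b.claim_id, "contradicts")
a.events.append(("contradiction", "conflicts with c2"))
```

Because every claim carries its sources and events, a retrieved claim can always be traced back to the episodes that produced it.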
### Anchor persistence

Stored episode content is parsed for anchors and written to `episode_anchors`:

- file paths (POSIX + Windows)
- commit hashes
- tool references (`pytest`, `uvicorn`, `docker`, `git`, etc.)

Anchors are used for drift workflows and claim challenge operations.
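As an illustration of anchor extraction, here is a minimal regex-based sketch. The patterns are simplified assumptions; the library's actual parser covers more path styles and tools:

```python
import re

TOOLS = {"pytest", "uvicorn", "docker", "git"}

def extract_anchors(text):
    """Pull file paths, commit hashes, and known tool names out of episode text."""
    return {
        # POSIX-style paths with a few common extensions
        "paths": re.findall(r"\b[\w./-]+\.(?:py|toml|md|json)\b", text),
        # 7-40 character lowercase hex strings look like git commit hashes
        "commits": re.findall(r"\b[0-9a-f]{7,40}\b", text),
        "tools": sorted(t for t in TOOLS if re.search(rf"\b{t}\b", text)),
    }

anchors = extract_anchors(
    "Fixed src/api/server.py in commit 6db2f2b, verified with pytest."
)
```

Each extracted anchor would then be persisted in `episode_anchors`, keyed by the episode it came from.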
### Drift detection interfaces

- CLI: `consolidation-memory detect-drift [--base-ref origin/main] [--repo-path <path>]`
- Python: `MemoryClient.detect_drift(base_ref=..., repo_path=...)`
- REST: `POST /memory/detect-drift`

Drift detection maps changed files to anchored claims, challenges impacted active claims, and records `claim_events` with event type `code_drift_detected`.
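The mapping step can be sketched as a join between the changed file paths and the claims anchored on those paths. This is a simplified illustration; the data shapes and the `challenged` state name are assumptions:

```python
def detect_drift(changed_files, claims):
    """Challenge active claims whose anchored paths overlap the git diff.

    `claims` maps claim_id -> {"state": ..., "anchors": [paths]}.
    Returns the audit events that would be recorded as `code_drift_detected`.
    """
    changed = set(changed_files)
    events = []
    for claim_id, claim in claims.items():
        if claim["state"] != "active":
            continue  # only active claims get challenged
        hits = changed & set(claim["anchors"])
        if hits:
            claim["state"] = "challenged"
            events.append({"claim_id": claim_id,
                           "event_type": "code_drift_detected",
                           "files": sorted(hits)})
    return events
```

In the real workflow the changed-file list comes from the git diff against `--base-ref`.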
## REST API

Install extras and run:

```bash
pip install consolidation-memory[rest]
consolidation-memory serve --rest --host 127.0.0.1 --port 8080
```

Endpoints:

- `GET /health`
- `POST /memory/store`
- `POST /memory/store/batch`
- `POST /memory/recall`
- `POST /memory/search`
- `POST /memory/claims/browse`
- `POST /memory/claims/search`
- `POST /memory/detect-drift`
- `GET /memory/status`
- `DELETE /memory/episodes/{episode_id}`
- `POST /memory/consolidate`
- `POST /memory/correct`
- `POST /memory/export`
- `POST /memory/compact`
- `GET /memory/browse`
- `GET /memory/topics/(unknown)`
- `POST /memory/timeline`
- `POST /memory/contradictions`
- `POST /memory/protect`
- `POST /memory/consolidation-log`
- `GET /memory/decay-report`
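A minimal client-side call against a locally running server might look like the following. The request-body fields mirror the Python API's `recall` arguments and are assumptions about the JSON schema, not a documented contract:

```python
import json
import urllib.request

def post(path, payload, base="http://127.0.0.1:8080"):
    """POST a JSON payload and return the decoded JSON response."""
    req = urllib.request.Request(
        base + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Assumed request body; field names mirror MemoryClient.recall arguments.
recall_payload = {"query": "terminal preferences", "n_results": 5,
                  "include_knowledge": True}
# result = post("/memory/recall", recall_payload)  # requires a running server
```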
## OpenAI Function Calling

Use the provided tool schemas and dispatch helper:

```python
from consolidation_memory import MemoryClient
from consolidation_memory.schemas import openai_tools, dispatch_tool_call

client = MemoryClient(auto_consolidate=False)
# Pass openai_tools to your model
# Then route tool calls through dispatch_tool_call(client, name, arguments)
```
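The routing step typically looks like the loop below. The message shape follows the OpenAI chat-completions tool-call format; `dispatch_tool_call` and the client are stubbed here so the sketch is self-contained and does not claim to reproduce the library's internals:

```python
import json

def dispatch_tool_call(client, name, arguments):
    """Stub standing in for consolidation_memory.schemas.dispatch_tool_call."""
    return getattr(client, name.removeprefix("memory_"))(**arguments)

class StubClient:
    """Stand-in for MemoryClient, for illustration only."""
    def recall(self, query, n_results=5):
        return {"query": query, "episodes": []}

def route_tool_calls(client, tool_calls):
    """Execute each model tool call and collect (call_id, result) pairs."""
    results = []
    for call in tool_calls:
        # OpenAI delivers arguments as a JSON string, not a dict.
        args = json.loads(call["function"]["arguments"])
        out = dispatch_tool_call(client, call["function"]["name"], args)
        results.append((call["id"], out))
    return results

calls = [{"id": "call_1",
          "function": {"name": "memory_recall",
                       "arguments": '{"query": "dark mode"}'}}]
results = route_tool_calls(StubClient(), calls)
```

Each `(call_id, result)` pair is then sent back to the model as a tool-result message.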
## Backends

### Embedding

| Backend | Local | Default model | Typical dimension |
|---|---|---|---|
| `fastembed` (default) | yes | `BAAI/bge-small-en-v1.5` | 384 |
| `lmstudio` | yes | `text-embedding-nomic-embed-text-v1.5` | 768 |
| `ollama` | yes | `nomic-embed-text` | 768 |
| `openai` | no | `text-embedding-3-small` | 1536 |

### LLM (for consolidation/extraction)

| Backend | Notes |
|---|---|
| `lmstudio` (default) | local chat model |
| `ollama` | local chat model |
| `openai` | API-backed |
| `disabled` | store/recall only, no LLM consolidation |
## Configuration

Generate config interactively:

```bash
consolidation-memory init
```

Default config file locations:

- Linux: `~/.config/consolidation_memory/config.toml`
- macOS: `~/Library/Application Support/consolidation_memory/config.toml`
- Windows: `%APPDATA%\consolidation_memory\config.toml`
- Override path: `CONSOLIDATION_MEMORY_CONFIG`

Every scalar config field can be overridden with `CONSOLIDATION_MEMORY_<FIELD_NAME>`. Examples:

```bash
CONSOLIDATION_MEMORY_EMBEDDING_BACKEND=fastembed
CONSOLIDATION_MEMORY_LLM_BACKEND=lmstudio
CONSOLIDATION_MEMORY_CONSOLIDATION_INTERVAL_HOURS=6
CONSOLIDATION_MEMORY_PROJECT=work
```
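The override rule above amounts to a simple precedence check: environment variable first, config file second. A sketch of that resolution (illustrating the precedence, not the library's loader):

```python
import os

def resolve(field, file_config, prefix="CONSOLIDATION_MEMORY_"):
    """An environment variable wins over the config file value."""
    env_key = prefix + field.upper()
    if env_key in os.environ:
        return os.environ[env_key]
    return file_config.get(field)

# Env var overrides the file; unset fields fall through to the file.
os.environ["CONSOLIDATION_MEMORY_LLM_BACKEND"] = "lmstudio"
backend = resolve("llm_backend", {"llm_backend": "openai"})
```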
## CLI Commands

| Command | Purpose |
|---|---|
| `consolidation-memory serve` | start MCP server |
| `consolidation-memory serve --rest` | start REST server |
| `consolidation-memory init` | interactive setup |
| `consolidation-memory test` | installation/self-check |
| `consolidation-memory status` | show memory stats |
| `consolidation-memory consolidate` | run consolidation now |
| `consolidation-memory detect-drift` | challenge claims impacted by changed files |
| `consolidation-memory export` | export JSON snapshot |
| `consolidation-memory import PATH` | import JSON snapshot |
| `consolidation-memory reindex` | rebuild embeddings/index |
| `consolidation-memory browse` | inspect knowledge topics |
| `consolidation-memory setup-memory --path AGENTS.md` | write memory integration block to any instruction file |
| `consolidation-memory dashboard` | launch Textual dashboard |
## Agent Instruction Setup

Use the vendor-neutral setup helper to add proactive recall/store guidance to your agent instructions:

```bash
consolidation-memory setup-memory --path AGENTS.md
```

Example targets:

```bash
consolidation-memory setup-memory --path AGENTS.md
consolidation-memory setup-memory --path .github/copilot-instructions.md
consolidation-memory setup-memory --path .cursor/rules/memory.md
```

Template instructions are available in `docs/recommended-agent-instructions.md`.
## Multi-project Isolation

Each project has isolated storage:

```bash
consolidation-memory --project work status
CONSOLIDATION_MEMORY_PROJECT=work consolidation-memory serve
```

This keeps separate:

- SQLite DB
- FAISS index
- knowledge topics
- consolidation logs
## Data Layout

Base directory is `platformdirs.user_data_dir("consolidation_memory")`. Per project:

```
projects/<project>/
  memory.db
  faiss_index.bin
  faiss_id_map.json
  faiss_tombstones.json
  knowledge/
  consolidation_logs/
  backups/
```

Export/import snapshots include:

- episodes + knowledge topics/records
- claims
- claim edges
- claim sources
- claim events
- episode anchors
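The per-project layout can be derived from the base directory like this. `platformdirs` is what the project uses; the fallback path here is only so the sketch runs without that dependency, and `project_paths` is an illustrative helper, not the library's API:

```python
from pathlib import Path

try:
    from platformdirs import user_data_dir  # what the project actually uses
    BASE = Path(user_data_dir("consolidation_memory"))
except ImportError:
    BASE = Path.home() / ".local" / "share" / "consolidation_memory"  # fallback

def project_paths(project, base_dir=BASE):
    """Derive the per-project file layout shown in the tree above."""
    root = base_dir / "projects" / project
    return {
        "db": root / "memory.db",
        "faiss_index": root / "faiss_index.bin",
        "id_map": root / "faiss_id_map.json",
        "tombstones": root / "faiss_tombstones.json",
        "knowledge": root / "knowledge",
        "logs": root / "consolidation_logs",
        "backups": root / "backups",
    }

paths = project_paths("work")
```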
## Development

```bash
git clone https://github.com/charliee1w/consolidation-memory
cd consolidation-memory
pip install -e ".[all,dev]"
python scripts/smoke_builder_base.py
pytest tests/ -q
pytest tests/ -q -W error::ResourceWarning
ruff check src/ tests/
```
## Community

- Contributors: CONTRIBUTORS.md
- Discussion thread: Community Feedback and Contribution Thread (v0.13.x)
- Discussion categories:
  - Announcements: release notes, breaking changes, maintainer updates
  - Ideas: proposals, roadmap suggestions, RFC-style feedback
  - Q&A: setup help, usage questions, troubleshooting
  - Show and tell: integrations, demos, success stories
  - General: broad project discussion and community coordination
  - Polls: community votes and preference checks
## License

MIT