# Shodh-Memory

Local-first AI memory system for robotics, drones, and edge AI. 100% offline capable.
Persistent memory for AI agents. Single package. Local-first. Runs offline.
Give your AI agents memory that persists across sessions, learns from experience, and runs entirely on your hardware.
## Installation

```bash
pip install shodh-memory
```

That's it. No additional setup required: models and runtime are bundled.
## Quick Start

```python
from shodh_memory import Memory

# Create memory (data stored locally)
memory = Memory(storage_path="./my_agent_data")

# Store memories
memory.remember("User prefers dark mode", memory_type="Decision")
memory.remember("JWT tokens expire after 24h", memory_type="Learning")
memory.remember("Deployment failed due to missing env var", memory_type="Error")

# Search semantically
results = memory.recall("user preferences", limit=5)
for r in results:
    print(f"{r['content']} (importance: {r['importance']:.2f})")

# Get context summary for LLM bootstrap
summary = memory.context_summary()
print(summary["decisions"])  # Recent decisions
print(summary["learnings"])  # Recent learnings
```
## Features
- Zero setup — Everything bundled. No API keys, no cloud, no Docker
- Semantic search — MiniLM embeddings for meaning-based retrieval
- Hebbian learning — Connections strengthen when memories are used together
- Activation decay — Unused memories fade naturally
- Idempotent — Content-hash dedup prevents duplicate memories
- Entity extraction — TinyBERT NER extracts people, orgs, locations
- 100% offline — Works on air-gapped systems
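The idempotency feature works by hashing memory content so identical text maps to one entry. A minimal sketch of the idea, using `hashlib` and an assumed in-memory store (this is illustrative, not Shodh-Memory's actual internals):

```python
import hashlib

class DedupStore:
    """Illustrative content-hash dedup; not Shodh-Memory's real storage layer."""

    def __init__(self):
        self._by_hash = {}

    def remember(self, content: str) -> str:
        # Identical content hashes to the same key, so a repeat store is a no-op.
        key = hashlib.sha256(content.encode("utf-8")).hexdigest()
        self._by_hash.setdefault(key, content)
        return key

store = DedupStore()
a = store.remember("User prefers dark mode")
b = store.remember("User prefers dark mode")  # duplicate: same id, no new entry
print(a == b, len(store._by_hash))  # True 1
```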
## Memory Types
Different types get different importance weights:
| Type | Weight | Use for |
|---|---|---|
| Decision | +0.30 | Choices, preferences, conclusions |
| Learning | +0.25 | New knowledge acquired |
| Error | +0.25 | Mistakes to avoid |
| Discovery | +0.20 | Findings, insights |
| Pattern | +0.20 | Recurring behaviors |
| Task | +0.15 | Work items |
| Context | +0.10 | General information |
| Conversation | +0.10 | Chat history |
| Observation | +0.05 | Low-priority notes |
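One plausible reading of this table is that the type weight is added to a base importance score and clamped; the actual formula is internal to the library, so the base value and clamping below are assumptions for illustration only:

```python
# Hypothetical illustration of type-weighted importance. The base score (0.5)
# and clamping are assumptions; Shodh-Memory's real formula may differ.
TYPE_WEIGHTS = {
    "Decision": 0.30, "Learning": 0.25, "Error": 0.25,
    "Discovery": 0.20, "Pattern": 0.20, "Task": 0.15,
    "Context": 0.10, "Conversation": 0.10, "Observation": 0.05,
}

def importance(memory_type: str, base: float = 0.5) -> float:
    # Add the type weight, cap at 1.0, round for display.
    return round(min(1.0, base + TYPE_WEIGHTS.get(memory_type, 0.0)), 2)

print(importance("Decision"))     # 0.8
print(importance("Observation"))  # 0.55
```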
## API Reference

### Core Memory

```python
# Store a memory
memory.remember(
    content="...",            # Required: the memory content
    memory_type="Learning",   # Optional: Decision, Learning, Error, etc.
    tags=["tag1", "tag2"],    # Optional: for filtering
    metadata={"key": "val"}   # Optional: custom metadata dict
)

# Semantic search
results = memory.recall(
    query="...",    # Required: search query
    limit=10,       # Optional: max results (default: 10)
    mode="hybrid"   # Optional: "semantic", "associative", or "hybrid"
)

# Search by tags (no embedding needed, fast)
results = memory.recall_by_tags(tags=["preferences", "ui"], limit=20)

# Search by date range
results = memory.recall_by_date(
    start="2025-12-01T00:00:00Z",
    end="2025-12-20T23:59:59Z",
    limit=20
)

# List all memories
memories = memory.list_memories(limit=100, memory_type="Decision")

# Get a single memory by ID
mem = memory.get_memory("uuid-here")

# Get statistics
stats = memory.get_stats()
print(f"Total: {stats['total_memories']}")
```
### Proactive Context (for Agent Loops)

```python
# Surface relevant memories for the current context.
# Call in every agent loop to maintain context awareness.
result = memory.proactive_context(
    context="User asking about authentication",  # Current conversation/task
    semantic_threshold=0.65,  # Min similarity (0.0-1.0)
    max_results=5,            # Max memories to return
    auto_ingest=True,         # Store context as a Conversation memory
    recency_weight=0.2        # Boost recent memories
)

# Returns surfaced memories with relevance scores
for mem in result["memories"]:
    print(f"{mem['content'][:50]} (score: {mem['relevance_score']:.2f})")
```
### Forget Operations

```python
# Delete by ID
memory.forget("memory-uuid")

# Delete old memories
memory.forget_by_age(days=30)

# Delete low-importance memories
memory.forget_by_importance(threshold=0.3)

# Delete by pattern (regex)
memory.forget_by_pattern(r"test.*")

# Delete by tags
memory.forget_by_tags(["temporary", "draft"])

# Delete by date range (ISO 8601 format)
memory.forget_by_date(start="2025-11-01T00:00:00Z", end="2025-11-30T23:59:59Z")

# GDPR: delete everything
memory.forget_all()
```
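The semantics of `forget_by_age(days=30)`, dropping everything created before the cutoff, can be sketched against a plain timestamped store (illustrative only; the names `forget_by_age` aside, nothing here is the library's code):

```python
from datetime import datetime, timedelta, timezone

# Illustrative age-based pruning over a plain dict of timestamped memories.
memories = {
    "m1": {"content": "old note", "created": datetime(2025, 1, 1, tzinfo=timezone.utc)},
    "m2": {"content": "fresh note", "created": datetime(2025, 12, 18, tzinfo=timezone.utc)},
}

def forget_by_age(store: dict, days: int, now: datetime) -> int:
    # Anything created before (now - days) is deleted; returns the count removed.
    cutoff = now - timedelta(days=days)
    stale = [mid for mid, m in store.items() if m["created"] < cutoff]
    for mid in stale:
        del store[mid]
    return len(stale)

removed = forget_by_age(memories, days=30, now=datetime(2025, 12, 20, tzinfo=timezone.utc))
print(removed, list(memories))  # 1 ['m2']
```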
### Context & Introspection

```python
# Context summary for LLM bootstrap
summary = memory.context_summary(max_items=5)
# Returns: {"decisions": [...], "learnings": [...], "context": [...], "patterns": [...]}

# 3-tier memory visualization
state = memory.brain_state(longterm_limit=100)
# Returns: {"working_memory": [...], "session_memory": [...], "longterm_memory": [...], "stats": {...}}

# Memory learning activity report
report = memory.consolidation_report(since="2025-12-19T00:00:00Z")
# Returns: strengthening events, decay events, edge formations, pruned associations

# Raw consolidation events
events = memory.consolidation_events(since="2025-12-19T00:00:00Z")

# Knowledge graph statistics
graph = memory.graph_stats()
print(f"Nodes: {graph['node_count']}, Edges: {graph['edge_count']}")

# Flush to disk
memory.flush()
```
### Index Health & Maintenance

```python
# Verify vector index integrity
report = memory.verify_index()
print(f"Healthy: {report['is_healthy']}, Orphaned: {report['orphaned_count']}")

# Repair orphaned memories (re-index missing entries)
result = memory.repair_index()
print(f"Repaired: {result['repaired']}, Failed: {result['failed']}")

# Get detailed index health metrics
health = memory.index_health()
print(f"Vectors: {health['total_vectors']}, Needs rebuild: {health['needs_rebuild']}")
```
## LLM Framework Integration

### LangChain

```python
from shodh_memory.integrations.langchain import ShodhMemory

# Use as LangChain memory
memory = ShodhMemory(storage_path="./langchain_data")
```

### LlamaIndex

```python
from shodh_memory.integrations.llamaindex import ShodhLlamaMemory

# Use as LlamaIndex memory
memory = ShodhLlamaMemory(storage_path="./llamaindex_data")
```
## Performance

Measured on an Intel i7-1355U (10 cores, 1.7 GHz):

| Operation | Latency |
|---|---|
| `remember()` | 55-60 ms |
| `recall()` (semantic) | 34-58 ms |
| `recall_by_tags()` | ~1 ms |
| `list_memories()` | ~1 ms |
## Architecture

Experiences flow through three tiers, based on Cowan's working memory model:

```
Working Memory ──overflow──> Session Memory ──importance──> Long-Term Memory
  (100 items)                   (500 MB)                       (RocksDB)
```

Cognitive processing:

- Spreading-activation retrieval
- Activation decay (exponential)
- Hebbian strengthening (co-retrieval strengthens connections)
- Long-term potentiation (frequently used connections become permanent)
- Memory replay during maintenance
- Interference detection
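The decay and strengthening dynamics above can be sketched with their textbook update rules. The rates and exact formulas here are assumptions for illustration; Shodh-Memory's internal parameters are not documented:

```python
import math

# Textbook forms of exponential decay and Hebbian strengthening.
# The rates (0.1) are arbitrary illustrative values, not the library's.

def decay(activation: float, hours_idle: float, rate: float = 0.1) -> float:
    """Exponential activation decay: unused memories fade over time."""
    return activation * math.exp(-rate * hours_idle)

def hebbian_update(weight: float, lr: float = 0.1) -> float:
    """Strengthen an association edge on co-retrieval, saturating at 1.0."""
    return weight + lr * (1.0 - weight)

a = decay(1.0, hours_idle=24)   # fades to ~0.09 after a day idle
w = 0.5
for _ in range(3):              # three co-retrievals strengthen the edge
    w = hebbian_update(w)
print(round(a, 2), round(w, 2))  # 0.09 0.64
```

The saturating Hebbian form keeps weights bounded, which is one common way to model long-term potentiation: edges that are repeatedly co-activated approach a permanent maximum strength.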
## Platform Support
| Platform | Status |
|---|---|
| Windows x86_64 | Supported |
| Linux x86_64 | Supported |
| macOS ARM64 (Apple Silicon) | Supported |
| macOS x86_64 (Intel) | Supported |
## License

Apache 2.0