# memable 🐘

Long-term semantic memory for AI agents. Elephants never forget.

A reusable semantic memory library for LangGraph agents, with PostgreSQL/pgvector and SQLite backends. Drop-in long-term memory with:
- Durability tiers — core facts vs situational context vs episodic memories
- Temporal awareness — validity windows, expiry, recency weighting
- Version chains — audit trail for memory updates with contradiction handling
- Scoped namespaces — org/user/project hierarchies with priority merging
- Memory consolidation — decay, summarize, and prune old memories
- LangGraph integration — ready-to-use nodes for retrieve/store/consolidate
## Need Help?
I'll add production-grade memory to your AI agent in 1-2 weeks.
- 📞 Consult ($500) — 2-hour architecture deep-dive
- 🛠️ Implementation ($3-5k) — Full memory system, integrated + tested
## Installation

```bash
pip install memable
```

Or for development:

```bash
git clone https://github.com/joelash/memable
cd memable
pip install -e ".[dev]"
```
## Quick Start

```python
from memable import build_postgres_store
from memable.graph import build_memory_graph

# Connect to your Neon/Postgres DB (context manager handles connection lifecycle)
with build_postgres_store("postgresql://user:pass@host:5432/dbname") as store:
    store.setup()  # Run migrations (once)

    # Build a graph with memory baked in
    graph = build_memory_graph()
    compiled = graph.compile(store=store.raw_store)

    # Run it
    config = {"configurable": {"user_id": "user_123"}}
    result = compiled.invoke(
        {"messages": [{"role": "user", "content": "I'm Joel, I live in Wheaton."}]},
        config=config,
    )
```
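The result is the accumulated message state; assuming the memory graph uses LangGraph's standard `MessagesState`, the last entry is the model's reply:

```python
# result["messages"] holds LangGraph message objects; the final one is the reply.
print(result["messages"][-1].content)
```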
## Memory Schema

Each memory item includes:

```python
{
    "text": "User lives in Wheaton, IL",
    "durability": "core",        # core | situational | episodic
    "valid_from": "2026-02-06",  # when this became true
    "valid_until": None,         # None = permanent
    "confidence": 0.95,
    "source": "explicit",        # explicit | inferred
    "supersedes": None,          # UUID of the memory this replaces (version chain)
    "superseded_by": None,       # UUID of the memory that replaced this
}
```
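As a sketch of how an item like this gets persisted, using the `store.add(namespace, memory)` call shown later under Multi-Tenant / Schema Isolation (the namespace tuple here is illustrative):

```python
memory = {
    "text": "User lives in Wheaton, IL",
    "durability": "core",
    "valid_from": "2026-02-06",
    "valid_until": None,
    "confidence": 0.95,
    "source": "explicit",
    "supersedes": None,
    "superseded_by": None,
}

# Namespace tuples scope the memory (see Scoped Namespaces below).
store.add(("org_123", "user_123", "facts"), memory)
```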
## Durability Tiers

| Tier | Description | Example | Default TTL |
|---|---|---|---|
| `core` | Stable facts about the user | "Name is Joel", "Prefers dark mode" | Never expires |
| `situational` | Temporary context | "Visiting Ohio this week" | Explicit end date |
| `episodic` | Things that happened | "We discussed the API design" | 30 days, decays |
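To make the Default TTL column concrete, here is a hypothetical helper (not part of memable's API) that derives an expiry date from the tier:

```python
from datetime import date, timedelta

def default_expiry(durability: str, created: date) -> date | None:
    """Illustrative tier-to-expiry mapping; the function and policy details are assumptions."""
    if durability == "core":
        return None  # stable facts never expire
    if durability == "situational":
        raise ValueError("situational memories need an explicit valid_until")
    if durability == "episodic":
        return created + timedelta(days=30)  # decays after 30 days
    raise ValueError(f"unknown durability tier: {durability!r}")
```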
## Features

### Version Chains (Contradiction Handling)

When a memory contradicts an existing one, we don't delete — we create a version chain:

```python
# Original:  "User lives in Wheaton"
# New info:  "User moved to Austin"
# Result:
#   - Old memory gets superseded_by = new_memory_id
#   - New memory gets supersedes = old_memory_id
#   - Retrieval only returns current (non-superseded) memories
#   - Audit trail preserved for debugging
```
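In code, the chain is just two UUID pointers; a minimal sketch (the `id` field and the `supersede` helper are illustrative, not memable's API):

```python
import uuid

def supersede(old: dict, new_text: str) -> dict:
    """Illustrative version-chain update: link the old memory to its replacement."""
    new = {
        **old,
        "id": str(uuid.uuid4()),
        "text": new_text,
        "supersedes": old["id"],
        "superseded_by": None,
    }
    old["superseded_by"] = new["id"]
    return new

old_memory = {"id": str(uuid.uuid4()), "text": "User lives in Wheaton", "superseded_by": None}
new_memory = supersede(old_memory, "User moved to Austin")

# Retrieval only returns current (non-superseded) memories:
current = [m for m in (old_memory, new_memory) if m["superseded_by"] is None]
assert current == [new_memory]
```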
### Scoped Namespaces

```python
# Retrieval merges across scopes with priority
retrieve_memories(
    store=store,
    scopes=[
        ("org_123", "user_456", "preferences"),  # highest priority
        ("org_123", "shared"),                   # org-wide fallback
    ],
    query="user preferences",
)
```
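The priority merge can be pictured as first-scope-wins deduplication; a sketch of the idea, not memable's internals:

```python
def merge_by_priority(results_per_scope: list[list[dict]]) -> list[dict]:
    """Illustrative merge: scopes are ordered highest priority first,
    and the first scope to mention a fact wins on duplicates."""
    seen: set[str] = set()
    merged: list[dict] = []
    for scope_results in results_per_scope:
        for memory in scope_results:
            if memory["text"] not in seen:
                seen.add(memory["text"])
                merged.append(memory)
    return merged
```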
### Memory Consolidation

```python
from memable import consolidate_memories

# Periodic cleanup job
consolidate_memories(
    store=store,
    user_id="user_123",
    strategy="summarize_and_prune",
    older_than_days=7,
)
```
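In production this usually runs on a schedule; a minimal sketch with a plain loop (swap in cron, Celery beat, or your scheduler of choice):

```python
import time

from memable import consolidate_memories

def consolidation_loop(store, user_ids, interval_hours=24):
    # Illustrative periodic cleanup; the loop stands in for a real scheduler.
    while True:
        for user_id in user_ids:
            consolidate_memories(
                store=store,
                user_id=user_id,
                strategy="summarize_and_prune",
                older_than_days=7,
            )
        time.sleep(interval_hours * 3600)
```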
### LangGraph Nodes

Pre-built nodes for your graph:

```python
from langgraph.graph import StateGraph, MessagesState, START, END

from memable.nodes import (
    retrieve_memories_node,
    store_memories_node,
    consolidate_memories_node,
)

builder = StateGraph(MessagesState)
builder.add_node("retrieve", retrieve_memories_node)
builder.add_node("llm", your_llm_node)  # your model-calling node
builder.add_node("store", store_memories_node)

builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "llm")
builder.add_edge("llm", "store")
builder.add_edge("store", END)
```
## Performance & Costs

### Storage Requirements
| Scale | Memories | SQLite | DuckDB | Postgres |
|---|---|---|---|---|
| Light user | 100 | ~700 KB | ~3 MB | ~700 KB |
| Regular user | 1,000 | ~7 MB | ~30 MB | ~7 MB |
| Heavy user | 10,000 | ~70 MB | ~300 MB | ~70 MB |
| Power user | 100,000 | ~700 MB | ~3 GB | ~700 MB |
Embeddings dominate storage: 1536 dims × 4 bytes ≈ 6 KB per memory.
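A quick back-of-the-envelope check of that figure:

```python
# text-embedding-3-small produces 1536 float32 dimensions per memory.
bytes_per_memory = 1536 * 4  # 6,144 bytes, ~6 KB
# 10,000 memories => ~61 MB of raw embeddings, before metadata and indexes:
print(f"~{10_000 * bytes_per_memory / 1e6:.0f} MB")
```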
### API Costs (text-embedding-3-small)
| Usage | Daily Tokens | Daily Cost | Monthly Cost |
|---|---|---|---|
| Light (100 adds, 500 searches) | 7,000 | $0.0001 | $0.00 |
| Medium (500 adds, 2,000 searches) | 30,000 | $0.0006 | $0.02 |
| Heavy (2,000 adds, 10,000 searches) | 140,000 | $0.0028 | $0.08 |
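These rows follow from the embedding price; a sketch assuming the $0.02 per 1M tokens rate for text-embedding-3-small (verify current pricing):

```python
PRICE_PER_1M_TOKENS = 0.02  # USD; assumed text-embedding-3-small rate

def daily_embedding_cost(daily_tokens: int) -> float:
    return daily_tokens / 1_000_000 * PRICE_PER_1M_TOKENS

print(f"${daily_embedding_cost(140_000):.4f}")  # $0.0028, the "Heavy" row
```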
### Extraction Costs (gpt-4.1-mini)

If using LLM-based memory extraction:
| Usage | Daily Cost | Monthly Cost |
|---|---|---|
| Light (50 extractions) | $0.007 | $0.20 |
| Medium (200 extractions) | $0.027 | $0.81 |
| Heavy (1,000 extractions) | $0.135 | $4.05 |
Total cost for a typical agent (100 conversations/day): ~$0.08-0.50/month.

Run `pytest tests/performance/ -v -s` to benchmark on your hardware.
## Configuration

Environment variables:

```bash
OPENAI_API_KEY=sk-...          # For embeddings
DATABASE_URL=postgresql://...  # Postgres connection
```
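A minimal sketch wiring those variables into the Quick Start setup:

```python
import os

from memable import build_postgres_store

# OPENAI_API_KEY is assumed to be read from the environment by the embedding client.
with build_postgres_store(os.environ["DATABASE_URL"]) as store:
    store.setup()
```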
## Multi-Tenant / Schema Isolation

For multi-tenant deployments where each customer needs isolated data, you can use PostgreSQL schemas:

```python
from memable import build_store

# Each tenant gets their own schema
with build_store("postgresql://...", schema="customer_123") as store:
    store.setup()  # Creates tables in the customer_123 schema
    store.add(namespace, memory)
```
Requirements:

- The schema must already exist in the database (`CREATE SCHEMA customer_123;`); see the sketch below
- Tables are created within that schema when `setup()` is called
- Each schema has its own isolated set of tables
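Creating the tenant schema up front might look like this with psycopg (connection string and schema name are the examples from above):

```python
import psycopg

# The schema must exist before store.setup() migrates tables into it.
with psycopg.connect("postgresql://user:pass@host:5432/dbname") as conn:
    conn.execute("CREATE SCHEMA IF NOT EXISTS customer_123")
```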
## Database Tables

memable uses LangGraph's PostgresStore under the hood, which creates:

| Table | Purpose |
|---|---|
| `store` | Memory documents with metadata |
| `store_vectors` | pgvector embeddings for semantic search |
| `store_migrations` | Migration version tracking |

Note: Table names are currently fixed by LangGraph. If you need custom table names (e.g., prefixes/suffixes), use schema-based isolation instead, or run each app in a separate PostgreSQL schema.
Alternative pattern: for apps that already use schema-per-tenant, you can combine the tenant schema name with a memory suffix:

```sql
-- Example: customer schemas with a memory suffix
CREATE SCHEMA customer_123_memories;
```

```python
with build_store("postgresql://...", schema="customer_123_memories") as store:
    store.setup()
```
## License
MIT