# memstate-langchain

**Persistent, structured, versioned memory for LangChain agents — powered by Memstate AI.**

Memstate AI gives your LangChain agents persistent memory that survives across sessions, with automatic conflict detection, version history, and semantic search. It is a direct alternative to Mem0, built around structured storage and a transparent keypath hierarchy.
## Why Memstate vs Mem0?

| Feature | Memstate AI | Mem0 |
|---|---|---|
| Keypath hierarchy | Structured dot-paths (`auth.oauth.provider`) | Flat key-value |
| Version history | Full audit trail with time-travel | Limited |
| Conflict detection | Automatic cascade invalidation | Manual |
| Server-side extraction | LLM extracts keypaths from raw text | Manual structuring |
| Semantic search | Vector search with relevance score | Vector search |
| Open source | MCP server open source | Partially open |
| Self-hostable | Yes | Yes |
| LangChain support | This package | `mem0ai` package |
## Installation

```bash
pip install memstate-langchain
```

With the full LangChain stack:

```bash
pip install "memstate-langchain[langchain]"
```

Get your free API key at memstate.ai/dashboard.
## Quick Start

### Option 1: Semantic Memory for LCEL Chains

`MemstateMemory` wraps any LCEL chain to automatically retrieve relevant memories before each call and persist the conversation turn afterward:

```python
import os

from memstate_langchain import MemstateMemory
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant.\n\n{relevant_memories}"),
    ("human", "{input}"),
])

memory = MemstateMemory(
    api_key=os.environ["MEMSTATE_API_KEY"],
    project_id="my-assistant",
    session_id="user-alice",  # optional: scope memories per user/session
)

# Wrap any LCEL chain
chain = memory.wrap(prompt | llm)

# Session 1 — the agent learns about the user
chain.invoke({"input": "My name is Alice. I prefer Python and I'm working on a FastAPI project."})
chain.invoke({"input": "I decided to use PostgreSQL for the database."})

# Session 2 (new process, same project_id) — Alice's preferences are retrieved automatically
response = chain.invoke({"input": "What database am I using?"})
print(response.content)  # e.g. "Based on your previous sessions, you decided to use PostgreSQL."
```
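Conceptually, `wrap` performs three steps on every call: retrieve relevant memories, inject them into the chain's input dict under `memory_key`, then persist the turn. The sketch below is a dependency-free illustration of that flow, not the package source; the `search` and `persist` callables are stand-ins for Memstate API calls:

```python
def wrap_with_memory(chain_invoke, search, persist,
                     memory_key="relevant_memories", input_key="input"):
    """Illustrative stand-in for MemstateMemory.wrap: retrieve -> run -> persist."""
    def invoke(inputs):
        # 1. Retrieve memories relevant to the incoming message
        memories = search(inputs[input_key])
        # 2. Inject them into the chain's input dict
        output = chain_invoke({**inputs, memory_key: "\n".join(memories)})
        # 3. Persist the turn so future sessions can recall it
        persist(inputs[input_key], output)
        return output
    return invoke

# Toy usage with in-memory stubs in place of the Memstate API
store = []
chain = wrap_with_memory(
    chain_invoke=lambda d: f"context: {d['relevant_memories']!r}",
    search=lambda q: [m for m in store if q.split()[0].lower() in m.lower()],
    persist=lambda question, answer: store.append(question),
)
chain({"input": "database is PostgreSQL"})
print(chain({"input": "database choice?"}))  # context: 'database is PostgreSQL'
```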
### Option 2: Persistent Chat Message History

Use `MemstateChatMessageHistory` with `RunnableWithMessageHistory` for the modern LangChain Expression Language (LCEL) pattern:

```python
from memstate_langchain import MemstateChatMessageHistory
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])

chain = prompt | llm
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: MemstateChatMessageHistory(
        api_key="mst_your_key_here",
        project_id="my-assistant",
        session_id=session_id,
    ),
    input_messages_key="input",
    history_messages_key="history",
)

# Each call with the same session_id resumes the conversation
response = chain_with_history.invoke(
    {"input": "What is the capital of France?"},
    config={"configurable": {"session_id": "user-alice"}},
)
```
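`RunnableWithMessageHistory` only needs the factory to return an object exposing `messages`, `add_message`, and `clear`. The dict-backed stand-in below illustrates that contract and the session-resumption behavior; `MemstateChatMessageHistory` fulfills the same interface against the Memstate API instead of a process-local dict (this is an illustration, not the package source):

```python
class DictBackedHistory:
    """Minimal chat history keyed by session_id (illustration only)."""

    _store = {}  # class-level, so separate instances share per-session history

    def __init__(self, session_id):
        self.session_id = session_id
        self._store.setdefault(session_id, [])

    @property
    def messages(self):
        return list(self._store[self.session_id])

    def add_message(self, message):
        self._store[self.session_id].append(message)

    def clear(self):
        self._store[self.session_id] = []


# A new handle with the same session_id resumes the conversation
DictBackedHistory("user-alice").add_message("What is the capital of France?")
resumed = DictBackedHistory("user-alice")
print(resumed.messages)  # ['What is the capital of France?']
```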
### Option 3: Agent Tools (ReAct / Function-Calling)

Give your agent the full Memstate toolset — identical to the Memstate MCP server:

```python
from memstate_langchain import create_memstate_tools
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

tools = create_memstate_tools(
    api_key="mst_your_key_here",
    project_id="my-app",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a helpful assistant with persistent memory.

Before starting any task, use memstate_get to check what you already know.
After completing a task, use memstate_remember to save what you learned.
Use memstate_search when you need to find information by meaning.

Memory workflow:
1. CHECK: memstate_get(keypath="") → see full project knowledge
2. ACT: Complete the task using retrieved context
3. SAVE: memstate_remember(content="## Summary\\n- What was done") → persist learnings
"""),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

llm = ChatOpenAI(model="gpt-4o")
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# The agent will automatically use Memstate to remember and recall
executor.invoke({"input": "Set up authentication for our FastAPI project using OAuth2.", "chat_history": []})
executor.invoke({"input": "What authentication approach did we decide on?", "chat_history": []})
```
### Available Tools

When using `create_memstate_tools()`, the following tools are available to your agent:

| Tool | Purpose |
|---|---|
| `memstate_remember` | Save markdown/summaries → server extracts keypaths automatically (preferred for saving) |
| `memstate_set` | Set one keypath = one short value (e.g. `config.port = "8080"`) |
| `memstate_get` | Browse project keypaths, get memory content |
| `memstate_search` | Semantic search across memories |
| `memstate_history` | View version history of a keypath or memory |
| `memstate_delete` | Soft-delete a keypath (creates tombstone, history preserved) |

Select specific tools:

```python
tools = create_memstate_tools(
    api_key="mst_...",
    project_id="my-app",
    include=["memstate_remember", "memstate_get", "memstate_search"],
)
```
## Memory Architecture

Memstate uses a hierarchical keypath system to organize memories:

```
project.my-app
├── auth
│   ├── provider        → "OAuth2 with Google"
│   └── jwt.secret_key  → "stored in env var JWT_SECRET"
├── database
│   ├── type            → "PostgreSQL 15"
│   └── connection_pool → "max_connections=20"
└── deployment
    ├── platform        → "Railway"
    └── domain          → "api.myapp.com"
```

Every memory write creates a new version, preserving full history. The server automatically detects conflicts and can cascade-invalidate related memories when a fact changes.
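As a mental model, the two properties above — dot-paths folding into a tree, and writes appending versions rather than overwriting — can be sketched in a few lines of plain Python. This is an illustration only, not the Memstate implementation:

```python
from collections import defaultdict

versions = defaultdict(list)  # keypath -> list of values, oldest first

def set_key(keypath, value):
    """Every write appends a new version; earlier values are never lost."""
    versions[keypath].append(value)

def current(keypath):
    """The latest version is the current value."""
    return versions[keypath][-1]

def as_tree(keys):
    """Fold dot-paths like 'auth.jwt.secret_key' into a nested dict."""
    root = {}
    for key in keys:
        node = root
        for part in key.split("."):
            node = node.setdefault(part, {})
    return root

set_key("auth.provider", "OAuth2 with GitHub")
set_key("auth.provider", "OAuth2 with Google")  # supersedes, old version kept
set_key("database.type", "PostgreSQL 15")

print(current("auth.provider"))   # OAuth2 with Google
print(versions["auth.provider"])  # ['OAuth2 with GitHub', 'OAuth2 with Google']
print(as_tree(versions))          # {'auth': {'provider': {}}, 'database': {'type': {}}}
```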
## Configuration Reference

### MemstateMemory / MemstateSemanticMemory

`MemstateMemory` is an alias for `MemstateSemanticMemory`.

```python
MemstateMemory(
    api_key="mst_...",                   # Required: Memstate API key
    project_id="my-app",                 # Required: project namespace
    session_id="default",                # Optional: scope per user/session
    base_url="https://api.memstate.ai",  # Optional: custom API URL
    timeout=60,                          # Optional: HTTP timeout (seconds)
    search_limit=10,                     # Optional: memories retrieved per turn
    memory_key="relevant_memories",      # Optional: key injected into chain input dict
    input_key="input",                   # Optional: chain input key for human message
    output_key="content",                # Optional: chain output key for AI response
    human_prefix="Human",                # Optional: label for human turns
    ai_prefix="AI",                      # Optional: label for AI turns
    ingest_source="langchain",           # Optional: source label in dashboard
    wait_for_ingest=False,               # Optional: wait for async ingest to complete
)

# Usage
memory = MemstateMemory(api_key="mst_...", project_id="my-app")
chain_with_memory = memory.wrap(prompt | llm)  # wrap any LCEL chain
memories = memory.get_memories(query="auth")   # semantic search
all_memories = memory.get_memories()           # browse all
memory.clear()                                 # delete all project memories
```
### MemstateChatMessageHistory

```python
MemstateChatMessageHistory(
    api_key="mst_...",                   # Required: Memstate API key
    project_id="my-assistant",           # Required: project namespace
    session_id="user-123",               # Required: unique session identifier
    base_url="https://api.memstate.ai",  # Optional
    timeout=60,                          # Optional
)
```

### create_memstate_tools

```python
create_memstate_tools(
    api_key="mst_...",                   # Required
    project_id="my-app",                 # Required
    base_url="https://api.memstate.ai",  # Optional
    timeout=60,                          # Optional
    include=None,                        # Optional: list of tool names to include,
                                         # e.g. include=["memstate_remember", "memstate_search"]
)
```
## Low-Level Client

For direct API access:

```python
from memstate_langchain import MemstateClient

client = MemstateClient(api_key="mst_...")

# Store a structured memory at an explicit keypath
client.remember(
    content="PostgreSQL 15 on Railway",
    keypath="project.my-app.database.type",
    project_id="my-app",
    category="fact",
)

# Ingest rich content (server extracts keypaths automatically)
client.ingest(
    project_id="my-app",
    content="## Auth Decision\nWe chose OAuth2 with Google because...",
    source="agent",
)

# Semantic search
results = client.search(query="database configuration", project_id="my-app")
for r in results["results"]:
    print(r["keypath"], r["score"], r["summary"])

# Browse all memories in a project
memories = client.browse(
    keypath_prefix="project.my-app",
    project_id="my-app",
)

# Version history
history = client.history(keypath="project.my-app.auth.provider", project_id="my-app")
```
## Environment Variables

```bash
MEMSTATE_API_KEY=mst_your_key_here
MEMSTATE_BASE_URL=https://api.memstate.ai  # optional
```

Using environment variables:

```python
import os

from memstate_langchain import MemstateMemory

memory = MemstateMemory(
    api_key=os.environ["MEMSTATE_API_KEY"],
    project_id="my-app",
)
```
## Running the Tests

```bash
cd integrations/langchain
pip install -e ".[dev]"

# Unit tests (no API key required — uses mocked HTTP)
pytest tests/test_unit.py -v

# Integration tests (requires MEMSTATE_API_KEY)
MEMSTATE_API_KEY=mst_... pytest tests/test_integration.py -v

# Full workflow test
MEMSTATE_API_KEY=mst_... python tests/test_workflow.py
```
## Links

- Website: memstate.ai
- Documentation: memstate.ai/docs
- Dashboard (get API key): memstate.ai/dashboard
- MCP Server (open source): github.com/memstate-ai/memstate-mcp
- Benchmark: memstate.ai/docs/benchmarks
- npm package: @memstate/mcp

## License

MIT License — see LICENSE for details.