# langchain-openmemory

One-line, persistent, temporal memory for LangChain — powered by OpenMemory.

```python
from langchain_openmemory import Memory

m = Memory()  # zero friction, no user id needed!
```
That’s it. `Memory` works as:
- a retriever
- a chat history backend
- an LCEL `Runnable` that injects rich context
- a persistent long-term memory across sessions
All backed by OpenMemory: local-first, temporal, explainable memory for AI agents.
## Features

- 🧠 One-line API — `Memory()` is all you need
- 🪢 LangChain-native — works as a `Runnable`, retriever, and chat history
- 🕒 Temporal memory — recall state across time, not just similar text
- 📚 Multi-chat context — memory persists over many conversations
- 💾 Local-first — backed by OpenMemory’s SQLite engine, no vector DB required
- 🔍 Explainable (via OpenMemory metadata) — you can inspect what was recalled and why (see the sketch below)
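Because the retriever surfaces standard LangChain documents, that metadata can be inspected directly. A minimal sketch; the exact metadata keys depend on OpenMemory and are illustrative here, not a documented contract:

```python
from langchain_openmemory import Memory

memory = Memory()

# Retrievers return standard LangChain Documents, so whatever ranking
# metadata OpenMemory attaches (keys are not guaranteed) rides along:
for doc in memory.retriever.invoke("what does the user prefer?"):
    print(doc.page_content, doc.metadata)
```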
## Installation

```bash
pip install openmemory-py langchain-core langchain-openmemory
```

Requires Python 3.9+.
## Quickstart

### 1. Create memory

```python
from langchain_openmemory import Memory

memory = Memory()  # optional: Memory("user123")
```
### 2. Use with an LLM via LCEL

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

from langchain_openmemory import Memory

memory = Memory()

prompt = ChatPromptTemplate.from_template(
    "You are a helpful assistant.\n"
    "Here is what you remember: {context}\n"
    "User: {question}"
)

llm = ChatOpenAI()

chain = (
    {"context": memory, "question": RunnablePassthrough()}
    | prompt
    | llm
)

print(chain.invoke("Remember that I like dark themes and short answers."))
print(chain.invoke("What did I say about themes?"))
```
### 3. Manual recall

```python
print(memory("what does the user prefer?"))
```
### 4. Store extra facts

```python
memory.store("user123 loves Minecraft and Pterodactyl panels.")
```
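Facts stored this way feed the same recall path, so a follow-up query should surface them:

```python
# The fact stored above should now be recallable.
print(memory("what games does user123 like?"))
```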
## How it works

Internally, `Memory`:

- Uses the Python `openmemory` client in local mode by default.
- Stores chat messages and facts into OpenMemory.
- Retrieves relevant memories with temporal + sector-aware ranking.
- Exposes a LangChain-compatible `Runnable` that returns a context block (sketched below).
- Provides an internal retriever and chat history implementation.

You get:

- real long-term memory
- across many sessions
- with minimal boilerplate
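To make the `Runnable` role concrete, here is a minimal stand-in built from `RunnableLambda`: any callable that maps a query to a context string can occupy the slot `Memory` fills in the LCEL example above. This illustrates the composition pattern only; it is not the package's actual implementation.

```python
from langchain_core.runnables import RunnableLambda

# Illustrative stand-in for Memory's Runnable role: map a query string
# to a context block. Memory does this against the OpenMemory store.
def recall_context(query: str) -> str:
    fake_memories = ["User prefers dark themes.", "User wants short answers."]
    return "\n".join(fake_memories)

context_source = RunnableLambda(recall_context)
print(context_source.invoke("What did I say about themes?"))
```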
## Using as a retriever

```python
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

from langchain_openmemory import Memory

memory = Memory()
retriever = memory.retriever

llm = ChatOpenAI()
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    return_source_documents=True,
)

# ConversationalRetrievalChain also expects a chat_history input.
res = qa.invoke({"question": "What does this user like?", "chat_history": []})
print(res["answer"])
```
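Since `return_source_documents=True` is set, the result also carries the recalled memories, so you can check exactly what grounded the answer:

```python
# Inspect which memories were retrieved to ground the answer.
for doc in res["source_documents"]:
    print(doc.page_content, doc.metadata)
```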
## Using as chat history

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

from langchain_openmemory import Memory

memory = Memory()

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        # Past turns are injected here under history_messages_key.
        MessagesPlaceholder("history"),
        ("human", "{input}"),
    ]
)

llm = ChatOpenAI()
base_chain = prompt | llm

def get_history(session_id: str):
    return memory.history

chain = RunnableWithMessageHistory(
    base_chain,
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

print(chain.invoke(
    {"input": "Remember that I live in Hyderabad."},
    config={"configurable": {"session_id": "s1"}},
))
print(chain.invoke(
    {"input": "Where do I live?"},
    config={"configurable": {"session_id": "s1"}},
))
```
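As written, `get_history` ignores `session_id` and returns one persistent history, which is exactly what gives cross-session memory. If you want per-user isolation instead, one plausible pattern (assuming the optional id argument from the quickstart, `Memory("user123")`, scopes storage) is:

```python
# Assumes Memory(id) scopes storage per id, as in Memory("user123") above.
def get_history(session_id: str):
    return Memory(session_id).history
```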
## Examples

See the examples/ folder for:

- `chatbot.py` — simple chatbot with persistent memory
- `agent.py` — agent-style usage
- `retrieval.py` — manual recall demo
## Roadmap
- Better temporal filters
- First-class LangChain docs integration
- Benchmarks against vector-DB and Redis-backed memory
## License
MIT — see LICENSE.