One-line OpenMemory integration for LangChain with persistent, temporal, local-first memory.


langchain-openmemory

One-line, persistent, temporal memory for LangChain — powered by OpenMemory.

from langchain_openmemory import Memory

m = Memory()  # zero friction, no user id needed!

That’s it. Memory works as:

  • a retriever
  • a chat history backend
  • an LCEL Runnable that injects rich context
  • a persistent long-term memory across sessions

All backed by OpenMemory: local-first, temporal, explainable memory for AI agents.


Features

  • 🧠 One-line API — Memory() is all you need
  • 🪢 LangChain-native — works as a Runnable, retriever, and chat history
  • 🕒 Temporal memory — recall state across time, not just similar text
  • 📚 Multi-chat context — memory persists over many conversations
  • 💾 Local-first — backed by OpenMemory’s SQLite engine, no vector DB required
  • 🔍 Explainable (via OpenMemory metadata) — you can inspect what was recalled and why

Installation

pip install openmemory-py langchain-core langchain-openmemory

Requires Python 3.9+.


Quickstart

1. Create memory

from langchain_openmemory import Memory

memory = Memory()  # optional: Memory("user123")

2. Use with an LLM via LCEL

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

from langchain_openmemory import Memory

memory = Memory()

prompt = ChatPromptTemplate.from_template(
    "You are a helpful assistant.\n"
    "Here is what you remember: {context}\n"
    "User: {question}"
)

llm = ChatOpenAI()

chain = (
    {"context": memory, "question": RunnablePassthrough()}
    | prompt
    | llm
)

print(chain.invoke("Remember that I like dark themes and short answers."))
print(chain.invoke("What did I say about themes?"))
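The dict at the head of the chain is LangChain's parallel-map pattern: every value receives the same input, and the results become the prompt's variables. A dependency-free sketch of just that step (not the library's code — `recall` is an invented stand-in for what `Memory` would return):

```python
def recall(query: str) -> str:
    # Invented placeholder: the real Memory would search OpenMemory here.
    return "user prefers dark themes"

def passthrough(x):
    # Mirrors RunnablePassthrough: forwards the input unchanged.
    return x

def parallel_map(runnables: dict, user_input: str) -> dict:
    # Each value gets the same input; the dict of results feeds the prompt.
    return {key: fn(user_input) for key, fn in runnables.items()}

variables = parallel_map(
    {"context": recall, "question": passthrough},
    "What did I say about themes?",
)
print(variables["question"])  # the original question, untouched
```

This is why `memory` can sit directly in the dict: anything invokable with the chain input works as a value there.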

3. Manual recall

print(memory("what does the user prefer?"))

4. Store extra facts

memory.store("user123 loves Minecraft and Pterodactyl panels.")

How it works

Internally, Memory:

  1. Uses the Python openmemory client in local mode by default.
  2. Stores chat messages and facts into OpenMemory.
  3. Retrieves relevant memories with temporal + sector-aware ranking.
  4. Exposes a LangChain-compatible Runnable that returns a context block.
  5. Provides an internal retriever and chat history implementation.

You get:

  • real long-term memory
  • across many sessions
  • with minimal boilerplate
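To make step 3 concrete, here is a toy illustration of temporal ranking — keyword overlap plus a recency boost. This is not OpenMemory's actual algorithm or API (all names here are invented); it only shows the idea of recalling by relevance *and* time:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Fact:
    text: str
    created: float = field(default_factory=time.time)

class ToyMemory:
    """Toy stand-in for OpenMemory's temporal + sector-aware ranking."""

    def __init__(self):
        self.facts: list[Fact] = []

    def store(self, text: str) -> None:
        self.facts.append(Fact(text))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = set(query.lower().split())
        now = time.time()

        def score(f: Fact) -> float:
            overlap = len(q & set(f.text.lower().split()))
            recency = 1.0 / (1.0 + (now - f.created))  # newer facts rank higher
            return overlap + recency

        ranked = sorted(self.facts, key=score, reverse=True)
        return [f.text for f in ranked[:k]]

m = ToyMemory()
m.store("user likes dark themes")
m.store("user lives in Hyderabad")
print(m.recall("what themes does the user like", k=1))
```

The real library replaces both the storage (SQLite) and the scoring (temporal, sector-aware) with something far richer, but the recall contract — query in, ranked memories out — is the same shape.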

Using as a retriever

from langchain_openai import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain_openmemory import Memory

memory = Memory()
retriever = memory.retriever

llm = ChatOpenAI()

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    return_source_documents=True,
)

res = qa.invoke({"question": "What does this user like?"})
print(res["answer"])
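`return_source_documents=True` is where explainability surfaces: each recalled memory comes back as a document whose metadata can tell you what matched and when it was stored. A dependency-free sketch of that shape — the field names (`score`, `stored_at`) are invented for illustration, not the package's actual metadata schema:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    # Mirrors the page_content/metadata shape of a LangChain Document.
    page_content: str
    metadata: dict = field(default_factory=dict)

def sketch_retrieve(query: str) -> list[Doc]:
    # Hypothetical result: the real retriever queries OpenMemory.
    return [
        Doc("user likes dark themes",
            metadata={"score": 0.91, "stored_at": "2025-01-10"}),
    ]

for d in sketch_retrieve("What does this user like?"):
    print(d.page_content, d.metadata["score"])
```

Inspect `res["source_documents"]` in the chain above to see what the real metadata looks like.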

Using as chat history

from langchain_core.runnables import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openmemory import Memory

memory = Memory()

prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful assistant."),
     MessagesPlaceholder("history"),
     ("human", "{input}")]
)
llm = ChatOpenAI()
base_chain = prompt | llm

def get_history(session_id: str):
    # Note: every session_id gets the same OpenMemory-backed history here;
    # branch on session_id yourself if you need isolated sessions.
    return memory.history

chain = RunnableWithMessageHistory(
    base_chain,
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

print(chain.invoke({"input": "Remember that I live in Hyderabad."}, config={"configurable": {"session_id": "s1"}}))
print(chain.invoke({"input": "Where do I live?"}, config={"configurable": {"session_id": "s1"}}))
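`RunnableWithMessageHistory` only needs an object exposing `messages` and `add_message` (LangChain's `BaseChatMessageHistory` contract). A simplified, dependency-free sketch of the shape `memory.history` plausibly implements — the real backend takes message objects rather than `(role, content)` tuples, and persists to OpenMemory instead of a Python list:

```python
class SketchChatHistory:
    """Simplified stand-in for a chat-history backend."""

    def __init__(self):
        self._messages: list[tuple[str, str]] = []  # (role, content)

    @property
    def messages(self):
        # Return a copy so callers can't mutate the store directly.
        return list(self._messages)

    def add_message(self, role: str, content: str) -> None:
        # The real backend would write to OpenMemory here.
        self._messages.append((role, content))

    def clear(self) -> None:
        self._messages.clear()

history = SketchChatHistory()
history.add_message("human", "Remember that I live in Hyderabad.")
history.add_message("ai", "Noted!")
print(len(history.messages))  # 2
```

Because the history is the persistence boundary, swapping the list for OpenMemory is what turns ordinary chat history into long-term memory.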

Examples

See the examples/ folder for:

  • chatbot.py — simple chatbot with persistent memory
  • agent.py — agent-style usage
  • retrieval.py — manual recall demo

Roadmap

  • Better temporal filters
  • First-class LangChain docs integration
  • Benchmarks vs vector DB + Redis memory

License

MIT — see LICENSE.



Download files

Download the file for your platform.

Source Distribution

langchain_openmemory-1.0.0.tar.gz (6.0 kB)


Built Distribution


langchain_openmemory-1.0.0-py3-none-any.whl (6.4 kB)


File details

Details for the file langchain_openmemory-1.0.0.tar.gz.

File metadata

  • Download URL: langchain_openmemory-1.0.0.tar.gz
  • Upload date:
  • Size: 6.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Hashes for langchain_openmemory-1.0.0.tar.gz:

  • SHA256: 0c265e48744c98b75fe834d76e84e75e5b46090a0f6c88a3bc4478b4144f4fb6
  • MD5: 194ce6a71d9d9c387f3f67dca8029a40
  • BLAKE2b-256: 97ae7c565f156170cadc63258250cf6b48ab931ebb2befdcc7790b34ae4fb94d


File details

Details for the file langchain_openmemory-1.0.0-py3-none-any.whl.

File metadata

File hashes

Hashes for langchain_openmemory-1.0.0-py3-none-any.whl:

  • SHA256: a76eb2f7ce808a58178bd96c2ca93e52eaeda27dfa7d25a974acccf2842d92ab
  • MD5: 93d891ff7b1ea07231edf90c5828562c
  • BLAKE2b-256: 1e41a85c81eeffcbcbdb3377ecf9d0af638a2c786df388c13d6cd20a14cbe443

