smara-langchain
LangChain integration for the Smara Memory API -- persistent, decay-aware memory for AI agents.
Installation
```bash
pip install smara-langchain
```
Quick start
SmaraMemory -- conversation memory
A drop-in replacement for LangChain's built-in memory classes. Every human/AI turn is stored as a memory in Smara, and the most relevant memories are retrieved on each new turn via semantic search with Ebbinghaus decay ranking.
```python
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from smara_langchain import SmaraMemory

memory = SmaraMemory(
    api_key="sm_...",
    user_id="user-42",
    top_k=5,  # retrieve top 5 memories per turn
)

chain = ConversationChain(
    llm=ChatOpenAI(),
    memory=memory,
)

chain.invoke({"input": "I'm allergic to peanuts"})
# Memory stored in Smara automatically

chain.invoke({"input": "What should I avoid eating?"})
# Smara retrieves the peanut allergy memory as context
```
SmaraRetriever -- RAG over memories
Use Smara as a retriever in any LangChain RAG pipeline. Each memory is returned as a Document with the fact text and full metadata (importance, decay score, similarity, etc.).
```python
from smara_langchain import SmaraRetriever

retriever = SmaraRetriever(
    api_key="sm_...",
    user_id="user-42",
    top_k=10,
    score_threshold=0.3,  # optional: filter low-relevance results
)

# Use with invoke()
docs = retriever.invoke("What are the user's dietary preferences?")
for doc in docs:
    print(f"{doc.page_content} (score: {doc.metadata['score']})")
```
```python
# Use in a RAG chain
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "Use these memories about the user to answer their question.\n\n"
    "Memories:\n{context}\n\n"
    "Question: {question}"
)

# Join the retrieved Documents into a single context string
def format_docs(docs):
    return "\n".join(doc.page_content for doc in docs)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

answer = chain.invoke("What food does the user like?")
```
SmaraClient -- direct API access
For lower-level control you can use the client directly.
```python
from smara_langchain import SmaraClient

client = SmaraClient(api_key="sm_...")

# Store a memory
client.store(user_id="user-42", fact="User prefers dark mode", importance=0.7)

# Search memories
results = client.search(user_id="user-42", query="UI preferences")

# Get formatted context for an LLM prompt
ctx = client.get_context(user_id="user-42", query="UI preferences", top_n=5)
print(ctx["context"])

# Delete a memory
client.delete(memory_id="some-uuid")
```
Async support
Every method has an async counterpart prefixed with `a`:
```python
await client.astore(user_id="user-42", fact="...")
await client.asearch(user_id="user-42", query="...")
docs = await retriever.ainvoke("query")
```
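Putting the pieces together, a complete async flow might look like the sketch below (the fact text and queries are illustrative, not part of the API):

```python
import asyncio

from smara_langchain import SmaraClient, SmaraRetriever

async def main():
    client = SmaraClient(api_key="sm_...")
    retriever = SmaraRetriever(api_key="sm_...", user_id="user-42")

    # Store a memory, then search and retrieve without blocking the event loop
    await client.astore(user_id="user-42", fact="User prefers window seats")
    results = await client.asearch(user_id="user-42", query="travel preferences")
    docs = await retriever.ainvoke("What are the user's travel preferences?")
    print(f"retrieved {len(docs)} memories")

asyncio.run(main())
```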
Configuration
| Parameter | Default | Description |
|---|---|---|
| `api_key` | required | Your Smara API key (`sm_...`) |
| `user_id` | required | End-user identifier for scoping memories |
| `base_url` | `https://api.smara.io` | API base URL |
| `top_k` | `5` (memory) / `10` (retriever) | Number of memories to retrieve |
| `score_threshold` | `None` | Minimum blended score to include (retriever only) |
| `memory_key` | `"memory"` | Variable name in the chain context (memory only) |
| `return_memories_raw` | `False` | Return raw dicts instead of a formatted string (memory only) |
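For reference, here is a SmaraMemory instance with every memory-side parameter from the table spelled out (values are illustrative; defaults are shown explicitly):

```python
from smara_langchain import SmaraMemory

memory = SmaraMemory(
    api_key="sm_...",                 # required
    user_id="user-42",                # required
    base_url="https://api.smara.io",  # default, shown for completeness
    top_k=5,                          # default for the memory class
    memory_key="memory",              # variable name exposed to the chain
    return_memories_raw=False,        # formatted string rather than raw dicts
)
```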
How Smara ranking works
Smara combines semantic similarity with Ebbinghaus decay (memories naturally fade over time but are refreshed on access) and importance scoring to produce a blended relevance score. This means frequently accessed, important, and semantically relevant memories surface first -- mimicking how human memory works.
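The exact formula is internal to Smara, but the blend can be pictured with a sketch like this one (illustrative only: the multiplicative form and the 30-day half-life are assumptions, not Smara's published scoring):

```python
import math
import time

def blended_score(similarity: float, importance: float,
                  last_accessed: float, half_life_days: float = 30.0) -> float:
    """Toy blend of similarity, importance, and Ebbinghaus-style decay."""
    age_days = (time.time() - last_accessed) / 86400
    retention = math.exp(-math.log(2) * age_days / half_life_days)  # 0.5 at one half-life
    return similarity * importance * retention

# A memory touched an hour ago outranks an equally relevant one from a month ago
recent = blended_score(0.8, importance=0.7, last_accessed=time.time() - 3600)
stale = blended_score(0.8, importance=0.7, last_accessed=time.time() - 30 * 86400)
assert recent > stale
```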
Download files
File details
Details for the file smara_langchain-0.1.0.tar.gz.
File metadata
- Download URL: smara_langchain-0.1.0.tar.gz
- Upload date:
- Size: 5.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `f0b712c1b3a665c21557cc7d026d622bc09cd7aa6299580f23f064e8cbb241e1` |
| MD5 | `45cf1e7f27d42be4c408f321beb76c2e` |
| BLAKE2b-256 | `ca3a449ff28bf0104e097d36a80c96691b4fefc1eb4c91cd6e70d8ab40ad04e3` |
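To verify a downloaded archive against the SHA256 digest above (the local filename is assumed to match the download URL):

```python
import hashlib

expected = "f0b712c1b3a665c21557cc7d026d622bc09cd7aa6299580f23f064e8cbb241e1"
with open("smara_langchain-0.1.0.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print("OK" if actual == expected else "hash mismatch")
```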
File details
Details for the file smara_langchain-0.1.0-py3-none-any.whl.
File metadata
- Download URL: smara_langchain-0.1.0-py3-none-any.whl
- Upload date:
- Size: 7.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `9f3e7160d9c53c3b53ba38e94c5d5a13597a1ebee0217e27fedccbac7dcaa2a9` |
| MD5 | `25f06958560d4db40ddb5a0c61b0b601` |
| BLAKE2b-256 | `de881abd35171923ec3f8cc79dea278a08c438a98decdcfeb95844213e185074` |