# langchain-dakera

LangChain integration for the Dakera AI memory platform.
Drop-in LangChain components backed by Dakera — persistent agent memory and server-side RAG with no local embedding model.
DakeraMemory gives your chains conversation memory that survives restarts. DakeraVectorStore powers RAG with Dakera's built-in embedding engine — no OpenAI embeddings needed.
## Quick Start

### Step 1 — Run Dakera

Dakera is a self-hosted memory server. Spin it up with Docker:
```shell
docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest
```
For a production setup with persistent storage, use Docker Compose:

```shell
# Download and start
curl -sSfL https://raw.githubusercontent.com/Dakera-AI/dakera-deploy/main/docker-compose.yml \
  -o docker-compose.yml
DAKERA_API_KEY=dk-mykey docker compose up -d

# Verify it's running
curl http://localhost:3300/health
```
Full deployment guide: github.com/Dakera-AI/dakera-deploy
### Step 2 — Install the integration

```shell
pip install langchain-dakera
```
### Step 3 — Use it

```python
from langchain_dakera import DakeraMemory, DakeraVectorStore

# Persistent conversation memory
memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
)

# RAG vector store — no local embedding model needed
vectorstore = DakeraVectorStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)
```
## Installation

```shell
pip install langchain-dakera
```

Requirements: Python ≥ 3.10 and a running Dakera server (see Step 1 above).
## DakeraMemory
Persistent semantic memory for LangChain conversation chains. Stores and recalls conversation history using Dakera's hybrid search.
### Usage

```python
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI
from langchain_dakera import DakeraMemory

memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="chat-agent",
    recall_k=5,      # memories to recall per turn
    importance=0.7,  # importance score for stored memories
)

chain = ConversationChain(
    llm=ChatOpenAI(model="gpt-4o"),
    memory=memory,
)

# First session
response = chain.predict(input="My name is Alice and I'm building a chatbot.")
print(response)

# Later session — memory persists across restarts
response = chain.predict(input="What was I building?")
print(response)  # "You mentioned you were building a chatbot."
```
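Since `DakeraMemory` follows LangChain's standard memory interface, it can also be exercised directly, outside a chain, which is handy for inspecting exactly what gets stored and recalled. A minimal sketch, assuming the class implements the usual `save_context` / `load_memory_variables` methods of LangChain memory classes and a Dakera server is running at `localhost:3300`:

```python
from langchain_dakera import DakeraMemory

memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="chat-agent",
)

# Store one exchange, then recall against a fresh query
memory.save_context(
    {"input": "My name is Alice."},
    {"output": "Nice to meet you, Alice!"},
)
recalled = memory.load_memory_variables({"input": "Who am I talking to?"})
print(recalled["history"])
```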
### Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_url` | `str` | — | Dakera server URL |
| `api_key` | `str` | `""` | Dakera API key |
| `agent_id` | `str` | — | Agent identifier for memory namespacing |
| `recall_k` | `int` | `5` | Memories to surface per turn |
| `min_importance` | `float` | `0.0` | Minimum importance threshold for recall |
| `importance` | `float` | `0.7` | Importance assigned to stored memories |
| `memory_key` | `str` | `"history"` | Key injected into the prompt |
| `input_key` | `str` | first key | Input key used as recall query |
## DakeraVectorStore
Server-side embedded vector store for RAG. Dakera handles embeddings — no OpenAI or Hugging Face API calls needed for indexing.
### Indexing documents

```python
from langchain_community.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_dakera import DakeraVectorStore

# Load and split documents
loader = DirectoryLoader("./docs", glob="**/*.md")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

# Index into Dakera (server handles embedding)
vectorstore = DakeraVectorStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)
vectorstore.add_documents(chunks)
```
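For intuition about the splitter settings: with `chunk_size=500` and `chunk_overlap=50`, each chunk starts roughly 450 characters after the previous one, so neighboring chunks share 50 characters of context. A simplified character-level sketch of that sliding window (the real `RecursiveCharacterTextSplitter` additionally prefers paragraph and sentence boundaries):

```python
def naive_chunks(text: str, chunk_size: int = 500, chunk_overlap: int = 50) -> list[str]:
    """Fixed-size sliding window: consecutive chunks share chunk_overlap characters."""
    stride = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), stride)]

text = "".join(str(i % 10) for i in range(1200))
chunks = naive_chunks(text)
print(len(chunks))                        # 3 chunks, starting at 0, 450, 900
print(chunks[0][-50:] == chunks[1][:50])  # True: 50-character overlap
```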
### RAG chain

```python
from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI
from langchain_dakera import DakeraVectorStore

vectorstore = DakeraVectorStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)

qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)

answer = qa_chain.run("How does Dakera handle memory decay?")
print(answer)
```
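The store can also be queried outside a chain. A sketch, assuming `DakeraVectorStore` implements LangChain's standard `similarity_search` method (the query string is embedded server-side, just like documents at indexing time):

```python
from langchain_dakera import DakeraVectorStore

vectorstore = DakeraVectorStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)

# Top-4 semantically closest chunks for an ad-hoc query
docs = vectorstore.similarity_search("memory decay", k=4)
for doc in docs:
    print(doc.page_content[:80])
```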
### Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_url` | `str` | — | Dakera server URL |
| `api_key` | `str` | `""` | Dakera API key |
| `namespace` | `str` | — | Vector namespace to read/write |
| `embedding_model` | `str` | namespace default | Server-side embedding model override |
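Pinning a specific server-side model keeps query and index embeddings consistent even if the namespace default changes later. A sketch; `example-embed-v1` is a hypothetical model name for illustration, substitute one your Dakera server actually exposes:

```python
from langchain_dakera import DakeraVectorStore

# "example-embed-v1" is a placeholder model name, not a documented Dakera model
vectorstore = DakeraVectorStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
    embedding_model="example-embed-v1",
)
```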
## Related packages

| Package | Framework | Language |
|---|---|---|
| `crewai-dakera` | CrewAI | Python |
| `llamaindex-dakera` | LlamaIndex | Python |
| `autogen-dakera` | AutoGen | Python |
| `@dakera-ai/langchain` | LangChain.js | TypeScript |
## Links
- Dakera Server — self-hosted memory server
- Dakera Python SDK — low-level API client
- Documentation
- All integrations
## License
MIT © Dakera AI
## File details: langchain_dakera-0.1.1.tar.gz

File metadata:

- Download URL: langchain_dakera-0.1.1.tar.gz
- Size: 8.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `07b0850e2afb76f3958c5d534ba45a6ec05b5a1d92fb36a5b69e46ab2a0226ed` |
| MD5 | `719f431047da8b78da2d32795315f32e` |
| BLAKE2b-256 | `ce254c0a5924f99104796edb1e1982f6811fba8bca77d5c599bb2f34305e19b9` |
### Provenance

The following attestation bundles were made for langchain_dakera-0.1.1.tar.gz:

Publisher: release.yml on Dakera-AI/dakera-langchain

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: langchain_dakera-0.1.1.tar.gz
- Subject digest: 07b0850e2afb76f3958c5d534ba45a6ec05b5a1d92fb36a5b69e46ab2a0226ed
- Sigstore transparency entry: 1524917253
- Permalink: Dakera-AI/dakera-langchain@2a5468fd8e22ef81ec75846cf803438fa311c63e
- Branch / Tag: refs/tags/v0.1.1
- Owner: https://github.com/Dakera-AI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@2a5468fd8e22ef81ec75846cf803438fa311c63e
- Trigger Event: release
File details
Details for the file langchain_dakera-0.1.1-py3-none-any.whl.
File metadata
- Download URL: langchain_dakera-0.1.1-py3-none-any.whl
- Upload date:
- Size: 6.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
3846993bada6f292cfbcb95394bc31474f4f2a21ff0c0682902786ef21b9dee9
|
|
| MD5 |
b430b6cb3291c4a31387335882f71127
|
|
| BLAKE2b-256 |
f7ffcb8ed1a7249afe76488917cb66b465581ebb9e781871f16f6c9869d998f3
|
### Provenance

The following attestation bundles were made for langchain_dakera-0.1.1-py3-none-any.whl:

Publisher: release.yml on Dakera-AI/dakera-langchain

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: langchain_dakera-0.1.1-py3-none-any.whl
- Subject digest: 3846993bada6f292cfbcb95394bc31474f4f2a21ff0c0682902786ef21b9dee9
- Sigstore transparency entry: 1524917271
- Permalink: Dakera-AI/dakera-langchain@2a5468fd8e22ef81ec75846cf803438fa311c63e
- Branch / Tag: refs/tags/v0.1.1
- Owner: https://github.com/Dakera-AI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@2a5468fd8e22ef81ec75846cf803438fa311c63e
- Trigger Event: release