
LangChain integration for the Dakera AI memory platform


langchain-dakera


Drop-in LangChain components backed by Dakera — persistent agent memory and server-side RAG with no local embedding model.

DakeraMemory gives your chains conversation memory that survives restarts. DakeraVectorStore powers RAG with Dakera's built-in embedding engine — no OpenAI embeddings needed.


Quick Start

Step 1 — Run Dakera

Dakera is a self-hosted memory server. Spin it up with Docker:

docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest

For a production setup with persistent storage, use Docker Compose:

# Download and start
curl -sSfL https://raw.githubusercontent.com/Dakera-AI/dakera-deploy/main/docker-compose.yml \
  -o docker-compose.yml
DAKERA_API_KEY=dk-mykey docker compose up -d

# Verify it's running
curl http://localhost:3300/health

Full deployment guide: github.com/Dakera-AI/dakera-deploy

Step 2 — Install the integration

pip install langchain-dakera

Step 3 — Use it

from langchain_dakera import DakeraMemory, DakeraVectorStore

# Persistent conversation memory
memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
)

# RAG vector store — no local embedding model needed
vectorstore = DakeraVectorStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)

Installation

pip install langchain-dakera

Requirements: Python ≥ 3.10 and a running Dakera server (see Step 1 above).


DakeraMemory

Persistent semantic memory for LangChain conversation chains. Stores and recalls conversation history using Dakera's hybrid search.

Usage

from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI
from langchain_dakera import DakeraMemory

memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="chat-agent",
    top_k=5,        # memories to recall per turn
    importance=0.7, # importance score for stored memories
)

chain = ConversationChain(
    llm=ChatOpenAI(model="gpt-4o"),
    memory=memory,
)

# First session
response = chain.predict(input="My name is Alice and I'm building a chatbot.")
print(response)

# Later session — memory persists across restarts
response = chain.predict(input="What was I building?")
print(response)  # "You mentioned you were building a chatbot."

Options

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_url` | `str` | | Dakera server URL |
| `api_key` | `str` | `""` | Dakera API key |
| `agent_id` | `str` | | Agent identifier for memory namespacing |
| `top_k` | `int` | `5` | Memories to surface per turn |
| `min_importance` | `float` | `0.0` | Minimum importance threshold for recall |
| `importance` | `float` | `0.7` | Importance assigned to stored memories |
| `memory_key` | `str` | `"history"` | Key injected into the prompt |
| `input_key` | `str` | first input key | Input key used as the recall query |
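
To make `top_k` and `min_importance` concrete, here is a minimal pure-Python sketch of the recall semantics those options describe: drop memories below the importance threshold, then keep the most relevant ones. This is an illustration only, not Dakera's actual implementation, and the `Memory` record shape is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float
    score: float  # relevance of this memory to the current query

def recall(memories, top_k=5, min_importance=0.0):
    """Filter out memories below the importance threshold, then
    return the top_k most relevant of the remainder."""
    eligible = [m for m in memories if m.importance >= min_importance]
    eligible.sort(key=lambda m: m.score, reverse=True)
    return eligible[:top_k]

candidates = [
    Memory("User is named Alice", importance=0.9, score=0.8),
    Memory("User said 'hmm'", importance=0.1, score=0.9),
    Memory("User is building a chatbot", importance=0.7, score=0.6),
]
print([m.text for m in recall(candidates, top_k=2, min_importance=0.5)])
# ['User is named Alice', 'User is building a chatbot']
```

Note that raising `min_importance` can exclude a highly relevant but low-importance memory, as the `'hmm'` entry shows here.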

DakeraVectorStore

Server-side embedded vector store for RAG. Dakera handles embeddings — no OpenAI or Hugging Face API calls needed for indexing.

Indexing documents

from langchain_community.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_dakera import DakeraVectorStore

# Load and split documents
loader = DirectoryLoader("./docs", glob="**/*.md")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

# Index into Dakera (server handles embedding)
vectorstore = DakeraVectorStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)
vectorstore.add_documents(chunks)
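
As an aside, `chunk_size` and `chunk_overlap` above behave like a sliding window. A simplified character-level sketch of that idea (the real `RecursiveCharacterTextSplitter` additionally prefers natural boundaries such as paragraphs and sentences):

```python
def split_text(text, chunk_size=500, chunk_overlap=50):
    """Naive character-window splitter: each chunk starts
    chunk_size - chunk_overlap characters after the previous one,
    so consecutive chunks share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [
        text[i:i + chunk_size]
        for i in range(0, len(text), step)
        if text[i:i + chunk_size]
    ]

chunks = split_text("x" * 1200, chunk_size=500, chunk_overlap=50)
print([len(c) for c in chunks])  # [500, 500, 300]
```

The overlap keeps context that straddles a chunk boundary retrievable from either side.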

RAG chain

from langchain.chains import RetrievalQA
from langchain_openai import ChatOpenAI
from langchain_dakera import DakeraVectorStore

vectorstore = DakeraVectorStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)

qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)

answer = qa_chain.run("How does Dakera handle memory decay?")
print(answer)
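
For intuition, `RetrievalQA`'s default `"stuff"` chain type simply pastes the retrieved chunks into the LLM prompt as one context block. A minimal plain-Python sketch of that step (the chunk text here is made up for the example):

```python
def stuff_prompt(question, chunks):
    """Build a 'stuff'-style prompt: all retrieved chunks joined
    into a single context block, followed by the question."""
    context = "\n\n".join(chunks)
    return (
        "Use the following context to answer the question.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

chunks = [
    "Chunk one retrieved from the vector store.",
    "Chunk two retrieved from the vector store.",
]
print(stuff_prompt("How does Dakera handle memory decay?", chunks))
```

Because all chunks land in one prompt, `k` in `search_kwargs` directly trades answer grounding against context-window usage.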

Options

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_url` | `str` | | Dakera server URL |
| `api_key` | `str` | `""` | Dakera API key |
| `namespace` | `str` | | Vector namespace to read/write |
| `embedding_model` | `str` | namespace default | Server-side embedding model override |

Related packages

| Package | Framework | Language |
| --- | --- | --- |
| `crewai-dakera` | CrewAI | Python |
| `llamaindex-dakera` | LlamaIndex | Python |
| `autogen-dakera` | AutoGen | Python |
| `@dakera-ai/langchain` | LangChain.js | TypeScript |


License

MIT © Dakera AI
