LlamaIndex integration for the Dakera AI memory platform
Project description
llamaindex-dakera
Drop-in LlamaIndex components backed by Dakera — persistent agent memory and server-side vector indexing with no local embedding model.
DakeraMemoryStore gives your LlamaIndex agents conversation memory that survives restarts. DakeraIndexStore replaces local vector indices with Dakera's server-side embedding engine — no OpenAI embeddings API needed for RAG.
Quick Start
Step 1 — Run Dakera
Dakera is a self-hosted memory server. Spin it up with Docker:
docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest
For a production setup with persistent storage, use Docker Compose:
# Download and start
curl -sSfL https://raw.githubusercontent.com/Dakera-AI/dakera-deploy/main/docker-compose.yml \
  -o docker-compose.yml
DAKERA_API_KEY=dk-mykey docker compose up -d
# Verify it's running
curl http://localhost:3300/health
Full deployment guide: github.com/Dakera-AI/dakera-deploy
Step 2 — Install the integration
pip install llamaindex-dakera
Step 3 — Use it
from llama_index_dakera import DakeraMemoryStore, DakeraIndexStore
# Agent memory
memory = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
)

# RAG index — no local embedding model needed
vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)
Installation
pip install llamaindex-dakera
Requirements: Python ≥ 3.10, a running Dakera server (see Step 1 above)
DakeraMemoryStore
Persistent conversation memory for LlamaIndex agents. Drop-in replacement for the default in-memory store.
Usage with a chat agent
from llama_index.core.agent import ReActAgent
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.llms.openai import OpenAI
from llama_index_dakera import DakeraMemoryStore
store = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="react-agent",
)

memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=store,
    chat_store_key="user-1",
)

agent = ReActAgent.from_tools(
    tools=[...],
    llm=OpenAI(model="gpt-4o"),
    memory=memory,
    verbose=True,
)
# First session
response = agent.chat("My project is called NeuralBridge.")
print(response)
# Later session — memory persists
response = agent.chat("What's the name of my project?")
print(response) # "Your project is called NeuralBridge."
Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_url | str | — | Dakera server URL |
| api_key | str | "" | Dakera API key |
| agent_id | str | — | Namespace for this agent's memories |
| top_k | int | 5 | Memories to retrieve per query |
| min_importance | float | 0.0 | Minimum importance for recall |
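Because DakeraMemoryStore is handed to ChatMemoryBuffer as a chat_store, it should also be usable directly through LlamaIndex's BaseChatStore interface. A minimal sketch, assuming the store exposes the standard add_message/get_messages methods (this README does not document them explicitly), with illustrative values for top_k and min_importance:

from llama_index.core.llms import ChatMessage, MessageRole
from llama_index_dakera import DakeraMemoryStore

store = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
    top_k=5,             # memories retrieved per query (default 5)
    min_importance=0.2,  # illustrative threshold; default is 0.0
)

# Assumes the standard BaseChatStore methods: write one message, then read the history back.
store.add_message("user-1", ChatMessage(role=MessageRole.USER, content="My project is NeuralBridge."))
for message in store.get_messages("user-1"):
    print(message.role, message.content)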
DakeraIndexStore
Server-side embedded vector store for RAG. Dakera embeds documents on the server — no local model, no OpenAI embeddings API needed.
Indexing documents
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index_dakera import DakeraIndexStore
# Load documents
documents = SimpleDirectoryReader("./docs").load_data()
# Create index backed by Dakera
vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="product-docs",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)
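Because embedding happens on the Dakera server, growing the index later does not require a local embedding pipeline. A small sketch using LlamaIndex's standard insert method on the index created above, assuming the Dakera store handles embedding for inserted nodes as described:

from llama_index.core import Document

# The new node is written through the Dakera-backed vector store; no local embedding model is involved.
index.insert(Document(
    text="Billing is monthly and based on the number of stored memories.",
    metadata={"source": "pricing-faq"},
))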
Querying
query_engine = index.as_query_engine(similarity_top_k=4)
response = query_engine.query("How does the billing work?")
print(response)
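To inspect which chunks Dakera returns for a question, and with what similarity scores, you can use the retriever directly instead of the full query engine. This is plain LlamaIndex retriever usage on top of the index above:

retriever = index.as_retriever(similarity_top_k=4)
nodes = retriever.retrieve("How does the billing work?")

# Each result is a NodeWithScore: the matched chunk plus its similarity score.
for node_with_score in nodes:
    print(f"{node_with_score.score:.3f}  {node_with_score.node.get_content()[:80]}")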
Chat with your documents
from llama_index.core import VectorStoreIndex
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from llama_index.core.memory import ChatMemoryBuffer
from llama_index_dakera import DakeraIndexStore, DakeraMemoryStore

vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="product-docs",
)

# Reconnect to documents already indexed in the "product-docs" namespace
index = VectorStoreIndex.from_vector_store(vector_store)

memory_store = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="doc-chat",
)

chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=index.as_retriever(similarity_top_k=4),
    memory=ChatMemoryBuffer.from_defaults(chat_store=memory_store),
)
response = chat_engine.chat("What are the pricing tiers?")
print(response)
Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_url | str | — | Dakera server URL |
| api_key | str | "" | Dakera API key |
| namespace | str | — | Vector namespace to read/write |
| embedding_model | str | namespace default | Server-side embedding model override |
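Per the table above, the server-side model can be overridden per store instance via embedding_model. A hedged sketch; the model name below is a placeholder, not a value documented here:

vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="product-docs",
    # Placeholder model id; use whatever embedding models your Dakera server actually serves.
    embedding_model="example-embedding-model",
)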
Related packages
| Package | Framework | Language |
|---|---|---|
| crewai-dakera | CrewAI | Python |
| langchain-dakera | LangChain | Python |
| autogen-dakera | AutoGen | Python |
| @dakera-ai/langchain | LangChain.js | TypeScript |
Links
- Dakera Server — self-hosted memory server
- Dakera Python SDK — low-level API client
- Documentation
- All integrations
License
MIT © Dakera AI
File details
Details for the file llamaindex_dakera-0.1.0.tar.gz.
File metadata
- Download URL: llamaindex_dakera-0.1.0.tar.gz
- Upload date:
- Size: 6.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 04a6d83eda30a8fb850d9c5a834236f4f62e96ff2816d9456c50a7c381488510 |
| MD5 | 09888101eb9dc674dac328de0c960eca |
| BLAKE2b-256 | e03387c3021162c5c9f020e4ab41fe56299ffd6e8ddb33c9768e7f7821af2ea5 |
Provenance
The following attestation bundles were made for llamaindex_dakera-0.1.0.tar.gz:
Publisher: release.yml on Dakera-AI/dakera-llamaindex
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llamaindex_dakera-0.1.0.tar.gz
- Subject digest: 04a6d83eda30a8fb850d9c5a834236f4f62e96ff2816d9456c50a7c381488510
- Sigstore transparency entry: 1524422297
- Sigstore integration time:
- Permalink: Dakera-AI/dakera-llamaindex@6c50f6868e205cd228445ba39bf5ce13ee6538d3
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Dakera-AI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@6c50f6868e205cd228445ba39bf5ce13ee6538d3
- Trigger Event: workflow_dispatch
File details
Details for the file llamaindex_dakera-0.1.0-py3-none-any.whl.
File metadata
- Download URL: llamaindex_dakera-0.1.0-py3-none-any.whl
- Upload date:
- Size: 5.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 723d7379529d9cceb49992654e6261c403ebb8ce50729d7bfb040d0f60360f0a |
| MD5 | f1f7439e74cdce24ea0d5233dc87d786 |
| BLAKE2b-256 | ccc584a7668405136bb745d016b87aa774aeca4d5f029a92ff27a581cf2f51f3 |
Provenance
The following attestation bundles were made for llamaindex_dakera-0.1.0-py3-none-any.whl:
Publisher: release.yml on Dakera-AI/dakera-llamaindex
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llamaindex_dakera-0.1.0-py3-none-any.whl
- Subject digest: 723d7379529d9cceb49992654e6261c403ebb8ce50729d7bfb040d0f60360f0a
- Sigstore transparency entry: 1524422306
- Sigstore integration time:
- Permalink: Dakera-AI/dakera-llamaindex@6c50f6868e205cd228445ba39bf5ce13ee6538d3
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Dakera-AI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@6c50f6868e205cd228445ba39bf5ce13ee6538d3
- Trigger Event: workflow_dispatch