
LlamaIndex integration for the Dakera AI memory platform

Project description

llamaindex-dakera


Drop-in LlamaIndex components backed by Dakera — persistent agent memory and server-side vector indexing with no local embedding model.

DakeraMemoryStore gives your LlamaIndex agents conversation memory that survives restarts. DakeraIndexStore replaces local vector indices with Dakera's server-side embedding engine — no OpenAI embeddings API needed for RAG.


Quick Start

Step 1 — Run Dakera

Dakera is a self-hosted memory server. Spin it up with Docker:

docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest

For a production setup with persistent storage, use Docker Compose:

# Download and start
curl -sSfL https://raw.githubusercontent.com/Dakera-AI/dakera-deploy/main/docker-compose.yml \
  -o docker-compose.yml
DAKERA_API_KEY=dk-mykey docker compose up -d

# Verify it's running
curl http://localhost:3300/health

Full deployment guide: github.com/Dakera-AI/dakera-deploy

Step 2 — Install the integration

pip install llamaindex-dakera

Step 3 — Use it

from llama_index_dakera import DakeraMemoryStore, DakeraIndexStore

# Agent memory
memory = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
)

# RAG index — no local embedding model needed
vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)

Installation

pip install llamaindex-dakera

Requirements: Python ≥ 3.10, a running Dakera server (see Step 1 above)


DakeraMemoryStore

Persistent conversation memory for LlamaIndex agents. Drop-in replacement for the default in-memory store.

Usage with a chat agent

from llama_index.core.agent import ReActAgent
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.llms.openai import OpenAI
from llama_index_dakera import DakeraMemoryStore

store = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="react-agent",
)

memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=store,
    chat_store_key="user-1",
)

agent = ReActAgent.from_tools(
    tools=[...],
    llm=OpenAI(model="gpt-4o"),
    memory=memory,
    verbose=True,
)

# First session
response = agent.chat("My project is called NeuralBridge.")
print(response)

# Later session — memory persists
response = agent.chat("What's the name of my project?")
print(response)  # "Your project is called NeuralBridge."
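`DakeraMemoryStore` plugs into `ChatMemoryBuffer` by acting as a LlamaIndex chat store, keyed by `chat_store_key`, with messages persisted on the Dakera server instead of in process memory. As a rough illustration of that contract (an in-memory stand-in, not Dakera's actual implementation; method names follow LlamaIndex's chat-store convention):

```python
from collections import defaultdict

class InMemoryChatStore:
    """Illustrative stand-in for the chat-store contract that
    ChatMemoryBuffer relies on. DakeraMemoryStore keeps the same
    key -> message-list mapping, but server-side, so it survives
    process restarts."""

    def __init__(self):
        self._store = defaultdict(list)  # key -> list of (role, content)

    def add_message(self, key, message):
        self._store[key].append(message)

    def get_messages(self, key):
        return list(self._store[key])

    def delete_messages(self, key):
        self._store.pop(key, None)

store = InMemoryChatStore()
store.add_message("user-1", ("user", "My project is called NeuralBridge."))
store.add_message("user-1", ("assistant", "Got it!"))
print(len(store.get_messages("user-1")))  # 2
```

With the in-memory version above, the history vanishes when the process exits; swapping in `DakeraMemoryStore` is what makes the second session in the example able to answer.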

Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_url` | `str` | — | Dakera server URL |
| `api_key` | `str` | `""` | Dakera API key |
| `agent_id` | `str` | — | Namespace for this agent's memories |
| `top_k` | `int` | `5` | Memories to retrieve per query |
| `min_importance` | `float` | `0.0` | Minimum importance for recall |
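`top_k` and `min_importance` work together: memories below the importance threshold are excluded from recall, and the `top_k` most relevant of the rest are returned. The scoring itself happens on the Dakera server; this client-side sketch (field names are illustrative) just shows how the two parameters interact:

```python
def recall(memories, top_k=5, min_importance=0.0):
    """Illustrative view of the recall parameters: drop memories
    below the importance floor, then keep the top_k by relevance."""
    eligible = [m for m in memories if m["importance"] >= min_importance]
    ranked = sorted(eligible, key=lambda m: m["relevance"], reverse=True)
    return ranked[:top_k]

memories = [
    {"text": "likes tea", "importance": 0.2, "relevance": 0.9},
    {"text": "project is NeuralBridge", "importance": 0.8, "relevance": 0.7},
    {"text": "greeted us", "importance": 0.1, "relevance": 0.3},
]
print([m["text"] for m in recall(memories, top_k=2, min_importance=0.15)])
# ['likes tea', 'project is NeuralBridge']
```

Raising `min_importance` trades recall breadth for precision: trivia gets filtered out before it can crowd the `top_k` slots.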

DakeraIndexStore

Server-side embedded vector store for RAG. Dakera embeds documents on the server — no local model, no OpenAI embeddings API needed.

Indexing documents

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index_dakera import DakeraIndexStore

# Load documents
documents = SimpleDirectoryReader("./docs").load_data()

# Create index backed by Dakera
vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="product-docs",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)
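Because embedding happens server-side, the store ships plain text over HTTP rather than computing vectors locally. The actual wire format is Dakera's own; this hypothetical sketch (endpoint-free, with assumed field names) only illustrates the shape of such an upsert, sending text and metadata but no embeddings:

```python
import json

def build_upsert_request(namespace, nodes, api_key):
    """Hypothetical request builder: the client sends raw text and
    metadata; the server computes and stores the embeddings."""
    payload = {
        "namespace": namespace,
        "documents": [
            {"id": n["id"], "text": n["text"], "metadata": n.get("metadata", {})}
            for n in nodes
        ],
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return headers, json.dumps(payload)

headers, body = build_upsert_request(
    "product-docs",
    [{"id": "doc-1", "text": "Billing is monthly.", "metadata": {"source": "faq"}}],
    "dk-mykey",
)
print(json.loads(body)["documents"][0]["text"])  # Billing is monthly.
```

The practical upshot is that indexing works without an `OPENAI_API_KEY` and without downloading a local embedding model.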

Querying

query_engine = index.as_query_engine(similarity_top_k=4)
response = query_engine.query("How does the billing work?")
print(response)

Chat with your documents

from llama_index.core import VectorStoreIndex
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from llama_index.core.memory import ChatMemoryBuffer
from llama_index_dakera import DakeraIndexStore, DakeraMemoryStore

vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="product-docs",
)
# Reconnect to documents already indexed under this namespace
index = VectorStoreIndex.from_vector_store(vector_store)

memory_store = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="doc-chat",
)

chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=index.as_retriever(similarity_top_k=4),
    memory=ChatMemoryBuffer.from_defaults(chat_store=memory_store),
)

response = chat_engine.chat("What are the pricing tiers?")
print(response)

Options

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_url` | `str` | — | Dakera server URL |
| `api_key` | `str` | `""` | Dakera API key |
| `namespace` | `str` | — | Vector namespace to read/write |
| `embedding_model` | `str` | namespace default | Server-side embedding model override |

Related packages

Package Framework Language
crewai-dakera CrewAI Python
langchain-dakera LangChain Python
autogen-dakera AutoGen Python
@dakera-ai/langchain LangChain.js TypeScript



License

MIT © Dakera AI

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llamaindex_dakera-0.1.0.tar.gz (6.5 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llamaindex_dakera-0.1.0-py3-none-any.whl (5.9 kB)

Uploaded Python 3

File details

Details for the file llamaindex_dakera-0.1.0.tar.gz.

File metadata

  • Download URL: llamaindex_dakera-0.1.0.tar.gz
  • Upload date:
  • Size: 6.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llamaindex_dakera-0.1.0.tar.gz
Algorithm Hash digest
SHA256 04a6d83eda30a8fb850d9c5a834236f4f62e96ff2816d9456c50a7c381488510
MD5 09888101eb9dc674dac328de0c960eca
BLAKE2b-256 e03387c3021162c5c9f020e4ab41fe56299ffd6e8ddb33c9768e7f7821af2ea5

See more details on using hashes here.

Provenance

The following attestation bundles were made for llamaindex_dakera-0.1.0.tar.gz:

Publisher: release.yml on Dakera-AI/dakera-llamaindex

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llamaindex_dakera-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llamaindex_dakera-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 723d7379529d9cceb49992654e6261c403ebb8ce50729d7bfb040d0f60360f0a
MD5 f1f7439e74cdce24ea0d5233dc87d786
BLAKE2b-256 ccc584a7668405136bb745d016b87aa774aeca4d5f029a92ff27a581cf2f51f3

See more details on using hashes here.

Provenance

The following attestation bundles were made for llamaindex_dakera-0.1.0-py3-none-any.whl:

Publisher: release.yml on Dakera-AI/dakera-llamaindex

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
