
LlamaIndex integration for the Dakera AI memory platform

Project description

llamaindex-dakera


Drop-in LlamaIndex components backed by Dakera — persistent agent memory and server-side vector indexing with no local embedding model.

DakeraMemoryStore gives your LlamaIndex agents conversation memory that survives restarts. DakeraIndexStore replaces local vector indices with Dakera's server-side embedding engine — no OpenAI embeddings API needed for RAG.


Quick Start

Step 1 — Run Dakera

Dakera is a self-hosted memory server. Spin it up with Docker:

docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest

For a production setup with persistent storage, use Docker Compose:

# Download and start
curl -sSfL https://raw.githubusercontent.com/Dakera-AI/dakera-deploy/main/docker-compose.yml \
  -o docker-compose.yml
DAKERA_API_KEY=dk-mykey docker compose up -d

# Verify it's running
curl http://localhost:3300/health

Full deployment guide: github.com/Dakera-AI/dakera-deploy
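For reference, a minimal Compose file along these lines can be sketched as follows. The service layout mirrors the docker run command above; the volume name and the /data mount path are assumptions, and the file shipped in dakera-deploy is authoritative:

```yaml
services:
  dakera:
    image: ghcr.io/dakera-ai/dakera:latest
    ports:
      - "3300:3300"
    environment:
      # Same key the docker run example sets
      DAKERA_ROOT_API_KEY: ${DAKERA_API_KEY}
    volumes:
      # Named volume so memories survive container recreation
      # (the /data path inside the container is an assumption)
      - dakera-data:/data
volumes:
  dakera-data:
```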

Step 2 — Install the integration

pip install llamaindex-dakera

Step 3 — Use it

from llama_index_dakera import DakeraMemoryStore, DakeraIndexStore

# Agent memory
memory = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
)

# RAG index — no local embedding model needed
vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="my-docs",
)

Installation

pip install llamaindex-dakera

Requirements: Python ≥ 3.10 and a running Dakera server (see Step 1 above).


DakeraMemoryStore

Persistent conversation memory for LlamaIndex agents. Drop-in replacement for the default in-memory store.

Usage with a chat agent

from llama_index.core.agent import ReActAgent
from llama_index.core.memory import ChatMemoryBuffer
from llama_index.llms.openai import OpenAI
from llama_index_dakera import DakeraMemoryStore

store = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="react-agent",
)

memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=store,
    chat_store_key="user-1",
)

agent = ReActAgent.from_tools(
    tools=[...],
    llm=OpenAI(model="gpt-4o"),
    memory=memory,
    verbose=True,
)

# First session
response = agent.chat("My project is called NeuralBridge.")
print(response)

# Later session — memory persists
response = agent.chat("What's the name of my project?")
print(response)  # "Your project is called NeuralBridge."

Options

Parameter       Type    Default     Description
api_url         str     (required)  Dakera server URL
api_key         str     ""          Dakera API key
agent_id        str     (required)  Namespace for this agent's memories
top_k           int     5           Memories to retrieve per query
min_importance  float   0.0         Minimum importance score for recall
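DakeraMemoryStore slots in wherever LlamaIndex expects a chat store. The contract it fulfills can be sketched with a self-contained, dict-backed stand-in (method names follow LlamaIndex's chat-store interface; plain strings stand in for ChatMessage objects, and the dict stands in for the Dakera server):

```python
from typing import Dict, List


class DictChatStore:
    """Toy stand-in for a chat store: same method names LlamaIndex's
    chat-store interface defines, backed by an in-memory dict."""

    def __init__(self) -> None:
        self._store: Dict[str, List[str]] = {}

    def set_messages(self, key: str, messages: List[str]) -> None:
        # Replace the whole history for a key
        self._store[key] = list(messages)

    def get_messages(self, key: str) -> List[str]:
        return self._store.get(key, [])

    def add_message(self, key: str, message: str) -> None:
        self._store.setdefault(key, []).append(message)

    def delete_messages(self, key: str) -> None:
        self._store.pop(key, None)

    def get_keys(self) -> List[str]:
        return list(self._store.keys())


store = DictChatStore()
store.add_message("user-1", "My project is called NeuralBridge.")
store.add_message("user-1", "Your project is called NeuralBridge.")
print(store.get_messages("user-1"))
```

The real store persists each key's messages to the Dakera server over HTTP, which is why they survive process restarts.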

DakeraIndexStore

Vector store for RAG with server-side embedding. Dakera embeds documents on the server — no local model, no OpenAI embeddings API needed.

Indexing documents

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext
from llama_index_dakera import DakeraIndexStore

# Load documents
documents = SimpleDirectoryReader("./docs").load_data()

# Create index backed by Dakera
vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="product-docs",
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
)

Querying

query_engine = index.as_query_engine(similarity_top_k=4)
response = query_engine.query("How does the billing work?")
print(response)

Chat with your documents

from llama_index.core import VectorStoreIndex
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from llama_index.core.memory import ChatMemoryBuffer
from llama_index_dakera import DakeraIndexStore, DakeraMemoryStore

vector_store = DakeraIndexStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    namespace="product-docs",
)

# Reconnect to the index populated above; no re-indexing needed
index = VectorStoreIndex.from_vector_store(vector_store=vector_store)

memory_store = DakeraMemoryStore(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="doc-chat",
)

chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=index.as_retriever(similarity_top_k=4),
    memory=ChatMemoryBuffer.from_defaults(chat_store=memory_store),
)

response = chat_engine.chat("What are the pricing tiers?")
print(response)

Options

Parameter        Type   Default            Description
api_url          str    (required)         Dakera server URL
api_key          str    ""                 Dakera API key
namespace        str    (required)         Vector namespace to read/write
embedding_model  str    namespace default  Server-side embedding model override
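Conceptually, the server replaces the local embed-and-rank step of RAG retrieval. The sketch below shows that step with a toy bag-of-words embedding and cosine similarity; in production, the namespace's embedding_model does the embedding server-side:

```python
import math
from collections import Counter
from typing import Dict, List, Tuple


def embed(text: str) -> Dict[str, float]:
    # Toy bag-of-words "embedding"; the server's model replaces this.
    return dict(Counter(text.lower().split()))


def cosine(a: Dict[str, float], b: Dict[str, float]) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: List[str], k: int = 2) -> List[Tuple[float, str]]:
    # Embed the query, score every document, keep the top k
    q = embed(query)
    scored = sorted(((cosine(q, embed(d)), d) for d in docs), reverse=True)
    return scored[:k]


docs = [
    "Billing is monthly per seat.",
    "The API uses bearer tokens.",
    "Invoices for billing are emailed monthly.",
]
for score, doc in retrieve("how does billing work", docs):
    print(f"{score:.2f}  {doc}")
```

With DakeraIndexStore, both the embed calls and the similarity search happen inside the server, so the client only ships text and receives ranked nodes.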

Related packages

Package               Framework     Language
crewai-dakera         CrewAI        Python
langchain-dakera      LangChain     Python
autogen-dakera        AutoGen       Python
@dakera-ai/langchain  LangChain.js  TypeScript



License

MIT © Dakera AI

Project details


Download files

Download the file for your platform.

Source Distribution

llamaindex_dakera-0.1.1.tar.gz (8.4 kB)

Uploaded Source

Built Distribution


llamaindex_dakera-0.1.1-py3-none-any.whl (6.0 kB)

Uploaded Python 3

File details

Details for the file llamaindex_dakera-0.1.1.tar.gz.

File metadata

  • Download URL: llamaindex_dakera-0.1.1.tar.gz
  • Upload date:
  • Size: 8.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llamaindex_dakera-0.1.1.tar.gz
Algorithm Hash digest
SHA256 d18c9b5946bb481658d7f666807e24d946646132c0aab298c16404cfba32e5a4
MD5 d30c3038a8ac614a37ce1f4443fad610
BLAKE2b-256 3562464c0879b8d718d1b37cc118d52a9ebbfe47ab1c586319b3853b54af1eea


Provenance

The following attestation bundles were made for llamaindex_dakera-0.1.1.tar.gz:

Publisher: release.yml on Dakera-AI/dakera-llamaindex

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llamaindex_dakera-0.1.1-py3-none-any.whl.

File metadata

File hashes

Hashes for llamaindex_dakera-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 cf82aef10ff04a2d6f8b7fec550d8cdb835be93d122db057dda1b663b44c969f
MD5 e154215fbcddc2377d01a7cd5ee9bcbc
BLAKE2b-256 31bd2b72873f357280baa73bcc6fbbe4eb283495a167d2f2c2b042d757f01435


Provenance

The following attestation bundles were made for llamaindex_dakera-0.1.1-py3-none-any.whl:

Publisher: release.yml on Dakera-AI/dakera-llamaindex

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
