# autogen-dakera

AutoGen integration for the Dakera AI memory platform.

Persistent, semantically recalled memory for AutoGen agents, powered by Dakera. Your AutoGen agents remember everything across sessions and restarts: Dakera handles embedding, storage, and retrieval server-side, with no local model required.
## Quick Start

### Step 1 — Run Dakera

Dakera is a self-hosted memory server. Spin it up with Docker:

```shell
docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest
```
For a production setup with persistent storage, use Docker Compose:

```shell
# Download and start
curl -sSfL https://raw.githubusercontent.com/Dakera-AI/dakera-deploy/main/docker-compose.yml \
  -o docker-compose.yml
DAKERA_API_KEY=dk-mykey docker compose up -d

# Verify it's running
curl http://localhost:3300/health
```

Full deployment guide: github.com/Dakera-AI/dakera-deploy
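The container can take a few seconds to become ready. A minimal readiness poll is sketched below; the `/health` endpoint is the one shown above, while the retry counts and timeouts are illustrative choices:

```python
import time
import urllib.request
from urllib.error import URLError


def wait_for_health(url: str, attempts: int = 10, delay: float = 1.0) -> bool:
    """Poll a health endpoint until it answers HTTP 200, or give up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (URLError, OSError):
            pass  # server not up yet; retry after a short pause
        time.sleep(delay)
    return False
```

Call `wait_for_health("http://localhost:3300/health")` before wiring up agents, so a slow container start doesn't surface as a confusing memory error later.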
### Step 2 — Install the integration

```shell
pip install autogen-dakera
```
### Step 3 — Add memory to your agent

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_dakera import DakeraMemory

memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
)

model_client = OpenAIChatCompletionClient(model="gpt-4o")

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    memory=[memory],
)
# The agent now persists what it learns across sessions
```
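`agent_id` is a logical key for a memory space, so agents configured with the same `agent_id` see the same memories. One common pattern (our suggestion, not part of the Dakera API) is to derive a per-user id so each user gets an isolated memory:

```python
import re


def make_agent_id(base: str, user_id: str) -> str:
    """Build a per-user memory namespace like 'assistant/user-42'.

    The separator and sanitization rules here are illustrative
    choices, not requirements of Dakera.
    """
    safe = re.sub(r"[^a-zA-Z0-9_-]", "-", user_id)
    return f"{base}/{safe}"


make_agent_id("assistant", "user 42")  # "assistant/user-42"
```

You would then pass `agent_id=make_agent_id("assistant", current_user)` when constructing `DakeraMemory`.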
## Installation

```shell
# Core + integration
pip install autogen-dakera

# With AutoGen (if not already installed)
pip install "autogen-dakera[autogen]"
```

Requirements: Python ≥ 3.10 and a running Dakera server (see Step 1 above).
## Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_url` | `str` | — | Dakera server URL (e.g. `http://localhost:3300`) |
| `api_key` | `str` | `""` | API key, matching the server's `DAKERA_ROOT_API_KEY` |
| `agent_id` | `str` | — | Logical identifier for this agent's memory |
| `min_importance` | `float` | `0.0` | Minimum importance score for recalled memories |
| `top_k` | `int` | `5` | Number of memories to surface per query |
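Hard-coding the key is fine for local experiments; for anything shared, you would typically read configuration from the environment. A sketch (the variable names mirror the Docker setup above; the localhost fallback is our assumption for local development):

```python
import os


def dakera_config_from_env(agent_id: str) -> dict:
    """Collect DakeraMemory keyword arguments from the environment."""
    return {
        "api_url": os.environ.get("DAKERA_API_URL", "http://localhost:3300"),
        "api_key": os.environ["DAKERA_ROOT_API_KEY"],  # fail loudly if unset
        "agent_id": agent_id,
    }
```

Usage: `memory = DakeraMemory(**dakera_config_from_env("my-agent"))`.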
## Examples

### Multi-agent team with shared memory

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_dakera import DakeraMemory


async def main():
    shared_memory = DakeraMemory(
        api_url="http://localhost:3300",
        api_key="dk-mykey",
        agent_id="research-team",
        top_k=8,
    )

    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    researcher = AssistantAgent(
        name="researcher",
        model_client=model_client,
        memory=[shared_memory],
        system_message="You are a research expert. Remember key findings.",
    )
    analyst = AssistantAgent(
        name="analyst",
        model_client=model_client,
        memory=[shared_memory],
        system_message="You are a data analyst. Build on what the researcher found.",
    )

    team = RoundRobinGroupChat(
        [researcher, analyst],
        termination_condition=MaxMessageTermination(max_messages=6),
    )

    # First session — agents learn and store
    result = await team.run(task="Research AI memory architectures")
    print(result.messages[-1].content)

    # Later session — agents recall prior research
    result = await team.run(task="What do we know about transformer memory?")
    print(result.messages[-1].content)


asyncio.run(main())
```
### Filtering memories by importance

```python
memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
    min_importance=0.7,  # only surface high-quality memories
    top_k=3,
)
```
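Conceptually, `min_importance` and `top_k` prune the candidate list that semantic search returns. A toy sketch of that selection logic (our illustration of the semantics, not Dakera's actual server code; the `importance`/`score` field names are assumptions):

```python
def select_memories(candidates: list[dict], min_importance: float, top_k: int) -> list[dict]:
    """Keep sufficiently important memories, best-scoring first, capped at top_k."""
    kept = [m for m in candidates if m["importance"] >= min_importance]
    kept.sort(key=lambda m: m["score"], reverse=True)
    return kept[:top_k]


candidates = [
    {"text": "user prefers dark mode", "importance": 0.9, "score": 0.82},
    {"text": "small talk about weather", "importance": 0.2, "score": 0.91},
    {"text": "project deadline is Friday", "importance": 0.8, "score": 0.75},
]
select_memories(candidates, min_importance=0.7, top_k=3)
# drops the low-importance chat, returns the rest by relevance score
```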
## How it works

- During conversation, AutoGen calls `DakeraMemory.add()` with new messages
- Dakera embeds the content server-side and stores it with a semantic vector
- Before each agent response, AutoGen calls `DakeraMemory.query()` — Dakera performs hybrid search and returns the most relevant past memories
- Memories are injected into the agent's context automatically
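The cycle above can be sketched as a toy in-memory version, where a hashed bag-of-words "embedding" and cosine similarity stand in for Dakera's server-side models (everything here is our illustration, not the package's implementation):

```python
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words vector keyed by lowercase tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class ToyMemory:
    def __init__(self, top_k: int = 5):
        self.top_k = top_k
        self.store: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        # Steps 1–2: "embed" and store alongside the original text
        self.store.append((text, embed(text)))

    def query(self, text: str) -> list[str]:
        # Step 3: rank stored memories by similarity to the query
        q = embed(text)
        ranked = sorted(self.store, key=lambda item: cosine(q, item[1]), reverse=True)
        return [t for t, _ in ranked[: self.top_k]]


mem = ToyMemory(top_k=1)
mem.add("the user prefers dark mode")
mem.add("the project deadline is Friday")
mem.query("when is the deadline?")  # → ["the project deadline is Friday"]
```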
## Related packages

| Package | Framework | Language |
|---|---|---|
| `crewai-dakera` | CrewAI | Python |
| `langchain-dakera` | LangChain | Python |
| `llamaindex-dakera` | LlamaIndex | Python |
| `@dakera-ai/langchain` | LangChain.js | TypeScript |
## Links

- Dakera Server — self-hosted memory server
- Dakera Python SDK — low-level API client
- Documentation
- All integrations

## License

MIT © Dakera AI
## File details

Details for the file `autogen_dakera-0.1.0.tar.gz`.

### File metadata

- Download URL: autogen_dakera-0.1.0.tar.gz
- Upload date:
- Size: 5.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `fa294eb360ecb33276283e1b394ada5487899bd26e5579f941f059f42808505b` |
| MD5 | `8b67cb053e27fb7717885c59638e67a3` |
| BLAKE2b-256 | `5e37c267967c8261bf05fde6f0e381fbf5e9f41b1f8837ce629bce490f651370` |
### Provenance

The following attestation bundles were made for `autogen_dakera-0.1.0.tar.gz`:

Publisher: release.yml on Dakera-AI/dakera-autogen

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: autogen_dakera-0.1.0.tar.gz
- Subject digest: fa294eb360ecb33276283e1b394ada5487899bd26e5579f941f059f42808505b
- Sigstore transparency entry: 1524416673
- Sigstore integration time:
- Permalink: Dakera-AI/dakera-autogen@8279df53edf3d80a5fd3b0107c72b701f46d9580
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Dakera-AI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@8279df53edf3d80a5fd3b0107c72b701f46d9580
- Trigger Event: workflow_dispatch
## File details

Details for the file `autogen_dakera-0.1.0-py3-none-any.whl`.

### File metadata

- Download URL: autogen_dakera-0.1.0-py3-none-any.whl
- Upload date:
- Size: 4.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `9ef0c2aa7172596eaf3c5472014f8a35f753e424af3679ab311412915ee13dac` |
| MD5 | `5be848df98740f7bc8e03bcecb5db332` |
| BLAKE2b-256 | `35322d49d0bbc7252563ce77e75784dd05dac1db58b38c0e35f1cfd5560349ea` |
### Provenance

The following attestation bundles were made for `autogen_dakera-0.1.0-py3-none-any.whl`:

Publisher: release.yml on Dakera-AI/dakera-autogen

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: autogen_dakera-0.1.0-py3-none-any.whl
- Subject digest: 9ef0c2aa7172596eaf3c5472014f8a35f753e424af3679ab311412915ee13dac
- Sigstore transparency entry: 1524416681
- Sigstore integration time:
- Permalink: Dakera-AI/dakera-autogen@8279df53edf3d80a5fd3b0107c72b701f46d9580
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Dakera-AI
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@8279df53edf3d80a5fd3b0107c72b701f46d9580
- Trigger Event: workflow_dispatch