
AutoGen integration for the Dakera AI memory platform


autogen-dakera


Persistent, semantically-recalled memory for AutoGen agents, powered by Dakera.

Your AutoGen agents remember everything — across sessions, across restarts. Dakera handles embedding, storage, and retrieval server-side with no local model required.


Quick Start

Step 1 — Run Dakera

Dakera is a self-hosted memory server. Spin it up with Docker:

docker run -d \
  --name dakera \
  -p 3300:3300 \
  -e DAKERA_ROOT_API_KEY=dk-mykey \
  ghcr.io/dakera-ai/dakera:latest

For a production setup with persistent storage, use Docker Compose:

# Download and start
curl -sSfL https://raw.githubusercontent.com/Dakera-AI/dakera-deploy/main/docker-compose.yml \
  -o docker-compose.yml
DAKERA_API_KEY=dk-mykey docker compose up -d

# Verify it's running
curl http://localhost:3300/health

Full deployment guide: github.com/Dakera-AI/dakera-deploy
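If you start Dakera from a script, it can be convenient to wait for the health endpoint before creating agents. A minimal stdlib-only sketch, assuming the `/health` endpoint shown above returns HTTP 200 once the server is ready (the `wait_for_dakera` helper is illustrative, not part of the package):

```python
import time
import urllib.error
import urllib.request

def wait_for_dakera(url="http://localhost:3300/health", timeout=30.0):
    """Poll the health endpoint until the server answers 200 or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            # Server not up yet (connection refused / timed out); retry shortly.
            time.sleep(0.5)
    return False
```

Call `wait_for_dakera()` right after `docker compose up -d` and fail fast if it returns `False`.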

Step 2 — Install the integration

pip install autogen-dakera

Step 3 — Add memory to your agent

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_dakera import DakeraMemory

memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
)

model_client = OpenAIChatCompletionClient(model="gpt-4o")

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    memory=[memory],
)

# Agent now persists what it learns across sessions

Installation

# Core + integration
pip install autogen-dakera

# With AutoGen (if not already installed)
pip install "autogen-dakera[autogen]"

Requirements: Python ≥ 3.10 and a running Dakera server (see Step 1 above).


Configuration

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_url` | `str` | required | Dakera server URL (e.g. `http://localhost:3300`) |
| `api_key` | `str` | `""` | API key set via `DAKERA_ROOT_API_KEY` |
| `agent_id` | `str` | required | Logical identifier for this agent's memory |
| `min_importance` | `float` | `0.0` | Minimum importance score for recalled memories |
| `top_k` | `int` | `5` | Number of memories to surface per query |
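In practice you may prefer to read the connection settings from the environment rather than hard-code keys in source. A small sketch; the `DAKERA_API_URL`, `DAKERA_API_KEY`, and `DAKERA_AGENT_ID` variable names and the `dakera_settings` helper are illustrative conventions, not part of this package:

```python
import os

def dakera_settings():
    """Resolve DakeraMemory constructor kwargs from the environment,
    falling back to the local-development defaults used in this README."""
    return {
        "api_url": os.environ.get("DAKERA_API_URL", "http://localhost:3300"),
        "api_key": os.environ.get("DAKERA_API_KEY", ""),
        "agent_id": os.environ.get("DAKERA_AGENT_ID", "my-agent"),
    }
```

The resulting dict can be splatted into the constructor: `DakeraMemory(**dakera_settings())`.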

Examples

Multi-agent team with shared memory

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_dakera import DakeraMemory

async def main():
    shared_memory = DakeraMemory(
        api_url="http://localhost:3300",
        api_key="dk-mykey",
        agent_id="research-team",
        top_k=8,
    )

    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    researcher = AssistantAgent(
        name="researcher",
        model_client=model_client,
        memory=[shared_memory],
        system_message="You are a research expert. Remember key findings.",
    )

    analyst = AssistantAgent(
        name="analyst",
        model_client=model_client,
        memory=[shared_memory],
        system_message="You are a data analyst. Build on what the researcher found.",
    )

    team = RoundRobinGroupChat(
        [researcher, analyst],
        termination_condition=MaxMessageTermination(max_messages=6),
    )

    # First session — agents learn and store
    result = await team.run(task="Research AI memory architectures")
    print(result.messages[-1].content)

    # Later session — agents recall prior research
    result = await team.run(task="What do we know about transformer memory?")
    print(result.messages[-1].content)

asyncio.run(main())

Filtering memories by importance

memory = DakeraMemory(
    api_url="http://localhost:3300",
    api_key="dk-mykey",
    agent_id="my-agent",
    min_importance=0.7,  # only surface high-quality memories
    top_k=3,
)
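To make the interaction between `min_importance` and `top_k` concrete, here is a stand-alone sketch of the selection logic: filter below the threshold, then keep the `top_k` best. The dict shape and the `select_memories` helper are hypothetical; the real filtering happens server-side in Dakera:

```python
def select_memories(memories, min_importance=0.0, top_k=5):
    """Drop memories below the importance threshold, then return
    up to top_k of the remainder, highest importance first."""
    eligible = [m for m in memories if m["importance"] >= min_importance]
    eligible.sort(key=lambda m: m["importance"], reverse=True)
    return eligible[:top_k]
```

With `min_importance=0.7`, a memory scored 0.5 is never surfaced even when fewer than `top_k` results remain.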

How it works

  1. During conversation, AutoGen calls DakeraMemory.add() with new messages
  2. Dakera embeds the content server-side and stores it with a semantic vector
  3. Before each agent response, AutoGen calls DakeraMemory.query() — Dakera performs hybrid search and returns the most relevant past memories
  4. Memories are injected into the agent's context automatically
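The add/query cycle above can be sketched in pure Python. This toy uses word overlap as a stand-in for Dakera's server-side embeddings and hybrid search, so it only illustrates the control flow, not the retrieval quality:

```python
class ToyMemoryStore:
    """In-memory stand-in for the store/recall cycle described above."""

    def __init__(self, top_k=5):
        self.top_k = top_k
        self.entries = []  # (word_set, original_text) pairs

    def add(self, text):
        # Steps 1-2: "embed" (here: a word set) and store the message.
        self.entries.append((set(text.lower().split()), text))

    def query(self, text):
        # Step 3: rank stored memories by relevance and return the top_k.
        words = set(text.lower().split())
        ranked = sorted(self.entries, key=lambda e: len(e[0] & words), reverse=True)
        return [original for _, original in ranked[: self.top_k]]
```

Step 4 corresponds to AutoGen prepending the returned strings to the agent's context before the model is called.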

Related packages

| Package | Framework | Language |
| --- | --- | --- |
| `crewai-dakera` | CrewAI | Python |
| `langchain-dakera` | LangChain | Python |
| `llamaindex-dakera` | LlamaIndex | Python |
| `@dakera-ai/langchain` | LangChain.js | TypeScript |


License

MIT © Dakera AI
