Microsoft AutoGen integration for MCAL - Goal-aware memory for multi-agent systems

mcal-ai-autogen

Microsoft AutoGen integration for MCAL (Memory-Context Alignment Layer), bringing goal-aware memory to AutoGen agents.

Installation

pip install mcal-ai-autogen

# With AutoGen dependencies
pip install "mcal-ai-autogen[autogen]"

Quick Start

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from mcal import MCAL
from mcal_autogen import MCALMemory

# Initialize MCAL
mcal = MCAL(llm_provider="openai")

# Create MCAL-backed memory
memory = MCALMemory(mcal, user_id="user_123")

# Create an agent with MCAL memory
model_client = OpenAIChatCompletionClient(model="gpt-4")
agent = AssistantAgent(
    name="data_engineer",
    model_client=model_client,
    memory=[memory],
    system_message="You are a helpful data engineering assistant.",
)

# Use the agent — MCAL automatically tracks context and decisions
# (call from within an async function / running event loop)
result = await agent.run(task="How should I set up my ETL pipeline?")

What's New in 0.5.0

  • Query-Aware Subgraph Retrieval — New seed-and-expand pipeline replaces 6 query-blind retrieval paths with a single query-aware pass. Reduces context tokens by 53% at 1020 turns while improving DRR by 4.5pp.
  • QuerySubgraph dataclass — New public API for structured subgraph results, partitioned by node type (goals, decisions, facts, entities, actions) with structural edge resolution.
  • Adjacency index — Lazy-built bidirectional adjacency index on UnifiedGraph enables O(1) neighbor lookups for graph traversal.
  • Improved DRR at scale — CTO-1020 DRR improved from 85.3% to 89.9% (+4.6pp); CTO-300 improved from 92.2% to 94.4% (+2.2pp).
  • LoCoMo-10 Evaluation — Full 10-conversation, 1,540 QA binary evaluation: 46.1% overall accuracy.
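The bidirectional adjacency index mentioned above can be sketched roughly as follows. This is an illustrative stand-in (hypothetical class and method names, not MCAL's actual UnifiedGraph API) showing how indexing edges in both directions yields constant-time neighbor lookups:

```python
from collections import defaultdict

class AdjacencyIndex:
    """Illustrative bidirectional adjacency index, not MCAL's implementation."""

    def __init__(self, edges):
        # edges: iterable of (src, dst) pairs
        self._out = defaultdict(set)
        self._in = defaultdict(set)
        for src, dst in edges:
            self._out[src].add(dst)
            self._in[dst].add(src)

    def neighbors(self, node):
        # Two O(1) dict lookups cover both edge directions
        return self._out[node] | self._in[node]

idx = AdjacencyIndex([("goal_1", "decision_a"), ("decision_a", "fact_x")])
print(sorted(idx.neighbors("decision_a")))  # → ['fact_x', 'goal_1']
```

Building the index once up front trades a little memory for fast traversal during seed-and-expand retrieval.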

What's New in 0.4.1

  • First-Class FACT Nodes — 3 new typed edges (measures, evidence_for, quantifies) improve fact retrieval; quantitative queries automatically boost fact content
  • Importance Scoring Boost — FACT nodes with numeric values score higher in retrieval
  • search_facts() API — Filter facts by category and value range on UnifiedGraph
  • Version Metadata Fix — __version__ now correctly reports 0.4.1 (was stuck at 0.2.9)

What's New in 0.4.0

  • Graph Compaction Fixes — Improved retrieval quality with facts-in-context, expanded edge types, chunk boost scoring
  • CTO-1020 Benchmark — 85.3% decision retention over 1020 turns, 95.6% cross-era recall, 88% token reduction
  • Statistical Rigor — Multi-run validation with Fisher's exact test, Wilson score confidence intervals

What's New in 0.3.0

  • Expanded Relationship Edge Types — 10 new edge types (family, friend, colleague, likes, prefers, lives_in, works_at, etc.) for richer relationship graphs
  • Key Facts & Entities in Search Context — search() now surfaces extracted facts and background entities directly in result.context
  • Improved Chunk Retrieval — More results returned with equal weighting; conversation excerpts prioritized in context
Older releases

What's New in 0.2.9

  • Configurable Extraction Profiles — Choose decision, conversational, or comprehensive
  • Hybrid Retrieval with ChunkStore — Graph traversal + embedding search for maximum recall
  • FACT/PERSON Node Protection — Graph compaction preserves factual and identity nodes
# Pass extraction options to MCAL
mcal = MCAL(
    llm_provider="anthropic",
    extraction_profile="decision",
    enable_chunk_store=True,
)
memory = MCALMemory(mcal, user_id="user_123")

Features

Goal-Aware Memory

MCAL's unique value is understanding your project's goals and maintaining context across conversations:

mcal = MCAL(llm_provider="anthropic")
memory = MCALMemory(mcal)

# Add relevant context
from autogen_core.memory import MemoryContent
await memory.add(MemoryContent(
    content="We decided to use Kafka for streaming",
    mime_type="text/plain",
    metadata={"category": "architecture", "decision": True}
))

# Query returns goal-relevant results
results = await memory.query("What messaging system should I use?")
# Returns Kafka decision with goal-relevance scoring

Decision Tracking

Track architectural and project decisions automatically:

memory = MCALMemory(
    mcal,
    enable_goal_tracking=True,  # Extract goals from content
    include_decisions=True,      # Include decisions in search
)

# Decisions are automatically tracked
await memory.add(MemoryContent(
    content="After evaluating options, we chose PostgreSQL for its JSON support",
    mime_type="text/plain"
))

# Query finds relevant decisions
results = await memory.query("database selection")

User Isolation

Support multi-tenant scenarios with user isolation:

# Create separate memories for different users
user1_memory = MCALMemory(mcal, user_id="alice")
user2_memory = MCALMemory(mcal, user_id="bob")

# Each user has isolated memory
await user1_memory.add(MemoryContent(content="Alice prefers Python"))
await user2_memory.add(MemoryContent(content="Bob prefers Rust"))

# Queries only return user-specific results
results = await user1_memory.query("language preference")
# Only returns Alice's preference

TTL Support

Configure time-to-live for memory entries:

memory = MCALMemory(mcal, default_ttl_minutes=60)  # 1 hour default

# Or per-entry TTL via metadata
await memory.add(MemoryContent(
    content="Temporary context",
    mime_type="text/plain",
    metadata={"ttl_minutes": 15}  # 15 minute TTL
))
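The expiry behavior described above can be illustrated with a minimal sketch. This is one typical way per-entry TTL is implemented (stand-in class, not MCAL's internals): each entry records an absolute deadline, and reads filter out entries whose deadline has passed.

```python
import time

class TTLStore:
    """Minimal sketch of default + per-entry TTL expiry (illustrative only)."""

    def __init__(self, default_ttl_minutes=None):
        self.default_ttl = default_ttl_minutes
        self._items = []  # list of (content, expires_at or None)

    def add(self, content, ttl_minutes=None):
        # Per-entry TTL overrides the store-wide default
        ttl = ttl_minutes if ttl_minutes is not None else self.default_ttl
        expires_at = time.monotonic() + ttl * 60 if ttl is not None else None
        self._items.append((content, expires_at))

    def get_all(self):
        # Keep non-expiring entries and those whose deadline is in the future
        now = time.monotonic()
        return [c for c, exp in self._items if exp is None or exp > now]

store = TTLStore(default_ttl_minutes=60)
store.add("long-lived context")
store.add("already expired", ttl_minutes=-1)  # negative TTL expires immediately
print(store.get_all())  # → ['long-lived context']
```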

Thread Safety

All operations are protected by RLock — safe for concurrent access from multiple agents.
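The RLock pattern referred to above looks roughly like this (an illustrative stand-in store, not MCALMemory itself): every public method acquires the same reentrant lock, so concurrent callers serialize safely and a method can call another locked method without deadlocking.

```python
import threading

class LockedStore:
    """Sketch of RLock-guarded state shared by multiple agents (illustrative)."""

    def __init__(self):
        self._lock = threading.RLock()
        self._items = []

    def add(self, item):
        with self._lock:
            self._items.append(item)

    def count(self):
        with self._lock:  # reentrant: safe even if called while holding the lock
            return len(self._items)

store = LockedStore()
threads = [
    threading.Thread(target=lambda: [store.add(i) for i in range(100)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.count())  # → 400
```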

Integration with AutoGen Features

With AssistantAgent

from autogen_agentchat.agents import AssistantAgent

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    memory=[memory],  # MCAL memory integrates seamlessly
)

With Teams

from autogen_agentchat.teams import RoundRobinGroupChat

# Share MCAL memory across team members
shared_memory = MCALMemory(mcal, user_id="team_alpha")

coder = AssistantAgent("coder", model_client=model_client, memory=[shared_memory])
reviewer = AssistantAgent("reviewer", model_client=model_client, memory=[shared_memory])

team = RoundRobinGroupChat([coder, reviewer])

Context Window Management

MCAL automatically manages context relevance:

memory = MCALMemory(
    mcal,
    max_results=10,           # Limit results per query
    score_threshold=0.5,      # Minimum relevance score
)

# update_context adds relevant memories to the agent's context
result = await memory.update_context(model_context)

API Reference

MCALMemory

class MCALMemory(Memory):
    def __init__(
        self,
        mcal: MCAL,
        user_id: str = "default",
        name: str = "mcal_memory",
        max_results: int = 10,
        score_threshold: float = 0.0,
        default_ttl_minutes: Optional[float] = None,
        enable_goal_tracking: bool = True,
        include_decisions: bool = True,
    ): ...

Key Methods

Method                          Async  Description
add(content)                    Yes    Add MemoryContent to memory
query(query)                    Yes    Search for relevant memories; returns MemoryQueryResult
update_context(model_context)   Yes    Update the agent's context with relevant memories
clear()                         Yes    Clear all memory entries
close()                         Yes    Clean up resources

Helper Methods

Method                         Description
add_text(text, metadata=None)  Convenience wrapper for adding plain text
query_text(query)              Convenience wrapper returning a list of strings
item_count                     Property returning the number of stored items
get_all_items()                Return all non-expired memory items
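The helper methods are thin wrappers over the core API. A self-contained sketch of that wrapper pattern (stand-in class with synchronous, simplified internals; MCALMemory's real add/query are async and relevance-scored):

```python
class MiniMemory:
    """Stand-in illustrating the convenience-wrapper layer (not MCALMemory)."""

    def __init__(self):
        self._entries = []  # list of (text, metadata) pairs

    # --- core API (simplified) ---
    def add(self, text, metadata=None):
        self._entries.append((text, metadata or {}))

    def query(self, query):
        # Naive substring match standing in for real retrieval
        return [(t, m) for t, m in self._entries if query.lower() in t.lower()]

    # --- helper layer: thin wrappers over the core API ---
    def add_text(self, text, metadata=None):
        self.add(text, metadata)

    def query_text(self, query):
        return [t for t, _ in self.query(query)]

    @property
    def item_count(self):
        return len(self._entries)

mem = MiniMemory()
mem.add_text("We chose Kafka for streaming")
print(mem.query_text("kafka"), mem.item_count)
```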

Requirements

  • Python >= 3.11
  • mcal-ai >= 0.2.0
  • autogen-core >= 0.4.0 (optional — gracefully degrades if not installed)
  • autogen-agentchat >= 0.4.0 (optional)
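Graceful degradation of optional dependencies is typically done with an availability check at import time. A generic sketch of that pattern (illustrative; not necessarily how mcal-ai-autogen implements it):

```python
import importlib.util

# Probe for the optional dependency without raising ImportError
HAS_AUTOGEN = importlib.util.find_spec("autogen_core") is not None

if HAS_AUTOGEN:
    from autogen_core.memory import Memory  # real base class
else:
    class Memory:  # minimal stub so the package still imports without AutoGen
        pass

print(isinstance(HAS_AUTOGEN, bool))  # → True
```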

License

MIT License
