# TraceAI AutoGen Instrumentation

OpenTelemetry instrumentation for Microsoft AutoGen, providing comprehensive tracing for multi-agent conversations, tool executions, and LLM interactions.

Supports both AutoGen v0.2 (legacy) and v0.4 (AgentChat).
## Installation

```bash
pip install traceAI-autogen
```
## AutoGen Versions

This package supports both major AutoGen versions:

AutoGen v0.2 (legacy):

```bash
pip install "autogen>=0.2.0"
```

AutoGen v0.4 (AgentChat):

```bash
pip install "autogen-agentchat>=0.4.0"
```
## Quick Start

### Set Environment Variables

```python
import os

os.environ["FI_API_KEY"] = "your-api-key"
os.environ["FI_SECRET_KEY"] = "your-secret-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"
```
### Register Tracer Provider

```python
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType

trace_provider = register(
    project_type=ProjectType.OBSERVE,
    project_name="autogen_app",
)
```
### Instrument AutoGen

```python
from traceai_autogen import AutogenInstrumentor

AutogenInstrumentor().instrument(tracer_provider=trace_provider)
```

Or use the convenience function:

```python
from traceai_autogen import instrument_autogen

instrumentor = instrument_autogen(tracer_provider=trace_provider)
```
## Examples

### AutoGen v0.2 (Legacy)

```python
import os

import autogen
from traceai_autogen import instrument_autogen

# Instrument AutoGen
instrument_autogen()

# Configure LLM
llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}],
    "temperature": 0,
}

# Create agents
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
    system_message="You are a helpful AI assistant.",
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Start conversation - automatically traced
chat_result = user_proxy.initiate_chat(
    assistant,
    message="Write a Python function to calculate fibonacci numbers.",
)
```
### AutoGen v0.4 (AgentChat)

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
from traceai_autogen import instrument_autogen

# Instrument AutoGen
instrument_autogen()


async def main():
    # Create model client
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create agents
    coder = AssistantAgent(
        name="coder",
        model_client=model_client,
        system_message="You are a Python expert. Write clean, efficient code.",
    )
    reviewer = AssistantAgent(
        name="reviewer",
        model_client=model_client,
        system_message="You review code and suggest improvements.",
    )

    # Create team
    team = RoundRobinGroupChat(
        participants=[coder, reviewer],
        termination_condition=MaxMessageTermination(max_messages=6),
    )

    # Run team task - automatically traced
    result = await team.run(task="Write a Python class for a binary search tree.")
    print(result.messages[-1].content)


asyncio.run(main())
```
### AutoGen v0.4 with Tools

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from traceai_autogen import instrument_autogen

instrument_autogen()


# Define tools
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny and 72F."


def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Search results for: {query}"


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        tools=[get_weather, search_web],
        system_message="You are a helpful assistant with access to tools.",
    )

    # Tool calls are automatically traced
    response = await agent.on_messages(
        [TextMessage(content="What's the weather in San Francisco?", source="user")],
        cancellation_token=CancellationToken(),
    )
    print(response.chat_message.content)


asyncio.run(main())
```
### AutoGen v0.4 with Streaming

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
from traceai_autogen import instrument_autogen

instrument_autogen()


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="writer",
        model_client=model_client,
        system_message="You are a creative writer.",
    )
    # A termination condition keeps the single-agent loop from running indefinitely
    team = RoundRobinGroupChat(
        participants=[agent],
        termination_condition=MaxMessageTermination(max_messages=2),
    )

    # Streaming is also traced
    async for message in team.run_stream(task="Write a haiku about coding"):
        print(message)


asyncio.run(main())
```
### Multi-Agent Code Review Team

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import SelectorGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
from traceai_autogen import instrument_autogen

instrument_autogen()


async def main():
    model = OpenAIChatCompletionClient(model="gpt-4o")

    # Create specialized agents
    architect = AssistantAgent(
        name="architect",
        model_client=model,
        system_message="You are a software architect. Design system architecture.",
    )
    developer = AssistantAgent(
        name="developer",
        model_client=model,
        system_message="You implement code based on architectural designs.",
    )
    tester = AssistantAgent(
        name="tester",
        model_client=model,
        system_message="You write tests and identify edge cases.",
    )

    # Selector-based team that picks the right agent for each turn
    team = SelectorGroupChat(
        participants=[architect, developer, tester],
        model_client=model,
        termination_condition=MaxMessageTermination(max_messages=10),
    )

    result = await team.run(
        task="Design and implement a REST API for a todo list application."
    )
    for msg in result.messages:
        print(f"{msg.source}: {msg.content[:100]}...")


asyncio.run(main())
```
## Features

### AutoGen v0.2 (Legacy) Features

- Agent Conversations: Traces `initiate_chat` calls between agents
- Reply Generation: Traces `generate_reply` for each agent response
- Function Execution: Traces tool/function calls via `execute_function`
- Full Context: Captures messages, responses, and metadata
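Instrumentation of this kind typically works by wrapping the methods listed above so that each call is recorded before delegating to the original. The following stdlib-only sketch illustrates that wrap-and-record pattern; the helper and the `DummyAgent` class are hypothetical stand-ins, not the package's actual implementation:

```python
import functools


def trace_method(cls, method_name, records):
    """Wrap cls.method_name so each call appends a record (illustrative sketch)."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        records.append({"method": method_name, "agent": getattr(self, "name", None)})
        return original(self, *args, **kwargs)

    setattr(cls, method_name, wrapper)
    return original  # keep a handle so the patch can be undone later


# Stand-in for an AutoGen agent class
class DummyAgent:
    def __init__(self, name):
        self.name = name

    def initiate_chat(self, recipient, message):
        return f"{self.name} -> {recipient}: {message}"


calls = []
trace_method(DummyAgent, "initiate_chat", calls)
result = DummyAgent("user_proxy").initiate_chat("assistant", "hi")
print(result)   # user_proxy -> assistant: hi
print(calls)    # [{'method': 'initiate_chat', 'agent': 'user_proxy'}]
```

Keeping a reference to the original method is what makes a later `uninstrument()` possible: the patch is undone by assigning the original back.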
### AutoGen v0.4 (AgentChat) Features

- Agent Runs: Traces `on_messages` for all agent types
- Team Orchestration: Traces `run` and `run_stream` for teams
- Tool Execution: Automatic tracing of tool function calls
- Streaming Support: Full tracing for streaming responses
- Handoffs: Traces agent handoffs in Swarm teams
- Token Usage: Captures token metrics from responses
## Traced Attributes

### Agent Spans

| Attribute | Description |
|---|---|
| `autogen.span_kind` | Type of span (`agent_run`, `team_run`, `tool_call`) |
| `autogen.agent.name` | Agent name |
| `autogen.agent.type` | Agent class name |
| `autogen.agent.tool_count` | Number of tools available |
| `autogen.agent.has_memory` | Whether the agent has memory |
| `gen_ai.request.model` | Model name |
### Team Spans

| Attribute | Description |
|---|---|
| `autogen.team.type` | Team class name |
| `autogen.team.participant_count` | Number of participants |
| `autogen.team.participants` | JSON list of agent names |
| `autogen.team.max_turns` | Maximum turns configured |
| `autogen.team.termination_condition` | Termination condition type |
### Task/Run Spans

| Attribute | Description |
|---|---|
| `autogen.run.id` | Unique run identifier |
| `autogen.run.method` | Method name (`run`, `run_stream`) |
| `autogen.task.content` | Task/prompt content |
| `autogen.task.message_count` | Number of messages |
| `autogen.task.stop_reason` | Why the task stopped |
### Tool Spans

| Attribute | Description |
|---|---|
| `autogen.tool.name` | Tool function name |
| `autogen.tool.description` | Tool description |
| `autogen.tool.args` | Tool arguments (JSON) |
| `autogen.tool.result` | Tool return value |
| `autogen.tool.is_error` | Whether the tool call failed |
| `autogen.tool.duration_ms` | Execution time in milliseconds |
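To show how these tool attributes fit together, here is a small stdlib-only sketch that times a tool call and collects the attributes named in the table above. The `run_tool_with_span` helper is hypothetical, for illustration only; the instrumentor records these attributes automatically:

```python
import json
import time


def run_tool_with_span(tool, **kwargs):
    """Call a tool function and return (result, attrs) using the attribute
    names from the table above. Hypothetical helper, not part of traceai_autogen."""
    attrs = {
        "autogen.tool.name": tool.__name__,
        "autogen.tool.description": tool.__doc__,
        "autogen.tool.args": json.dumps(kwargs),
    }
    start = time.perf_counter()
    try:
        result = tool(**kwargs)
        attrs["autogen.tool.result"] = str(result)
        attrs["autogen.tool.is_error"] = False
    except Exception as exc:
        result = None
        attrs["autogen.tool.result"] = repr(exc)
        attrs["autogen.tool.is_error"] = True
    attrs["autogen.tool.duration_ms"] = (time.perf_counter() - start) * 1000
    return result, attrs


def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny."


result, attrs = run_tool_with_span(get_weather, city="Paris")
print(attrs["autogen.tool.name"])      # get_weather
print(attrs["autogen.tool.is_error"])  # False
```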
### Usage Metrics (GenAI Conventions)

| Attribute | Description |
|---|---|
| `gen_ai.usage.input_tokens` | Input/prompt tokens |
| `gen_ai.usage.output_tokens` | Output/completion tokens |
| `gen_ai.usage.total_tokens` | Total tokens used |
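Because these counters follow the GenAI conventions, they can be summed across the spans of a run to get per-run cost figures. A minimal sketch, with the span-attribute dicts as hypothetical sample data:

```python
def total_usage(spans):
    """Sum GenAI token counters over a list of span-attribute dicts."""
    totals = {"input": 0, "output": 0}
    for attrs in spans:
        totals["input"] += attrs.get("gen_ai.usage.input_tokens", 0)
        totals["output"] += attrs.get("gen_ai.usage.output_tokens", 0)
    totals["total"] = totals["input"] + totals["output"]
    return totals


# Hypothetical attribute dicts from two LLM-call spans in one run
spans = [
    {"gen_ai.usage.input_tokens": 120, "gen_ai.usage.output_tokens": 45},
    {"gen_ai.usage.input_tokens": 300, "gen_ai.usage.output_tokens": 80},
]
print(total_usage(spans))  # {'input': 420, 'output': 125, 'total': 545}
```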
### Error Attributes

| Attribute | Description |
|---|---|
| `autogen.is_error` | Whether an error occurred |
| `autogen.error.type` | Exception type |
| `autogen.error.message` | Error message |
## Model Provider Detection

The instrumentor automatically detects model providers from the model name:

| Pattern | Provider |
|---|---|
| `gpt-*`, `o1-*`, `o3-*` | `openai` |
| `claude-*` | `anthropic` |
| `gemini*` | |
| `mistral*` | `mistral` |
| `deepseek*` | `deepseek` |
| `groq*` | `groq` |
| `ollama*` | `ollama` |
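The patterns above are plain prefix matches. As an illustration only (the function name is hypothetical and this is not the package's actual code), the table's rules with explicit provider values can be expressed as:

```python
from typing import Optional

# Prefix rules taken from the table above (rows with an explicit provider value)
PROVIDER_PREFIXES = [
    (("gpt-", "o1-", "o3-"), "openai"),
    (("claude-",), "anthropic"),
    (("mistral",), "mistral"),
    (("deepseek",), "deepseek"),
    (("groq",), "groq"),
    (("ollama",), "ollama"),
]


def detect_provider(model_name: str) -> Optional[str]:
    """Return the provider for a model name, or None if no prefix matches."""
    name = model_name.lower()
    for prefixes, provider in PROVIDER_PREFIXES:
        if name.startswith(prefixes):
            return provider
    return None


print(detect_provider("gpt-4o"))             # openai
print(detect_provider("claude-3-5-sonnet"))  # anthropic
print(detect_provider("llama-3-70b"))        # None
```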
## Uninstrumenting

```python
from traceai_autogen import AutogenInstrumentor

instrumentor = AutogenInstrumentor()
instrumentor.instrument()

# ... use AutoGen ...

# Remove instrumentation
instrumentor.uninstrument()
```
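If you only want tracing for a specific block of code, the instrument/uninstrument pair fits naturally into a context manager. A generic sketch (the `instrumented` helper is hypothetical, demonstrated here with a stub so the call order is visible):

```python
from contextlib import contextmanager


@contextmanager
def instrumented(instrumentor, **kwargs):
    """Instrument on entry and always uninstrument on exit (hypothetical helper)."""
    instrumentor.instrument(**kwargs)
    try:
        yield instrumentor
    finally:
        instrumentor.uninstrument()


# Stub standing in for AutogenInstrumentor, recording the call order
class StubInstrumentor:
    def __init__(self):
        self.calls = []

    def instrument(self, **kwargs):
        self.calls.append("instrument")

    def uninstrument(self):
        self.calls.append("uninstrument")


stub = StubInstrumentor()
with instrumented(stub):
    stub.calls.append("work")
print(stub.calls)  # ['instrument', 'work', 'uninstrument']
```

The `try/finally` guarantees `uninstrument()` runs even if the traced code raises.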
## Integration with FutureAGI

```python
from fi_instrumentation import register
from fi_instrumentation.fi_types import ProjectType
from traceai_autogen import AutogenInstrumentor

# Register with FutureAGI
trace_provider = register(
    api_key="your-api-key",
    project_type=ProjectType.OBSERVE,
    project_name="my-autogen-app",
)

# Instrument AutoGen
AutogenInstrumentor().instrument(tracer_provider=trace_provider)
```
## Running Tests

```bash
# Run all tests
pytest tests/

# Run with verbose output
pytest tests/ -v

# Run a specific test file
pytest tests/test_v04_wrapper.py -v
```
## Requirements

- Python >= 3.9
- opentelemetry-api >= 1.0.0
- opentelemetry-sdk >= 1.0.0
- fi-instrumentation-otel >= 0.1.11

For v0.2: `autogen >= 0.2.0`. For v0.4: `autogen-agentchat >= 0.4.0`.
## License

MIT License - see LICENSE file for details.