Agent Transparency
A comprehensive Python library for tracking the input, thought process, and output of AI agents. It provides fine-grained visibility into agent behavior for debugging, auditing, and understanding agent decisions.
Features
- Comprehensive Event Tracking: Log inputs, thinking processes, LLM calls, graph executions, outputs, and errors
- Multiple Output Destinations: Write to files (JSONL), console, or Kafka streams
- LangGraph Integration: Built-in support for tracking LangGraph node executions and state transitions
- Async & Sync APIs: Both asynchronous and synchronous interfaces for flexibility
- Context Management: Track related events across sessions and conversations
- Real-time Viewer: Optional web-based viewer for monitoring agent activity in real-time
- Lightweight: Minimal dependencies for core functionality
- Type-Safe: Full type hints and dataclass-based event structures
Installation
Basic Installation
pip install agent-transparency
With Optional Features
# Install with UI viewer support
pip install agent-transparency[ui]
# Install with Kafka streaming support
pip install agent-transparency[kafka]
# Install with all optional features
pip install agent-transparency[all]
# Install with development dependencies
pip install agent-transparency[dev]
Quick Start
from transparency import (
    TransparencyManager,
    create_transparency_manager,
    ThinkingPhase,
)

# Create a transparency manager
transparency = create_transparency_manager(
    agent_id="my-agent",
    file_path="./logs",
)

# Start the manager
await transparency.start()

# Log events
await transparency.log_input_received("User asks: What's the weather?")

await transparency.log_thinking_step(
    ThinkingPhase.ANALYSIS,
    "Analyzing user request for weather information"
)

await transparency.log_llm_request_start(
    model_name="gpt-4",
    prompt="Get weather for user location"
)

await transparency.log_output_generated(
    "The weather is sunny, 72°F",
    target="user"
)

# Stop when done (flushes remaining events)
await transparency.stop()
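Because the API is asynchronous, the calls above need to run inside an event loop when used in a standalone script. A minimal sketch using the standard asyncio module (the main wrapper is illustrative, not part of the library):

import asyncio

from transparency import create_transparency_manager

async def main():
    transparency = create_transparency_manager(agent_id="my-agent", file_path="./logs")
    await transparency.start()
    try:
        await transparency.log_input_received("User asks: What's the weather?")
        await transparency.log_output_generated("The weather is sunny, 72°F", target="user")
    finally:
        # Stop even on error so buffered events are flushed to disk
        await transparency.stop()

asyncio.run(main())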
Event Types
The library supports comprehensive event tracking across the entire agent lifecycle:
Lifecycle Events
- AGENT_STARTUP - Agent initialization
- AGENT_SHUTDOWN - Agent termination
Input Events
- INPUT_RECEIVED - Raw input received
- INPUT_PARSED - Input parsed and validated
- INPUT_VALIDATED - Input validation complete
- INPUT_REJECTED - Input rejected
Thinking Events
- THINKING_START - Begin thinking process
- THINKING_STEP - Individual reasoning step
- THINKING_DECISION - Decision made
- THINKING_END - Thinking process complete
LangGraph Events
- GRAPH_INVOKE_START - Graph execution starts
- GRAPH_NODE_ENTER - Entering a node
- GRAPH_NODE_EXIT - Exiting a node
- GRAPH_CONDITIONAL_ROUTE - Conditional routing decision
- GRAPH_INVOKE_END - Graph execution complete
LLM Events
- LLM_REQUEST_START - LLM call initiated
- LLM_RESPONSE_RECEIVED - LLM response received
- LLM_ERROR - LLM call failed
Output Events
- OUTPUT_GENERATED - Output created
- OUTPUT_DISPATCHED - Output sent to target
Action Events
- ACTION_PLANNED - Action planned
- ACTION_DISPATCHED - Action sent
- ACTION_COMPLETED - Action finished successfully
- ACTION_FAILED - Action failed
State Events
- STATE_SNAPSHOT - Full state capture
- STATE_TRANSITION - State changed
Error Events
- ERROR_OCCURRED - Error detected
- ERROR_RECOVERED - Error recovered
- ERROR_FATAL - Fatal error
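As a rough illustration of how a single request maps onto these event types, here is a sketch that uses only the logging helpers shown in the examples below (the weather strings are placeholders and the manager is assumed to be started):

# INPUT_RECEIVED
await transparency.log_input_received("What's the weather in Berlin?")

# THINKING_START and THINKING_STEP
await transparency.log_thinking_start("Handle weather request")
await transparency.log_thinking_step(
    ThinkingPhase.ANALYSIS,
    "User wants current weather for a named city"
)

# LLM_REQUEST_START
await transparency.log_llm_request_start(model_name="gpt-4", prompt="...")

# THINKING_END and OUTPUT_GENERATED
await transparency.log_thinking_end("Weather request handled")
await transparency.log_output_generated("It is sunny in Berlin", target="user")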
Usage Examples
LangGraph Integration
from transparency import TransparencyManager, LangGraphNodeType

transparency = TransparencyManager(agent_id="langgraph-agent")
await transparency.start()

# Track graph execution
await transparency.log_graph_invoke_start(initial_state={"messages": []})

# Track node execution
await transparency.log_node_enter(
    node_name="planner",
    node_type=LangGraphNodeType.PLANNER,
    state_before=state
)

# Use context manager for automatic entry/exit tracking
async with transparency.trace_node("executor", LangGraphNodeType.EXECUTOR, state):
    # Your node logic here
    result = await execute_plan()

await transparency.log_graph_invoke_end(final_state=state)
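When tracking a node manually with log_node_enter, the span is closed with a matching exit call. A sketch, assuming the async manager exposes the same log_node_exit signature used by the synchronous wrapper shown later in this README:

# Close the manually opened node span (positional args mirror the sync example:
# node name, node type, state before the node ran, state after it ran)
await transparency.log_node_exit("planner", LangGraphNodeType.PLANNER, state, new_state)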
LLM Call Tracking
# Manual tracking
await transparency.log_llm_request_start(
    model_name="gpt-4",
    prompt="Analyze this data",
    system_prompt="You are a data analyst"
)

response = await llm.generate(prompt)

await transparency.log_llm_response(
    model_name="gpt-4",
    prompt="Analyze this data",
    response=response.text,
    input_tokens=response.usage.input_tokens,
    output_tokens=response.usage.output_tokens,
    latency_ms=response.latency
)

# Or use context manager
async with transparency.trace_llm_call("gpt-4", prompt) as ctx:
    response = await llm.generate(prompt)
    ctx["response"] = response.text
Context Management
# Create a context for tracking related events
context = transparency.create_context(
    session_id="session-123",
    conversation_id="conv-456"
)
transparency.set_context(context)

# All subsequent events will include this context
await transparency.log_input_received("Hello")

# Or use context manager
async with transparency.context_scope(session_id="session-123"):
    await transparency.log_input_received("Hello")
    await transparency.log_thinking_step(...)
# Context automatically restored after block
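A common pattern is to open a fresh context scope per incoming request so every event from that turn carries the same identifiers. A sketch (the request object and handle_request function are placeholders, not part of the library):

async def handle_request(request):
    # Everything logged inside this block is tagged with the same session ID,
    # so related events can be correlated later in the JSONL output.
    async with transparency.context_scope(session_id=request.session_id):
        await transparency.log_input_received(request.text)
        # ... agent logic for this turn ...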
Thinking Process Tracking
# Track detailed reasoning
await transparency.log_thinking_start("Processing user request")

await transparency.log_thinking_step(
    phase=ThinkingPhase.PERCEPTION,
    description="Understanding user intent",
    reasoning="User is asking about weather conditions"
)

await transparency.log_thinking_step(
    phase=ThinkingPhase.PLANNING,
    description="Planning data retrieval",
    considerations=[
        "Need user location",
        "Check weather API availability",
        "Format response appropriately"
    ]
)

await transparency.log_thinking_decision(
    decision="Fetch weather data from OpenWeatherMap API",
    rationale="Most reliable source with current conditions",
    alternatives=[
        {"option": "Weather.gov", "reason_rejected": "Limited coverage"},
        {"option": "AccuWeather", "reason_rejected": "Requires premium API"}
    ],
    confidence=0.95
)

await transparency.log_thinking_end("Request processing complete")
Error Tracking
try:
    result = await risky_operation()
except Exception as e:
    await transparency.log_error(
        error_type="APIError",
        message=str(e),
        exception=e,
        context={"operation": "weather_fetch", "retry_count": 3},
        recoverable=True
    )
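For failures the agent cannot recover from, a common pattern is to log the error and then re-raise so the caller still sees the exception. A sketch using the same log_error parameters shown above:

try:
    result = await risky_operation()
except Exception as e:
    await transparency.log_error(
        error_type=type(e).__name__,
        message=str(e),
        exception=e,
        recoverable=False   # mark the failure as non-recoverable
    )
    raise   # propagate so the caller can handle it or fail loudly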
Synchronous Usage
from transparency import (
    LangGraphNodeType,
    SyncTransparencyManager,
    TransparencyManager,
)

# Wrap async manager for sync contexts (like LangGraph nodes)
async_manager = TransparencyManager(agent_id="sync-agent")
sync_transparency = SyncTransparencyManager(async_manager)

# Use in synchronous functions
def my_langgraph_node(state):
    sync_transparency.log_node_enter("my_node", LangGraphNodeType.CUSTOM, state)
    new_state = ...  # do work that produces the updated state
    sync_transparency.log_node_exit("my_node", LangGraphNodeType.CUSTOM, state, new_state)
    return new_state
Configuration
Output Destinations
from transparency import (
    OutputDestination,
    Severity,
    TransparencyConfig,
    TransparencyManager,
)

config = TransparencyConfig(
    enabled=True,
    destinations=[
        OutputDestination.FILE,
        OutputDestination.CONSOLE,
        OutputDestination.KAFKA
    ],

    # File settings
    file_path="./transparency_logs",

    # Kafka settings (requires aiokafka)
    kafka_topic="agent.transparency",
    kafka_broker=kafka_broker_instance,

    # Filtering
    min_severity=Severity.DEBUG,
    event_type_filter=[],  # Empty = all events

    # Performance
    buffer_size=100,
    flush_interval_seconds=1.0,
    async_mode=True,

    # Formatting
    pretty_print=True,
    include_stack_traces=True
)

transparency = TransparencyManager(agent_id="my-agent", config=config)
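For production, a leaner variant of the same configuration might log only to file at INFO severity and skip pretty-printing. A sketch reusing the fields shown above (the specific values are suggestions, not library defaults):

config = TransparencyConfig(
    enabled=True,
    destinations=[OutputDestination.FILE],  # file only, no console noise
    file_path="./transparency_logs",
    min_severity=Severity.INFO,             # drop TRACE/DEBUG events
    buffer_size=500,                        # larger buffer for higher event volume
    flush_interval_seconds=5.0,
    async_mode=True,
    pretty_print=False,                     # compact one-line JSONL records
    include_stack_traces=True
)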
Severity Levels
Events are logged with severity levels for filtering:
- TRACE - Finest-grained debugging
- DEBUG - Detailed diagnostic information
- INFO - General informational messages
- WARNING - Warning messages
- ERROR - Error events
- CRITICAL - Critical failures
# Only log INFO and above
config.min_severity = Severity.INFO
Real-time Viewer
The library includes an optional web-based viewer for monitoring agent activity in real-time.
Starting the Viewer
# Watch a log file
transparency-viewer --log-path ./logs/my-agent_transparency.jsonl
# Or from Kafka
transparency-viewer --kafka-bootstrap localhost:9092 --kafka-topic agent.my-agent.transparency
# Custom port
transparency-viewer --log-path ./logs/my-agent_transparency.jsonl --port 8080
Then open http://localhost:8765 in your browser.
Programmatic Usage
import asyncio

from viewer.viewer_server import TransparencyViewerServer, ServerConfig, SourceType

# File-based viewer
config = ServerConfig(
    port=8765,
    source_type=SourceType.FILE,
    log_path="./logs/agent_transparency.jsonl"
)

server = TransparencyViewerServer(config)
await server.start()

# Keep running
await asyncio.Event().wait()
Output Format
Events are logged in JSONL format (newline-delimited JSON):
{
  "event_type": "thinking.step",
  "metadata": {
    "event_id": "550e8400-e29b-41d4-a716-446655440000",
    "timestamp": "2024-01-10T15:30:45.123456Z",
    "agent_id": "my-agent",
    "session_id": "session-123",
    "conversation_id": "conv-456",
    "sequence_number": 42,
    "severity": "debug",
    "tags": ["thinking", "analysis"]
  },
  "payload": {
    "phase": "analysis",
    "description": "Analyzing user request",
    "reasoning": "User wants weather information",
    "considerations": ["Location needed", "API selection"],
    "confidence_score": 0.95
  }
}
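Because the output is plain JSONL, it can be post-processed with nothing beyond the Python standard library. A sketch that counts events per type (the log path is an example, matching the file name used in the viewer section above):

import json
from collections import Counter

counts = Counter()
with open("./logs/my-agent_transparency.jsonl") as f:
    for line in f:
        event = json.loads(line)
        counts[event["event_type"]] += 1

for event_type, n in counts.most_common():
    print(f"{event_type}: {n}")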
Best Practices
- Start/Stop the Manager: Always call start() and stop() to ensure proper buffering and flushing
- Use Context Managers: Leverage trace_node(), trace_llm_call(), and context_scope() for automatic tracking
- Set Contexts: Use session and conversation IDs to correlate related events
- Filter Appropriately: Set min_severity and event_type_filter in production to reduce overhead
- Async Mode: Keep async_mode=True for better performance in high-throughput scenarios
- Monitor Buffer Size: Adjust buffer_size based on event volume
Requirements
- Python 3.12+
- Core: No external dependencies
- UI Viewer: aiohttp>=3.8.0
- Kafka: aiokafka>=0.8.0
License
MIT License - see LICENSE file for details
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Links
- PyPI: https://pypi.org/project/agent-transparency/
- GitHub: https://github.com/agentsquad/transparency
- Issues: https://github.com/agentsquad/transparency/issues
Author
Agent Squad - shine.six.s6@gmail.com
File details
Details for the file agent_transparency-0.0.1.tar.gz.
File metadata
- Download URL: agent_transparency-0.0.1.tar.gz
- Upload date:
- Size: 32.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b7b5ce4f08c2fc2dbd2d08dc536ec00b47b41cc7e2b36c21e93b87430b6dea8e |
| MD5 | 74f7c95654e7899ca971ee0f4b2cd5d3 |
| BLAKE2b-256 | 9b246d69450b5801dfda71fb116d44949b668fe1ed4f7b38be832e8aec7996f2 |
Provenance
The following attestation bundles were made for agent_transparency-0.0.1.tar.gz:
Publisher: publish.yml on lustre-lab/agent-transparency
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agent_transparency-0.0.1.tar.gz
- Subject digest: b7b5ce4f08c2fc2dbd2d08dc536ec00b47b41cc7e2b36c21e93b87430b6dea8e
- Sigstore transparency entry: 813209372
- Sigstore integration time:
- Permalink: lustre-lab/agent-transparency@d810bed61f4236329a7bcfb16c3942ac0cd91318
- Branch / Tag: refs/heads/main
- Owner: https://github.com/lustre-lab
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@d810bed61f4236329a7bcfb16c3942ac0cd91318
- Trigger Event: workflow_dispatch
File details
Details for the file agent_transparency-0.0.1-py3-none-any.whl.
File metadata
- Download URL: agent_transparency-0.0.1-py3-none-any.whl
- Upload date:
- Size: 33.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0e4744ca05a33404c64aa8e3b307ae1c809b8b71560c9d4661eb0fd7f36bc140 |
| MD5 | 79bcd91a60a82d10185365aa1b06d436 |
| BLAKE2b-256 | 54d223bf6e2e840d1a838a564d1d03d88f472058bce8f8ff7c3ae72266507cd0 |
Provenance
The following attestation bundles were made for agent_transparency-0.0.1-py3-none-any.whl:
Publisher: publish.yml on lustre-lab/agent-transparency
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: agent_transparency-0.0.1-py3-none-any.whl
- Subject digest: 0e4744ca05a33404c64aa8e3b307ae1c809b8b71560c9d4661eb0fd7f36bc140
- Sigstore transparency entry: 813209373
- Sigstore integration time:
- Permalink: lustre-lab/agent-transparency@d810bed61f4236329a7bcfb16c3942ac0cd91318
- Branch / Tag: refs/heads/main
- Owner: https://github.com/lustre-lab
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@d810bed61f4236329a7bcfb16c3942ac0cd91318
- Trigger Event: workflow_dispatch