# plyra-trace

Universal observability for AI agents — tracing, flamegraphs, live graphs, and cost tracking for any framework or LLM provider.

Behavioral tracing for agentic AI. OTel-compatible. Framework-agnostic.
## The Problem
LLM observability tools trace tokens and latency. But agents don't just call models—they reason, delegate, coordinate, and enforce policy. Without agent-native tracing, you're blind to:
- Multi-agent handoffs — which agent handled what, and why
- Policy decisions — what guards fired, what actions were blocked
- Cross-process coordination — how agents communicate via HTTP, queues, or MCP
- Behavioral patterns — ReAct loops, chain-of-thought, tree search
plyra-trace is the instrumentation layer that understands agent behavior as a first-class primitive. Not a vendor lock-in. Not another dashboard. Just OpenTelemetry-native tracing with agent semantics baked in.
## Install

```bash
pip install plyra-trace
```

Requires Python 3.11+. Works with any OTLP backend: Jaeger, Grafana Tempo, Phoenix, Langfuse, Datadog, or the future plyra collector.
## Quickstart

```python
import plyra_trace

# Initialize (exports to console by default — no collector needed for dev)
pt = plyra_trace.init(project="my-agent")

@plyra_trace.tool(name="web-search")
def web_search(query: str) -> list:
    return [{"title": "Result 1", "snippet": "..."}]

@plyra_trace.agent(name="researcher")
def research_agent(query: str) -> dict:
    docs = web_search(query)
    return {"docs": docs}

# Run with session + user context
with plyra_trace.session("session-abc"):
    with plyra_trace.user("user-123"):
        result = research_agent("What are the top EV companies?")

pt.shutdown()
```

That's it. Traces export to the console (dev mode). To send them to an OTLP backend instead:

```python
pt = plyra_trace.init(
    project="my-agent",
    endpoint="http://localhost:4318",  # Jaeger, Tempo, Phoenix, etc.
)
```
## Core Concepts

### Span Kinds

plyra-trace understands agent-specific span types. Every span has a `plyra.span.kind` attribute:

| Span Kind | Use Case | Decorator |
|---|---|---|
| `AGENT` | Autonomous reasoning block | `@agent()` |
| `TOOL` | External function or API call | `@tool()` |
| `GUARD` | Policy/safety check (plyra-native) | `@guard()` |
| `LLM` | Language model call | `@llm()` |
| `CHAIN` | Glue code linking pipeline steps | `@trace()` |
| `RETRIEVER` | Vector/document retrieval | `@trace(kind=...)` |
| `ROUTER` | Agent routing/delegation (plyra-native) | `@trace(kind=...)` |

All spans are OpenInference-compatible. Both `plyra.span.kind` and `openinference.span.kind` are set automatically.
### Decorators

```python
@plyra_trace.trace()                  # Root span (CHAIN)
@plyra_trace.agent(name="planner")    # AGENT span
@plyra_trace.tool(name="search")      # TOOL span
@plyra_trace.guard(policy="safety")   # GUARD span
@plyra_trace.llm(model="gpt-4")       # LLM span
```

All decorators work with both sync and async functions. Inputs and outputs are auto-captured as JSON, and exceptions set the span status to ERROR.
### Context Managers

Propagate session, user, metadata, and tags to all child spans:

```python
with plyra_trace.session("session-id"):
    with plyra_trace.user("user-id"):
        with plyra_trace.metadata({"tier": "enterprise"}):
            with plyra_trace.tags(["production"]):
                result = run_agent(query)
```

Context is thread-safe and doesn't leak outside the `with` block.
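Scoped propagation like this is typically built on Python's `contextvars`. As a toy illustration (this is not plyra-trace's internals, just the general pattern), a context manager that scopes a session id without leaking it looks like:

```python
import contextvars
from contextlib import contextmanager

# Holds the current session id; None means "no active session".
_session_id = contextvars.ContextVar("session_id", default=None)

@contextmanager
def session(session_id):
    # Set the value for the duration of the block, then restore the
    # previous value, so the id never escapes the `with` statement.
    token = _session_id.set(session_id)
    try:
        yield
    finally:
        _session_id.reset(token)

def current_session():
    return _session_id.get()

with session("session-abc"):
    inside = current_session()   # "session-abc" while the block is active
outside = current_session()      # back to None afterwards
```

Because `ContextVar` values are per-execution-context, the same pattern stays correct across threads and async tasks.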
## Multi-Agent Propagation

### In-Process Handoffs

Nested decorators automatically create parent-child spans:

```python
@plyra_trace.agent(name="orchestrator")
def orchestrator(query: str):
    research = research_agent(query)   # Child span
    return analysis_agent(research)    # Child span
```

### Cross-Process Handoffs (HTTP, gRPC, Queues)

Inject context when calling another agent:

```python
headers = {}
plyra_trace.inject_context(
    headers,
    agent_name="orchestrator",
    handoff_reason="needs research",
)
response = httpx.post("http://agent-2/task", headers=headers, json=payload)
```

Extract and continue the trace on the receiving side:

```python
ctx = plyra_trace.extract_context(request.headers)
with plyra_trace.continue_trace(ctx):
    result = process_task(request.body)
```

This creates a single distributed trace across agents. Works with HTTP, gRPC, message queues, or any carrier that supports headers.
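Under the hood, OpenTelemetry-native propagation normally travels as a W3C Trace Context `traceparent` header, with any extra metadata riding alongside it. The header names for the agent metadata below are illustrative, not plyra-trace's actual keys; a library-free sketch of what such a carrier contains:

```python
# Build a W3C traceparent header: version-traceid-parentspanid-flags
trace_id = "4bf92f3577b34da6a3ce929d0e0e4736"   # 16-byte trace id, lowercase hex
span_id = "00f067aa0ba902b7"                    # 8-byte parent span id, lowercase hex
headers = {
    "traceparent": f"00-{trace_id}-{span_id}-01",
    # Hypothetical agent-handoff metadata, analogous to inject_context():
    "x-agent-handoff-from": "orchestrator",
    "x-agent-handoff-reason": "needs research",
}

# The receiving side parses the same header to continue the trace,
# which is what makes the two processes show up as one distributed trace.
version, rx_trace_id, rx_span_id, flags = headers["traceparent"].split("-")
```

Any carrier that can transport these string key-value pairs — HTTP headers, gRPC metadata, message attributes on a queue — can carry the trace.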
## Guard Integration

Guards are first-class citizens in plyra-trace. Not just logged — traced as `GUARD` spans:

```python
from plyra_trace import GuardResult

@plyra_trace.guard(name="input-safety", policy="content-safety")
def check_input(text: str) -> GuardResult:
    triggered = "dangerous" in text.lower()
    return GuardResult(
        policy="content-safety",
        action="block" if triggered else "allow",
        triggered=triggered,
        confidence=0.95,
        details={"reason": "unsafe keyword detected"} if triggered else {},
    )

@plyra_trace.agent(name="executor")
def executor(command: str):
    result = check_input(command)
    if result.triggered:
        return f"Blocked: {result.policy}"
    return execute(command)
```

Guard results automatically populate the `guard.policy`, `guard.action`, `guard.triggered`, `guard.confidence`, and `guard.details` span attributes.
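For illustration only (plyra-trace's internal mapping may differ), the `guard.*` attribute population can be pictured as flattening the result object into span attributes, with `details` serialized to a JSON string:

```python
import json
from dataclasses import dataclass

# A stand-in for plyra_trace.GuardResult, defined locally for this sketch.
@dataclass
class GuardResultSketch:
    policy: str
    action: str
    triggered: bool
    confidence: float
    details: dict

def to_span_attributes(result: GuardResultSketch) -> dict:
    # Flatten into the guard.* attribute names documented above;
    # details becomes a JSON string per the guard.details convention.
    return {
        "guard.policy": result.policy,
        "guard.action": result.action,
        "guard.triggered": result.triggered,
        "guard.confidence": result.confidence,
        "guard.details": json.dumps(result.details),
    }

attrs = to_span_attributes(
    GuardResultSketch("content-safety", "block", True, 0.95,
                      {"reason": "unsafe keyword detected"})
)
```

Flat string-keyed attributes like these are what OTLP backends index, which is why the structured `details` dict is stored as a JSON string rather than nested.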
## OTLP Backend Compatibility

plyra-trace exports to any OTLP-compatible backend:

| Backend | Protocol | Endpoint Example |
|---|---|---|
| Jaeger | HTTP/gRPC | http://localhost:4318 |
| Grafana Tempo | HTTP/gRPC | http://tempo:4318 |
| Arize Phoenix | HTTP | http://localhost:6006/v1/traces |
| Langfuse | HTTP | https://cloud.langfuse.com/api/public |
| Datadog | HTTP | https://trace.agent.datadoghq.com |
| SigNoz | HTTP/gRPC | http://signoz:4318 |

Example with Jaeger:

```python
pt = plyra_trace.init(
    project="my-agent",
    endpoint="http://localhost:4318",
    protocol="http/protobuf",  # or "grpc"
)
```
Example with Grafana Cloud:

```python
pt = plyra_trace.init(
    project="my-agent",
    endpoint="https://otlp-gateway-prod-us-central-0.grafana.net/otlp",
    headers={"Authorization": f"Basic {api_key}"},
)
```
## Programmatic Span Creation

When decorators aren't enough, use `SpanBuilder`:

```python
from plyra_trace import SpanBuilder, SpanKind

with SpanBuilder("orchestrator-step", kind=SpanKind.AGENT) as span:
    span.set_input({"query": "analyze competitors"})
    span.set_agent(name="orchestrator", framework="custom")
    result = do_complex_work()
    span.set_output(result)
    span.add_event("checkpoint", attributes={"step": 1})
```
## Semantic Conventions

plyra-trace extends the OpenInference semantic conventions with agent-native attributes:

### Agent Attributes (plyra-native)

- `agent.name` — Agent identifier
- `agent.framework` — `"langgraph"`, `"crewai"`, `"autogen"`, etc.
- `agent.pattern` — `"ReAct"`, `"Chain-of-Thought"`, `"Tree-of-Thoughts"`
- `agent.handoff.from` — Source agent in a handoff
- `agent.handoff.to` — Target agent in a handoff
- `agent.handoff.reason` — Why the handoff occurred
- `agent.handoff.protocol` — `"http"`, `"grpc"`, `"mcp"`, `"queue"`

### Guard Attributes (plyra-native)

- `guard.policy` — Policy name (e.g., `"content-safety"`)
- `guard.action` — `"allow"`, `"block"`, `"redact"`, `"flag"`, `"modify"`
- `guard.triggered` — Boolean: was the guard triggered?
- `guard.confidence` — Float in [0.0, 1.0]
- `guard.details` — JSON string with structured details
- `guard.provider` — `"plyra-guard"`, `"guardrails-ai"`, `"nemo-guardrails"`

### Standard OpenInference Attributes

All standard OpenInference attributes are fully supported:

- `input.value`, `output.value`, `input.mime_type`, `output.mime_type`
- `llm.model_name`, `llm.provider`, `llm.token_count.*`
- `tool.name`, `tool.description`, `tool.parameters`
- `session.id`, `user.id`, `metadata`, `tag.tags`
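As a concrete example of the `llm.*` conventions above, a provider's token-usage block maps onto flat span attributes roughly like this (a hand-rolled sketch; in practice an instrumentation layer fills these in for you, and the usage field names here follow the common OpenAI-style shape):

```python
# Usage block as returned by a typical chat-completion response.
usage = {"prompt_tokens": 812, "completion_tokens": 143}

attributes = {
    "llm.model_name": "gpt-4",
    "llm.provider": "openai",
    # llm.token_count.* attributes, derived from the usage block above.
    "llm.token_count.prompt": usage["prompt_tokens"],
    "llm.token_count.completion": usage["completion_tokens"],
    "llm.token_count.total": usage["prompt_tokens"] + usage["completion_tokens"],
}
```

Flat numeric attributes like these are what backends aggregate for per-model token and cost dashboards.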
## Compatibility with Existing Instrumentation

### OpenInference

plyra-trace uses the same semantic conventions as OpenInference. Spans created by OpenInference auto-instrumentation (e.g., LlamaIndex, LangChain via Phoenix) work seamlessly with plyra-trace.

### OpenLLMetry (Traceloop)

Use the compatibility processor to normalize OpenLLMetry spans:

```python
from plyra_trace.compat import OpenLLMetrySpanProcessor

pt = plyra_trace.init(
    project="my-agent",
    endpoint="http://localhost:4318",
    span_processors=[OpenLLMetrySpanProcessor()],
)
```
## Examples

- `basic_agent.py` — Single agent with console output
- `multi_agent.py` — Multi-agent orchestration
- `with_guard.py` — Guard integration

Run examples:

```bash
python examples/basic_agent.py
```
## Development

```bash
# Install uv if not already installed
pip install uv

# Sync dependencies
uv sync --all-extras

# Run tests
uv run pytest

# Lint
uv run ruff check .
uv run ruff format .

# Build
rm -rf dist/ && uv build
```
## Roadmap

**v0.1 (current):**

- ✅ Core SDK with decorators, context managers, span builders
- ✅ OTLP export (HTTP + gRPC)
- ✅ OpenInference semantic conventions
- ✅ Agent-native attributes (guards, handoffs, routing)

**v0.2 (planned):**

- Auto-instrumentation for LangGraph, CrewAI, AutoGen
- Async context propagation improvements
- Streaming span export
- MCP (Model Context Protocol) propagation

**v1.0 (future):**

- plyra collector with agent-aware sampling
- GuardRails integration
- Policy event correlation
## License

Apache 2.0. See LICENSE.

## Links

- Documentation: plyraai.github.io/plyra-trace
- PyPI: pypi.org/project/plyra-trace
- GitHub: github.com/plyraAI/plyra-trace
- Issues: github.com/plyraAI/plyra-trace/issues
- Plyra Stack:
  - plyra-guard — Action middleware for policy enforcement
  - plyra-memory — Persistent structured memory

Built with ❤️ by the Plyra team. Questions? oss@plyra.dev