
Universal observability for AI agents — tracing, flamegraph, live graph, and cost tracking for any framework or LLM provider.


plyra-trace

Behavioral tracing for agentic AI. OTel-compatible. Framework-agnostic.



The Problem

LLM observability tools trace tokens and latency. But agents don't just call models—they reason, delegate, coordinate, and enforce policy. Without agent-native tracing, you're blind to:

  • Multi-agent handoffs — which agent handled what, and why
  • Policy decisions — what guards fired, what actions were blocked
  • Cross-process coordination — how agents communicate via HTTP, queues, or MCP
  • Behavioral patterns — ReAct loops, chain-of-thought, tree search

plyra-trace is the instrumentation layer that understands agent behavior as a first-class primitive. Not a vendor lock-in. Not another dashboard. Just OpenTelemetry-native tracing with agent semantics baked in.


Install

pip install plyra-trace

Requires Python 3.11+. Works with any OTLP backend: Jaeger, Grafana Tempo, Phoenix, Langfuse, Datadog, or the future plyra collector.


Quickstart

import plyra_trace

# Initialize (exports to console by default — no collector needed for dev)
pt = plyra_trace.init(project="my-agent")

@plyra_trace.agent(name="researcher")
def research_agent(query: str) -> dict:
    docs = web_search(query)
    return {"docs": docs}

@plyra_trace.tool(name="web-search")
def web_search(query: str) -> list:
    return [{"title": "Result 1", "snippet": "..."}]

# Run with session + user context
with plyra_trace.session("session-abc"):
    with plyra_trace.user("user-123"):
        result = research_agent("What are the top EV companies?")

pt.shutdown()

That's it. Traces export to console (dev mode). To send to an OTLP backend:

pt = plyra_trace.init(
    project="my-agent",
    endpoint="http://localhost:4318",  # Jaeger, Tempo, Phoenix, etc.
)

Core Concepts

Span Kinds

plyra-trace understands agent-specific span types. Every span has a plyra.span.kind attribute:

Span Kind   Use Case                                  Decorator
AGENT       Autonomous reasoning block                @agent()
TOOL        External function or API call             @tool()
GUARD       Policy/safety check (plyra-native)        @guard()
LLM         Language model call                       @llm()
CHAIN       Glue code linking pipeline steps          @trace()
RETRIEVER   Vector/document retrieval                 @trace(kind=...)
ROUTER      Agent routing/delegation (plyra-native)   @trace(kind=...)

All spans are OpenInference-compatible. Both plyra.span.kind and openinference.span.kind are set automatically.


Decorators

@plyra_trace.trace()                    # Root span (CHAIN)
@plyra_trace.agent(name="planner")      # AGENT span
@plyra_trace.tool(name="search")        # TOOL span
@plyra_trace.guard(policy="safety")     # GUARD span
@plyra_trace.llm(model="gpt-4")         # LLM span

Works with both sync and async functions. Inputs and outputs are automatically captured as JSON. Exceptions set the span status to ERROR.
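For intuition, here is a minimal sketch of the pattern these decorators implement — capturing inputs/outputs as JSON and flipping the span status on failure. This is illustrative only, not the library's actual internals; the `RECORDED` list stands in for a real span exporter.

```python
# Illustrative sketch only — NOT plyra-trace internals. Shows the pattern the
# decorators implement: serialize inputs/outputs to JSON, mark failures ERROR.
import functools
import json

RECORDED = []  # stand-in for an exported span list

def traced(name):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span = {"name": name, "status": "OK"}
            span["input.value"] = json.dumps({"args": args, "kwargs": kwargs}, default=str)
            try:
                result = fn(*args, **kwargs)
                span["output.value"] = json.dumps(result, default=str)
                return result
            except Exception as exc:
                span["status"] = "ERROR"  # exceptions set the span status
                span["exception"] = repr(exc)
                raise
            finally:
                RECORDED.append(span)
        return wrapper
    return decorator

@traced("adder")
def add(a, b):
    return a + b

add(2, 3)
```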


Context Managers

Propagate session, user, metadata, and tags to all child spans:

with plyra_trace.session("session-id"):
    with plyra_trace.user("user-id"):
        with plyra_trace.metadata({"tier": "enterprise"}):
            with plyra_trace.tags(["production"]):
                result = run_agent(query)

Context is thread-safe and doesn't leak outside the with block.
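This behavior matches Python's contextvars model. A minimal sketch of how such a context manager could be built (an assumption about the implementation, shown purely for illustration):

```python
# Sketch of with-block context propagation built on contextvars — a hedged
# assumption about how session() could work, not the library's actual code.
import contextvars
from contextlib import contextmanager

_session = contextvars.ContextVar("plyra.session", default=None)

@contextmanager
def session(session_id):
    token = _session.set(session_id)
    try:
        yield
    finally:
        _session.reset(token)  # restore the previous value on exit

def current_session():
    return _session.get()

with session("session-abc"):
    inside = current_session()   # "session-abc"
outside = current_session()      # None — no leak outside the block
```

Because ContextVar values are per-thread and per-async-task, this design is what makes the propagation thread-safe.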


Multi-Agent Propagation

In-Process Handoffs

Nested decorators automatically create parent-child spans:

@plyra_trace.agent(name="orchestrator")
def orchestrator(query: str):
    research = research_agent(query)  # Child span
    return analysis_agent(research)   # Child span

Cross-Process Handoffs (HTTP, gRPC, Queues)

Inject context when calling another agent:

headers = {}
plyra_trace.inject_context(
    headers,
    agent_name="orchestrator",
    handoff_reason="needs research"
)
response = httpx.post("http://agent-2/task", headers=headers, json=payload)

Extract and continue the trace on the receiving side:

ctx = plyra_trace.extract_context(request.headers)
with plyra_trace.continue_trace(ctx):
    result = process_task(request.body)

This creates a distributed trace across agents. Works with HTTP, gRPC, message queues, or any carrier that supports headers.
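The carrier mechanics resemble W3C traceparent propagation. A hand-rolled sketch with hypothetical helpers (not the plyra_trace API, which presumably delegates to standard OTel propagators):

```python
# Hypothetical sketch of header-based trace propagation using the W3C
# traceparent format ("version-traceid-spanid-flags"). Helper names here
# are illustrative, not the plyra_trace API.
import secrets

def inject(carrier: dict) -> dict:
    trace_id = secrets.token_hex(16)  # 32 hex chars
    span_id = secrets.token_hex(8)    # 16 hex chars
    carrier["traceparent"] = f"00-{trace_id}-{span_id}-01"
    return carrier

def extract(carrier: dict) -> dict:
    _version, trace_id, span_id, _flags = carrier["traceparent"].split("-")
    return {"trace_id": trace_id, "parent_span_id": span_id}

headers = inject({})
ctx = extract(headers)  # the receiving agent resumes the same trace_id
```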


Guard Integration

Guards are first-class citizens in plyra-trace. Not just logged—traced as GUARD spans:

from plyra_trace import GuardResult

@plyra_trace.guard(name="input-safety", policy="content-safety")
def check_input(text: str) -> GuardResult:
    triggered = "dangerous" in text.lower()
    return GuardResult(
        policy="content-safety",
        action="block" if triggered else "allow",
        triggered=triggered,
        confidence=0.95,
        details={"reason": "unsafe keyword detected"}
    )

@plyra_trace.agent(name="executor")
def executor(command: str):
    result = check_input(command)
    if result.triggered:
        return f"Blocked: {result.policy}"
    return execute(command)

Guard results automatically populate guard.policy, guard.action, guard.triggered, guard.confidence, and guard.details span attributes.
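The mapping from a GuardResult to those attributes can be sketched as follows. Field names follow this README; the dataclass and helper are illustrative, not the plyra_trace implementation.

```python
# Illustrative mapping from a GuardResult-like object to the guard.* span
# attributes. The helper is hypothetical, not part of the plyra_trace API.
import json
from dataclasses import dataclass, field

@dataclass
class GuardResult:
    policy: str
    action: str
    triggered: bool
    confidence: float
    details: dict = field(default_factory=dict)

def guard_attributes(result: GuardResult) -> dict:
    return {
        "guard.policy": result.policy,
        "guard.action": result.action,
        "guard.triggered": result.triggered,
        "guard.confidence": result.confidence,
        "guard.details": json.dumps(result.details),  # stored as a JSON string
    }

attrs = guard_attributes(
    GuardResult("content-safety", "block", True, 0.95, {"reason": "unsafe keyword"})
)
```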


OTLP Backend Compatibility

plyra-trace exports to any OTLP-compatible backend:

Backend         Protocol    Endpoint Example
Jaeger          HTTP/gRPC   http://localhost:4318
Grafana Tempo   HTTP/gRPC   http://tempo:4318
Arize Phoenix   HTTP        http://localhost:6006/v1/traces
Langfuse        HTTP        https://cloud.langfuse.com/api/public
Datadog         HTTP        https://trace.agent.datadoghq.com
SigNoz          HTTP/gRPC   http://signoz:4318

Example with Jaeger:

pt = plyra_trace.init(
    project="my-agent",
    endpoint="http://localhost:4318",
    protocol="http/protobuf"  # or "grpc"
)

Example with Grafana Cloud:

pt = plyra_trace.init(
    project="my-agent",
    endpoint="https://otlp-gateway-prod-us-central-0.grafana.net/otlp",
    headers={"Authorization": f"Basic {api_key}"}
)

Programmatic Span Creation

When decorators aren't enough, use SpanBuilder:

from plyra_trace import SpanBuilder, SpanKind

with SpanBuilder("orchestrator-step", kind=SpanKind.AGENT) as span:
    span.set_input({"query": "analyze competitors"})
    span.set_agent(name="orchestrator", framework="custom")
    
    result = do_complex_work()
    
    span.set_output(result)
    span.add_event("checkpoint", attributes={"step": 1})

Semantic Conventions

plyra-trace extends OpenInference semantic conventions with agent-native attributes:

Agent Attributes (plyra-native)

  • agent.name — Agent identifier
  • agent.framework — "langgraph", "crewai", "autogen", etc.
  • agent.pattern — "ReAct", "Chain-of-Thought", "Tree-of-Thoughts"
  • agent.handoff.from — Source agent in a handoff
  • agent.handoff.to — Target agent in a handoff
  • agent.handoff.reason — Why the handoff occurred
  • agent.handoff.protocol — "http", "grpc", "mcp", "queue"

Guard Attributes (plyra-native)

  • guard.policy — Policy name (e.g., "content-safety")
  • guard.action — "allow", "block", "redact", "flag", "modify"
  • guard.triggered — Boolean: was the guard triggered?
  • guard.confidence — Float in [0.0, 1.0]
  • guard.details — JSON string with structured details
  • guard.provider — "plyra-guard", "guardrails-ai", "nemo-guardrails"

Standard OpenInference Attributes

All standard OpenInference attributes are fully supported:

  • input.value, output.value, input.mime_type, output.mime_type
  • llm.model_name, llm.provider, llm.token_count.*
  • tool.name, tool.description, tool.parameters
  • session.id, user.id, metadata, tag.tags

Compatibility with Existing Instrumentation

OpenInference

plyra-trace uses the same semantic conventions as OpenInference. Spans created by OpenInference auto-instrumentation (e.g., LlamaIndex, LangChain via Phoenix) work seamlessly with plyra-trace.

OpenLLMetry (Traceloop)

Use the compatibility processor to normalize OpenLLMetry spans:

from plyra_trace.compat import OpenLLMetrySpanProcessor

pt = plyra_trace.init(
    project="my-agent",
    endpoint="http://localhost:4318",
    span_processors=[OpenLLMetrySpanProcessor()]
)

Examples

Run examples:

python examples/basic_agent.py

Development

# Install uv if not already installed
pip install uv

# Sync dependencies
uv sync --all-extras

# Run tests
uv run pytest

# Lint
uv run ruff check .
uv run ruff format .

# Build
rm -rf dist/ && uv build

Roadmap

v0.1 (current):

  • ✅ Core SDK with decorators, context managers, span builders
  • ✅ OTLP export (HTTP + gRPC)
  • ✅ OpenInference semantic conventions
  • ✅ Agent-native attributes (guards, handoffs, routing)

v0.2 (planned):

  • Auto-instrumentation for LangGraph, CrewAI, AutoGen
  • Async context propagation improvements
  • Streaming span export
  • MCP (Model Context Protocol) propagation

v1.0 (future):

  • plyra collector with agent-aware sampling
  • GuardRails integration
  • Policy event correlation

License

Apache 2.0. See LICENSE.


Built with ❤️ by the Plyra team. Questions? oss@plyra.dev
