SpyLLM Python SDK
Framework-agnostic agent observability and automatic LLM tracing. Works with OpenAI, Anthropic, CrewAI, AutoGen, LangGraph, and any OpenTelemetry-instrumented framework.
See it in action — view a live trace on the dashboard
Prerequisites
You need a free SpyLLM account and an API key to use this SDK.
- Sign up at spyllm.dev/sign-up
- Go to Settings → API Keys and click Create API Key
- Copy the key — it is only shown once
Install
```bash
pip install spyllm
```

With provider extras (quote the brackets so your shell does not glob them):

```bash
pip install "spyllm[openai]"     # OpenAI
pip install "spyllm[anthropic]"  # Anthropic
pip install "spyllm[otel]"       # OpenTelemetry export
```
Quick Start
```python
import spyllm

spyllm.init(api_key="sk-...")
# That's it. Every OpenAI and Anthropic call is now automatically traced.

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
# Prompt, response, tokens, cost, and latency are captured automatically.
```
Open the dashboard to see traces as they arrive.
Agent Observability
Wrap multi-agent workflows with agent_span() to automatically link every nested LLM call into a trace DAG. Nested spans inherit trace_id and set parent_span_id automatically — no manual ID threading needed.
```python
import spyllm
from openai import OpenAI

spyllm.init(api_key="sk-...")
client = OpenAI()

with spyllm.agent_span("orchestrator", role="orchestrator"):
    plan = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Plan the research task"}],
    )

    with spyllm.agent_span("researcher", role="worker"):
        research = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Research quantum computing"}],
        )

    with spyllm.agent_span("writer", role="worker"):
        report = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Write the report"}],
        )

# All spans share the same trace_id.
# Open the dashboard to see the full agent topology as an interactive DAG.
```
Async Support
The async variant works identically under asyncio, which most agent frameworks use:

```python
async with spyllm.async_agent_span("planner", role="planner") as ctx:
    print(ctx.trace_id)        # auto-generated
    print(ctx.span_id)         # unique per span
    print(ctx.parent_span_id)  # from the outer span, if any
```
Reading Span Context
Access the current span anywhere in your code:
```python
ctx = spyllm.get_current_span()
if ctx:
    print(f"Currently inside: {ctx.agent_name} (trace={ctx.trace_id})")
```
Span Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `name` | `str` | required | Human-readable agent name |
| `role` | `str` | `"worker"` | Agent role for topology grouping |
| `operation` | `str` | `"invoke_agent"` | One of: `invoke_agent`, `create_agent`, `execute_tool`, `chat` |
| `trace_id` | `str` | auto-inherited | Override the trace ID |
| `framework` | `str` | `None` | Framework identifier: `crewai`, `autogen`, `langgraph`, `custom` |
| `input_source` | `str` | `None` | What triggered this span: `user`, `agent:planner`, `tool:search` |
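To make the table concrete, here is a hedged sketch of how these parameters might surface as span attributes. The `gen_ai.*` keys follow the OpenTelemetry GenAI semantic conventions mentioned below; the `spyllm.*` keys are hypothetical, invented for illustration:

```python
# Hedged sketch: map agent_span() parameters onto span attributes.
# "gen_ai.*" keys follow the OTel GenAI semantic conventions;
# the "spyllm.*" keys are hypothetical, invented for illustration.
def span_attributes(name, role="worker", operation="invoke_agent",
                    framework=None, input_source=None):
    attrs = {
        "gen_ai.agent.name": name,
        "gen_ai.operation.name": operation,
        "spyllm.agent.role": role,
        "spyllm.framework": framework,
        "spyllm.input_source": input_source,
    }
    # Omit unset optional attributes rather than emitting nulls.
    return {k: v for k, v in attrs.items() if v is not None}

attrs = span_attributes("researcher", framework="crewai")
assert attrs["gen_ai.agent.name"] == "researcher"
assert "spyllm.input_source" not in attrs
```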
Framework Adapters
One line of code gives you full agent topology for supported frameworks. Adapters use OpenTelemetry instrumentation libraries when available, with lightweight monkey-patch fallbacks.
CrewAI
```python
import spyllm

spyllm.init(api_key="sk-...")
spyllm.adapters.instrument_crewai()

from crewai import Agent, Task, Crew
# All CrewAI agent/task/tool spans are captured automatically.
```
Any OTel-Instrumented Framework (Zero SDK Code)
Point any framework's OpenTelemetry exporter at SpyLLM:
```bash
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.spyllm.dev
export OTEL_EXPORTER_OTLP_HEADERS="X-API-Key=sk-your-key"
```
This works with CrewAI (`opentelemetry-instrumentation-crewai`), LangGraph (`LANGSMITH_OTEL_ENABLED=true`), AutoGen, PydanticAI, and any framework that emits `gen_ai.*` semantic convention spans.
What Gets Captured
Every LLM call automatically records:
- Prompt — full message history sent to the model
- Response — the model's output
- Token count — input + output tokens
- Cost — estimated USD cost based on model pricing
- Latency — wall-clock time for the API call
- Tool calls — if the model invoked tools/functions
- Errors — failed calls with the exception message
- Trace ID / Span ID — every call gets topology IDs, even standalone ones
- Agent Topology — interactive DAG visualization in the dashboard
With agent_span() you additionally get:
- Parent Span ID — builds the parent-child DAG across agents
- Agent Role — orchestrator, worker, planner, etc.
- Operation Name — invoke_agent, execute_tool, chat, create_agent
- Framework — crewai, autogen, langgraph, custom
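The cost figure is straightforward arithmetic over the captured token counts. A minimal sketch with hypothetical per-million-token prices (real pricing varies by model and changes over time; the SDK's actual pricing table may differ):

```python
# Hypothetical per-million-token prices in USD; real pricing varies by
# model and changes over time.
PRICING = {"gpt-4o": {"input": 2.50, "output": 10.00}}

def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 1,000 input tokens and 500 output tokens:
# 1,000 * 2.50/1M + 500 * 10.00/1M = 0.0025 + 0.0050 = 0.0075
cost = estimate_cost_usd("gpt-4o", input_tokens=1_000, output_tokens=500)
assert abs(cost - 0.0075) < 1e-12
```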
Supported Providers
| Provider | Auto-instrumented |
|---|---|
| OpenAI | Yes |
| Anthropic | Yes |
Supported Agent Frameworks
| Framework | Integration |
|---|---|
| CrewAI | Adapter (spyllm.adapters.instrument_crewai()) or OTel |
| AutoGen | OTel env vars |
| LangGraph | OTel env vars |
| PydanticAI | OTel env vars |
| Custom | agent_span() context manager |
Advanced Usage
Manual Tracing
```python
from spyllm import SpyLLMClient

client = SpyLLMClient(api_key="sk-...", base_url="https://api.spyllm.dev")
client.trace(
    agent_name="my-agent",
    prompt="What is 2+2?",
    response="4",
    token_count=15,
    cost_usd=0.001,
)
```
Decorator
```python
from spyllm import agent_trace, init

init(api_key="sk-...")

@agent_trace("my-pipeline")
def run_pipeline(query: str) -> str:
    # your code here
    result = ...
    return result
```
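The general shape of such a decorator can be sketched in plain Python. This is an illustrative stand-in, not the SDK's actual implementation, and the `traced` name and `last_span` attribute are invented for the example:

```python
import functools
import time

def traced(name):
    """Illustrative stand-in for agent_trace: time the call, record a span."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            # A real SDK would report this span to the backend.
            inner.last_span = {
                "agent_name": name,
                "latency_s": time.perf_counter() - start,
            }
            return result
        return inner
    return wrap

@traced("my-pipeline")
def run_pipeline(query: str) -> str:
    return query.upper()

assert run_pipeline("hello") == "HELLO"
assert run_pipeline.last_span["agent_name"] == "my-pipeline"
```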
Disable Auto-instrumentation
```python
spyllm.init(api_key="sk-...", instrument=False)
```
Changelog
See GitHub Releases for a full changelog.
License
Proprietary — Copyright SpyLLM. All rights reserved.