
SpyLLM Python SDK


Framework-agnostic agent observability and automatic LLM tracing. Works with OpenAI, Anthropic, CrewAI, AutoGen, LangGraph, and any OpenTelemetry-instrumented framework.

See it in action: view a live trace on the dashboard.

Prerequisites

You need a free SpyLLM account and an API key to use this SDK.

  1. Sign up at spyllm.dev/sign-up
  2. Go to Settings → API Keys and click Create API Key
  3. Copy the key — it is only shown once

Install

pip install spyllm

With provider extras:

pip install "spyllm[openai]"       # OpenAI
pip install "spyllm[anthropic]"    # Anthropic
pip install "spyllm[otel]"         # OpenTelemetry export

Quick Start

import spyllm

spyllm.init(api_key="sk-...")

# That's it. Every OpenAI and Anthropic call is now automatically traced.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
# Prompt, response, tokens, cost, and latency are captured automatically.

Open the dashboard to see traces as they arrive.

Agent Observability

Wrap multi-agent workflows with agent_span() to automatically link every nested LLM call into a trace DAG. Nested spans inherit trace_id and set parent_span_id automatically — no manual ID threading needed.

import spyllm
from openai import OpenAI

spyllm.init(api_key="sk-...")
client = OpenAI()

with spyllm.agent_span("orchestrator", role="orchestrator"):
    plan = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Plan the research task"}],
    )

    with spyllm.agent_span("researcher", role="worker"):
        research = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Research quantum computing"}],
        )

    with spyllm.agent_span("writer", role="worker"):
        report = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Write the report"}],
        )

# All spans share the same trace_id.
# Open the dashboard to see the full agent topology as an interactive DAG.

Async Support

The async variant works identically under asyncio, which most agent frameworks are built on:

async with spyllm.async_agent_span("planner", role="planner") as ctx:
    print(ctx.trace_id)       # auto-generated
    print(ctx.span_id)        # unique per span
    print(ctx.parent_span_id) # from outer span, if any
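This works because `contextvars` values propagate into coroutines and tasks automatically, so concurrent agents each carry their own span without interfering. A small standalone illustration of that behavior (not SpyLLM code):

```python
import asyncio
import contextvars

current_span: contextvars.ContextVar = contextvars.ContextVar("current_span", default=None)

async def run_agent(name: str) -> str:
    # Each task created by asyncio gets its own copy of the context,
    # so setting the span here does not leak into sibling agents.
    current_span.set(name)
    await asyncio.sleep(0)  # stand-in for an awaited LLM call
    return current_span.get()

async def main() -> list:
    return await asyncio.gather(run_agent("researcher"), run_agent("writer"))

print(asyncio.run(main()))  # ['researcher', 'writer']
```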

Reading Span Context

Access the current span anywhere in your code:

ctx = spyllm.get_current_span()
if ctx:
    print(f"Currently inside: {ctx.agent_name} (trace={ctx.trace_id})")

Span Parameters

Parameter     Type  Default           Description
name          str   required          Human-readable agent name
role          str   "worker"          Agent role for topology grouping
operation     str   "invoke_agent"    One of: invoke_agent, create_agent, execute_tool, chat
trace_id      str   auto-inherited    Override the trace ID
framework     str   None              Framework identifier: crewai, autogen, langgraph, custom
input_source  str   None              What triggered this span: user, agent:planner, tool:search

Framework Adapters

One line of code gives you full agent topology for supported frameworks. Adapters use OpenTelemetry instrumentation libraries when available, with lightweight monkey-patch fallbacks.

CrewAI

import spyllm

spyllm.init(api_key="sk-...")
spyllm.adapters.instrument_crewai()

from crewai import Agent, Task, Crew
# All CrewAI agent/task/tool spans are captured automatically.

Any OTel-Instrumented Framework (Zero SDK Code)

Point any framework's OpenTelemetry exporter at SpyLLM:

export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.spyllm.dev
export OTEL_EXPORTER_OTLP_HEADERS="X-API-Key=sk-your-key"

This works with CrewAI (opentelemetry-instrumentation-crewai), LangGraph (LANGSMITH_OTEL_ENABLED=true), AutoGen, PydanticAI, and any framework that emits gen_ai.* semantic convention spans.
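For example, routing LangGraph traces to SpyLLM needs only environment variables (the API key value below is a placeholder):

```shell
# Send OTel spans to SpyLLM instead of the default backend.
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.spyllm.dev
export OTEL_EXPORTER_OTLP_HEADERS="X-API-Key=sk-your-key"

# Tell LangGraph/LangSmith to emit OpenTelemetry spans.
export LANGSMITH_OTEL_ENABLED=true
```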

What Gets Captured

Every LLM call automatically records:

  • Prompt — full message history sent to the model
  • Response — the model's output
  • Token count — input + output tokens
  • Cost — estimated USD cost based on model pricing
  • Latency — wall-clock time for the API call
  • Tool calls — if the model invoked tools/functions
  • Errors — failed calls with the exception message
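The cost figure is an estimate derived from token counts and a per-model rate table; conceptually it works like the sketch below (prices are made-up placeholders, not SpyLLM's actual pricing data):

```python
# Hypothetical per-million-token prices in USD; real pricing varies by model.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate call cost from token usage and per-million-token rates."""
    rates = PRICING[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

print(estimate_cost_usd("gpt-4o", 1_000, 500))  # 0.0075
```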

With agent spans enabled, you also get:

  • Trace ID — groups all spans in a workflow execution
  • Span ID / Parent Span ID — builds the parent-child DAG
  • Agent Role — orchestrator, worker, planner, etc.
  • Operation Name — invoke_agent, execute_tool, chat, create_agent
  • Framework — crewai, autogen, langgraph, custom
  • Agent Topology — interactive DAG visualization in the dashboard
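The DAG the dashboard renders can be reconstructed from nothing more than each span's span_id and parent_span_id; a minimal sketch of that grouping (field names simplified for illustration):

```python
from collections import defaultdict

# Simplified span records, roughly as a trace backend might receive them.
spans = [
    {"span_id": "s1", "parent_span_id": None, "agent": "orchestrator"},
    {"span_id": "s2", "parent_span_id": "s1", "agent": "researcher"},
    {"span_id": "s3", "parent_span_id": "s1", "agent": "writer"},
]

def build_children(spans):
    """Group span IDs by their parent, yielding the parent-child DAG."""
    children = defaultdict(list)
    for span in spans:
        children[span["parent_span_id"]].append(span["span_id"])
    return children

tree = build_children(spans)
print(tree[None])  # ['s1']        root spans of the trace
print(tree["s1"])  # ['s2', 's3']  children of the orchestrator
```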

Supported Providers

Provider   Auto-instrumented
OpenAI     Yes
Anthropic  Yes

Supported Agent Frameworks

Framework   Integration
CrewAI      Adapter (spyllm.adapters.instrument_crewai()) or OTel
AutoGen     OTel env vars
LangGraph   OTel env vars
PydanticAI  OTel env vars
Custom      agent_span() context manager

Advanced Usage

Manual Tracing

from spyllm import SpyLLMClient

client = SpyLLMClient(api_key="sk-...", base_url="https://api.spyllm.dev")
client.trace(
    agent_name="my-agent",
    prompt="What is 2+2?",
    response="4",
    token_count=15,
    cost_usd=0.001,
)

Decorator

from spyllm import agent_trace, init

init(api_key="sk-...")

@agent_trace("my-pipeline")
def run_pipeline(query: str) -> str:
    result = ...  # your pipeline logic here
    return result
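A decorator like this typically just opens a span around the wrapped call; an illustrative (non-SpyLLM) sketch of the pattern, with a stand-in span that records enter/exit events:

```python
import functools
from contextlib import contextmanager

events: list = []

@contextmanager
def fake_span(name: str):
    # Stand-in for a real tracing span; records enter/exit for illustration.
    events.append(f"start:{name}")
    try:
        yield
    finally:
        events.append(f"end:{name}")

def agent_trace(name: str):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            with fake_span(name):
                return fn(*args, **kwargs)
        return wrapper
    return decorator

@agent_trace("my-pipeline")
def run_pipeline(query: str) -> str:
    return query.upper()

print(run_pipeline("hello"))  # HELLO
print(events)                 # ['start:my-pipeline', 'end:my-pipeline']
```

The `try`/`finally` in the span ensures the end event is recorded even when the wrapped function raises.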

Disable Auto-instrumentation

spyllm.init(api_key="sk-...", instrument=False)

Self-hosted

Point the SDK at your own instance:

spyllm.init(api_key="sk-...", base_url="https://your-host.com")

Documentation

Changelog

See GitHub Releases for a full changelog.

License

MIT
