SpyLLM Python SDK


Framework-agnostic agent observability and automatic LLM tracing. Works with OpenAI, Anthropic, CrewAI, AutoGen, LangGraph, and any OpenTelemetry-instrumented framework.

See it in action: view a live trace on the dashboard.

Prerequisites

You need a free SpyLLM account and an API key to use this SDK.

  1. Sign up at spyllm.dev/sign-up
  2. Go to Settings → API Keys and click Create API Key
  3. Copy the key — it is only shown once

Install

pip install spyllm

With provider extras:

pip install "spyllm[openai]"       # OpenAI
pip install "spyllm[anthropic]"    # Anthropic
pip install "spyllm[otel]"         # OpenTelemetry export

Quick Start

import spyllm

spyllm.init(api_key="sk-...")

# That's it. Every OpenAI and Anthropic call is now automatically traced.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
# Prompt, response, tokens, cost, and latency are captured automatically.

Open the dashboard to see traces as they arrive.
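Auto-instrumentation of this kind is typically done by wrapping the provider client's request method so every call is recorded transparently. A minimal sketch with a stand-in client (illustrative only, not SpyLLM's actual implementation):

```python
import functools
import time

# Stand-in for a provider client; illustrative only.
class FakeClient:
    def create(self, model, messages):
        return {"model": model, "content": "Hello back!"}

captured = []

def instrument(cls, method_name):
    """Wrap a client method so every call is recorded with its latency."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        start = time.perf_counter()
        response = original(self, *args, **kwargs)
        captured.append({
            "kwargs": kwargs,
            "response": response,
            "latency_s": time.perf_counter() - start,
        })
        return response

    setattr(cls, method_name, wrapper)

instrument(FakeClient, "create")

client = FakeClient()
client.create(model="gpt-4o", messages=[{"role": "user", "content": "Hello!"}])
# captured now holds one record with the call kwargs, response, and latency
```

Because the wrapper is installed on the class, every client instance created afterwards is traced without any change to calling code.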

Agent Observability

Wrap multi-agent workflows with agent_span() to automatically link every nested LLM call into a trace DAG. Nested spans inherit trace_id and set parent_span_id automatically — no manual ID threading needed.

import spyllm
from openai import OpenAI

spyllm.init(api_key="sk-...")
client = OpenAI()

with spyllm.agent_span("orchestrator", role="orchestrator"):
    plan = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Plan the research task"}],
    )

    with spyllm.agent_span("researcher", role="worker"):
        research = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Research quantum computing"}],
        )

    with spyllm.agent_span("writer", role="worker"):
        report = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Write the report"}],
        )

# All spans share the same trace_id.
# Open the dashboard to see the full agent topology as an interactive DAG.

Async Support

The async variant behaves identically under asyncio, which most agent frameworks use:

import asyncio

async def main():
    async with spyllm.async_agent_span("planner", role="planner") as ctx:
        print(ctx.trace_id)       # auto-generated
        print(ctx.span_id)        # unique per span
        print(ctx.parent_span_id) # from outer span, if any

asyncio.run(main())

Reading Span Context

Access the current span anywhere in your code:

ctx = spyllm.get_current_span()
if ctx:
    print(f"Currently inside: {ctx.agent_name} (trace={ctx.trace_id})")

Span Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | required | Human-readable agent name |
| role | str | "worker" | Agent role for topology grouping |
| operation | str | "invoke_agent" | One of: invoke_agent, create_agent, execute_tool, chat |
| trace_id | str | auto-inherited | Override the trace ID |
| framework | str | None | Framework identifier: crewai, autogen, langgraph, custom |
| input_source | str | None | What triggered this span: user, agent:planner, tool:search |

Framework Adapters

One line of code gives you full agent topology for supported frameworks. Adapters use OpenTelemetry instrumentation libraries when available, with lightweight monkey-patch fallbacks.

CrewAI

import spyllm

spyllm.init(api_key="sk-...")
spyllm.adapters.instrument_crewai()

from crewai import Agent, Task, Crew
# All CrewAI agent/task/tool spans are captured automatically.

Any OTel-Instrumented Framework (Zero SDK Code)

Point any framework's OpenTelemetry exporter at SpyLLM:

export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.spyllm.dev
export OTEL_EXPORTER_OTLP_HEADERS="X-API-Key=sk-your-key"

This works with CrewAI (opentelemetry-instrumentation-crewai), LangGraph (LANGSMITH_OTEL_ENABLED=true), AutoGen, PydanticAI, and any framework that emits gen_ai.* semantic convention spans.

What Gets Captured

Every LLM call automatically records:

  • Prompt — full message history sent to the model
  • Response — the model's output
  • Token count — input + output tokens
  • Cost — estimated USD cost based on model pricing
  • Latency — wall-clock time for the API call
  • Tool calls — if the model invoked tools/functions
  • Errors — failed calls with the exception message
  • Trace ID / Span ID — every call gets topology IDs, even standalone ones
  • Agent Topology — interactive DAG visualization in the dashboard
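Cost estimation of this kind is typically computed by multiplying input and output token counts by per-model prices. A sketch with hypothetical per-million-token prices (real pricing varies by model and changes over time):

```python
# Hypothetical per-million-token prices in USD; illustrative only.
PRICING = {"gpt-4o": {"input": 2.50, "output": 10.00}}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate USD cost of one call from token counts and model pricing."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

cost = estimate_cost("gpt-4o", input_tokens=1200, output_tokens=350)
# 1200 * 2.50 + 350 * 10.00 = 6500 micro-dollars -> $0.0065
```

Input and output tokens are priced separately because providers bill them at different rates.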

With agent_span() you additionally get:

  • Parent Span ID — builds the parent-child DAG across agents
  • Agent Role — orchestrator, worker, planner, etc.
  • Operation Name — invoke_agent, execute_tool, chat, create_agent
  • Framework — crewai, autogen, langgraph, custom

Supported Providers

| Provider | Auto-instrumented |
| --- | --- |
| OpenAI | Yes |
| Anthropic | Yes |

Supported Agent Frameworks

| Framework | Integration |
| --- | --- |
| CrewAI | Adapter (spyllm.adapters.instrument_crewai()) or OTel |
| AutoGen | OTel env vars |
| LangGraph | OTel env vars |
| PydanticAI | OTel env vars |
| Custom | agent_span() context manager |

Advanced Usage

Manual Tracing

from spyllm import SpyLLMClient

client = SpyLLMClient(api_key="sk-...", base_url="https://api.spyllm.dev")
client.trace(
    agent_name="my-agent",
    prompt="What is 2+2?",
    response="4",
    token_count=15,
    cost_usd=0.001,
)

Decorator

from spyllm import agent_trace, init

init(api_key="sk-...")

@agent_trace("my-pipeline")
def run_pipeline(query: str) -> str:
    result = ...  # your code here
    return result
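A decorator like this typically wraps the function, times the call, and records success or failure before re-raising any exception. An illustrative sketch in plain Python (not SpyLLM's actual implementation):

```python
import functools
import time

traces = []

def agent_trace(name):
    """Record latency and outcome of every call to the decorated function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                traces.append({"agent": name, "ok": True,
                               "latency_s": time.perf_counter() - start})
                return result
            except Exception as exc:
                traces.append({"agent": name, "ok": False, "error": str(exc)})
                raise  # never swallow the caller's exception
        return wrapper
    return decorator

@agent_trace("my-pipeline")
def run_pipeline(query: str) -> str:
    return query.upper()

run_pipeline("hello")  # one trace entry recorded
```

functools.wraps preserves the wrapped function's name and docstring, so the decorated pipeline still introspects normally.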

Disable Auto-instrumentation

spyllm.init(api_key="sk-...", instrument=False)

Documentation

Changelog

See GitHub Releases for a full changelog.

License

Proprietary — Copyright SpyLLM. All rights reserved.
