
AgenSights Python SDK

Python SDK for AgenSights, an AI agent observability platform.

Track LLM calls, tool invocations, and multi-step agent executions with zero-friction auto-instrumentation or manual tracking.

Installation

pip install agensights

Or install from source:

pip install -e .

Quick Start — Universal Init (Recommended)

One line at the top of your app patches every supported LLM provider automatically:

import agensights

agensights.init(api_key="sk-dev-xxx")

# That's it. Every OpenAI, Anthropic, Bedrock, Google, Mistral,
# Cohere, and LiteLLM call is now tracked automatically.

You can also configure via environment variables (no code changes needed):

export AGENSIGHTS_API_KEY="sk-dev-xxx"
export AGENSIGHTS_BASE_URL="https://api.agensights.dev/api/v1"

Then, in your application:

import agensights

agensights.init()  # picks up settings from the environment

Auto-Instrumentation (Per-Client)

Wrap your LLM client once and every call is tracked automatically.

OpenAI

from openai import OpenAI
from agensights import instrument_openai

client = instrument_openai(
    OpenAI(api_key="sk-xxx"),
    agensights_api_key="sk-dev-xxx",
    agent_name="my-assistant",
)

# Every call is now automatically tracked
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Embeddings are tracked too
embeddings = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello world",
)

Anthropic

import anthropic
from agensights import instrument_anthropic

client = instrument_anthropic(
    anthropic.Anthropic(api_key="sk-ant-xxx"),
    agensights_api_key="sk-dev-xxx",
    agent_name="claude-agent",
)

# Automatically tracked
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)

LangChain

from langchain_openai import ChatOpenAI
from agensights.integrations import LangChainCallbackHandler

handler = LangChainCallbackHandler(
    api_key="sk-dev-xxx",
    agent_name="langchain-agent",
)

llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

# All LLM and tool calls are tracked via callbacks
response = llm.invoke("Hello!")

Agent Hierarchy Tracking

Track multi-agent workflows with automatic parent-child relationships:

from agensights import instrument_openai
from openai import OpenAI

client = instrument_openai(OpenAI(), agensights_api_key="sk-dev-xxx")

with client.trace("find_laptop") as trace:
    with trace.agent("planner") as planner:
        with planner.agent("researcher") as researcher:
            with researcher.tool("web_search"):
                results = do_search("laptops")  # latency auto-measured
            # LLM call auto-captured under researcher agent
            summary = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": f"Summarize: {results}"}],
            )
        with planner.agent("writer") as writer:
            result = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Write recommendation"}],
            )

This produces a full trace tree in the dashboard with parent-child spans linked automatically.

Manual Tracking

For full control, use the AgenSights client directly.

Single Calls

from agensights import AgenSights

client = AgenSights(api_key="sk-prod-xxx")

# Track a single LLM call
client.track_llm(model="gpt-4o", input_tokens=100, output_tokens=50, latency_ms=300)

# Track a tool call
client.track_tool(tool_name="web_search", latency_ms=150)

# Always close when done
client.close()

Tracing Multi-Step Executions

Use client.trace() to group related calls under a single trace:

from agensights import AgenSights

client = AgenSights(api_key="sk-prod-xxx")

with client.trace("support_agent", workflow_id="ticket-456") as t:
    # Track an LLM call
    t.llm_call(model="gpt-4o", input_tokens=100, output_tokens=50, latency_ms=300)

    # Track a tool call
    t.tool_call(tool_name="web_search", latency_ms=150)

    # Use spans for automatic duration tracking
    with t.span("data_processing") as s:
        # ... your code here ...
        pass  # duration is recorded automatically

client.close()

Nested Agent Spans

with client.trace("orchestrator") as t:
    planner = t.agent("planner")
    researcher = planner.agent("researcher")  # sub-agent
    researcher.tool(name="search_api", latency_ms=150)
    researcher.llm_call(model="gpt-4o", input_tokens=100, output_tokens=50, latency_ms=300)

    writer = planner.agent("writer")
    writer.llm_call(model="claude-3-5-sonnet", input_tokens=200, output_tokens=100, latency_ms=400)

Using the Client as a Context Manager

with AgenSights(api_key="sk-prod-xxx") as client:
    client.track_llm(model="gpt-4o", input_tokens=100, output_tokens=50, latency_ms=300)
# Client is automatically closed and flushed

Configuration

Environment Variables

Variable             Description
AGENSIGHTS_API_KEY   Your AgenSights API key (used when api_key is not passed)
AGENSIGHTS_BASE_URL  Backend API base URL (default: https://api.agensights.com/api/v1)

Client Parameters

Parameter   Default                      Description
api_key     AGENSIGHTS_API_KEY env var   Your AgenSights API key
base_url    AGENSIGHTS_BASE_URL env var  Backend API base URL

Auto-Instrumentation Parameters

Parameter           Default  Description
agensights_api_key  None     API key (or pass agensights_client instead)
agensights_client   None     Pre-configured AgenSights instance
agent_name          None     Name to tag all events with
base_url            None     Override backend URL (falls back to env var)

Error Tracking

Errors are automatically captured during auto-instrumentation. For manual tracking:

client.track_llm(
    model="gpt-4o",
    input_tokens=100,
    output_tokens=0,
    latency_ms=500,
    status="error",
    error_code="rate_limit",
)
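When calling a provider without auto-instrumentation, the same fields can be recorded around a try/except. The following is a minimal sketch assuming only the track_llm signature shown above; the helper name and the use of the exception class name as error_code are illustrative choices, not part of the SDK:

```python
import time

def call_with_tracking(client, fn, model):
    """Run fn(), timing it and reporting success or failure via track_llm.

    `client` is anything exposing track_llm(...) as documented above;
    mapping the exception class name to error_code is an illustrative choice.
    """
    start = time.monotonic()
    try:
        result = fn()
    except Exception as exc:
        # Record the failed call, then re-raise for the caller to handle.
        client.track_llm(
            model=model,
            input_tokens=0,
            output_tokens=0,
            latency_ms=int((time.monotonic() - start) * 1000),
            status="error",
            error_code=type(exc).__name__,
        )
        raise
    client.track_llm(
        model=model,
        input_tokens=0,   # fill in real token counts from the provider response
        output_tokens=0,
        latency_ms=int((time.monotonic() - start) * 1000),
    )
    return result
```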

How It Works

  • Universal init (agensights.init()) patches all supported LLM providers at the module level.
  • Auto-instrumentation wraps LLM client methods (e.g., chat.completions.create) to capture model, tokens, latency, and errors transparently.
  • Events are buffered locally and sent in batches to the AgenSights backend.
  • The buffer flushes automatically every 5 seconds or when 100 events are accumulated.
  • Call client.flush() to force an immediate send.
  • Call client.close() to flush and release resources.
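The batching behavior described above can be illustrated with a minimal sketch. This is a conceptual model, not the SDK's actual internals: a buffer that ships a batch when it reaches max_events, or when flush_interval has elapsed since the last flush.

```python
import time

class EventBuffer:
    """Conceptual sketch of flush-on-threshold batching (not the real SDK internals)."""

    def __init__(self, max_events=100, flush_interval=5.0, send=print):
        self.max_events = max_events        # flush when this many events accumulate
        self.flush_interval = flush_interval  # ...or when this many seconds elapse
        self.send = send                    # callable that ships a batch to the backend
        self.events = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.events.append(event)
        # Flush when the buffer is full or the interval has elapsed.
        if (len(self.events) >= self.max_events
                or time.monotonic() - self.last_flush >= self.flush_interval):
            self.flush()

    def flush(self):
        # Ship any buffered events immediately, then reset the timer.
        if self.events:
            self.send(self.events)
            self.events = []
        self.last_flush = time.monotonic()

    def close(self):
        self.flush()
```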

Supported Providers

Provider       agensights.init()  Per-client / integration
OpenAI         Auto-patched       instrument_openai()
Anthropic      Auto-patched       instrument_anthropic()
AWS Bedrock    Auto-patched       via init() only
Google Gemini  Auto-patched       via init() only
Mistral AI     Auto-patched       via init() only
Cohere         Auto-patched       via init() only
LiteLLM        Auto-patched       via init() only
LangChain      -                  LangChainCallbackHandler
CrewAI         -                  CrewAITracker
AutoGen        -                  AutoGenTracker
Google ADK     -                  GoogleADKTracker

Development

pip install -e ".[dev]"
pytest

License

MIT - see LICENSE for details.

Download files

Source Distribution

agensights-0.6.0.tar.gz (24.5 kB)


Built Distribution


agensights-0.6.0-py3-none-any.whl (22.9 kB)


File details

Details for the file agensights-0.6.0.tar.gz.

File metadata

  • Download URL: agensights-0.6.0.tar.gz
  • Upload date:
  • Size: 24.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.0

File hashes

Hashes for agensights-0.6.0.tar.gz

Algorithm    Hash digest
SHA256       7b7f33c030894502aebbef66546db3b6f44679af9fc10edd60db09e81e9cb224
MD5          63e634103f702ec193192cbcd52e0197
BLAKE2b-256  ccb470a3b409c1526226c7153e9be95a54eb199f31dfac87fbe8fe997095fdcf


File details

Details for the file agensights-0.6.0-py3-none-any.whl.

File metadata

  • Download URL: agensights-0.6.0-py3-none-any.whl
  • Upload date:
  • Size: 22.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.0

File hashes

Hashes for agensights-0.6.0-py3-none-any.whl

Algorithm    Hash digest
SHA256       ff190b99bcb55929adb2d1ec7b37b811d397f776f4bcf79984bd9acf14e9ec38
MD5          ee5e921bfffd5baf5c19f30a4b8cabf9
BLAKE2b-256  b1245f3ea547f2bfe3b0f9de8bacd763a397d8f1d5b519931489f40ab4dc9930

