# AgenSights Python SDK

Python SDK for AgenSights - AI Agent Observability.

Track LLM calls, tool invocations, and multi-step agent executions with zero-friction auto-instrumentation or manual tracking.

## Installation

```bash
pip install agensights
```

Or install from source:

```bash
pip install -e .
```
## Quick Start — Universal Init (Recommended)

One line at the top of your app patches every supported LLM provider automatically:

```python
import agensights

agensights.init(api_key="sk-dev-xxx")

# That's it. Every OpenAI, Anthropic, Bedrock, Google, Mistral,
# Cohere, and LiteLLM call is now tracked automatically.
```

You can also configure via environment variables (no code changes needed):

```bash
export AGENSIGHTS_API_KEY="sk-dev-xxx"
export AGENSIGHTS_BASE_URL="https://api.agensights.dev/api/v1"
```

```python
import agensights

agensights.init()  # picks up from env vars
```
## Auto-Instrumentation (Per-Client)

Wrap your LLM client once and every call is tracked automatically.

### OpenAI

```python
from openai import OpenAI
from agensights import instrument_openai

client = instrument_openai(
    OpenAI(api_key="sk-xxx"),
    agensights_api_key="sk-dev-xxx",
    agent_name="my-assistant",
)

# Every call is now automatically tracked
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Embeddings are tracked too
embeddings = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello world",
)
```
### Anthropic

```python
import anthropic
from agensights import instrument_anthropic

client = instrument_anthropic(
    anthropic.Anthropic(api_key="sk-ant-xxx"),
    agensights_api_key="sk-dev-xxx",
    agent_name="claude-agent",
)

# Automatically tracked
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
```
### LangChain

```python
from langchain_openai import ChatOpenAI
from agensights.integrations import LangChainCallbackHandler

handler = LangChainCallbackHandler(
    api_key="sk-dev-xxx",
    agent_name="langchain-agent",
)

llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

# All LLM and tool calls are tracked via callbacks
response = llm.invoke("Hello!")
```
## Agent Hierarchy Tracking

Track multi-agent workflows with automatic parent-child relationships:

```python
from agensights import instrument_openai
from openai import OpenAI

client = instrument_openai(OpenAI(), agensights_api_key="sk-dev-xxx")

with client.trace("find_laptop") as trace:
    with trace.agent("planner") as planner:
        with planner.agent("researcher") as researcher:
            with researcher.tool("web_search"):
                results = do_search("laptops")  # latency auto-measured

            # LLM call auto-captured under researcher agent
            summary = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": f"Summarize: {results}"}],
            )

        with planner.agent("writer") as writer:
            result = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Write recommendation"}],
            )
```
This produces a full trace tree in the dashboard with parent-child spans linked automatically.
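The parent-child linkage behind such a trace tree can be illustrated with a small standalone sketch. This is not the SDK's internals — the `SpanTree` class and its field names are invented for illustration — but it shows the core idea: each span records the id of the span that was open when it started, which is enough to rebuild the tree later.

```python
import time
import uuid
from contextlib import contextmanager

class SpanTree:
    """Toy model of nested span tracking: each finished span carries
    its parent's id, so a backend can reconstruct the tree."""

    def __init__(self):
        self.spans = []   # flat list of finished spans
        self._stack = []  # ids of currently open spans

    @contextmanager
    def span(self, name):
        span = {
            "id": str(uuid.uuid4()),
            "name": name,
            # the innermost open span (if any) becomes the parent
            "parent_id": self._stack[-1] if self._stack else None,
            "start": time.monotonic(),
        }
        self._stack.append(span["id"])
        try:
            yield span
        finally:
            self._stack.pop()
            span["latency_ms"] = (time.monotonic() - span["start"]) * 1000
            self.spans.append(span)

tree = SpanTree()
with tree.span("planner"):
    with tree.span("researcher"):
        with tree.span("web_search"):
            pass

# Spans finish innermost-first; parent_id links them back together
assert [s["name"] for s in tree.spans] == ["web_search", "researcher", "planner"]
```

The flat list plus `parent_id` representation is what lets a dashboard render arbitrarily deep agent/tool nesting without the client ever sending a tree structure over the wire.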
## Manual Tracking

For full control, use the AgenSights client directly.

### Single Calls

```python
from agensights import AgenSights

client = AgenSights(api_key="sk-prod-xxx")

# Track a single LLM call
client.track_llm(model="gpt-4o", input_tokens=100, output_tokens=50, latency_ms=300)

# Track a tool call
client.track_tool(tool_name="web_search", latency_ms=150)

# Always close when done
client.close()
```
### Tracing Multi-Step Executions

Use `client.trace()` to group related calls under a single trace:

```python
from agensights import AgenSights

client = AgenSights(api_key="sk-prod-xxx")

with client.trace("support_agent", workflow_id="ticket-456") as t:
    # Track an LLM call
    t.llm_call(model="gpt-4o", input_tokens=100, output_tokens=50, latency_ms=300)

    # Track a tool call
    t.tool_call(tool_name="web_search", latency_ms=150)

    # Use spans for automatic duration tracking
    with t.span("data_processing") as s:
        # ... your code here ...
        pass  # duration is recorded automatically

client.close()
```
### Nested Agent Spans

```python
with client.trace("orchestrator") as t:
    planner = t.agent("planner")

    researcher = planner.agent("researcher")  # sub-agent
    researcher.tool(name="search_api", latency_ms=150)
    researcher.llm_call(model="gpt-4o", input_tokens=100, output_tokens=50, latency_ms=300)

    writer = planner.agent("writer")
    writer.llm_call(model="claude-3-5-sonnet", input_tokens=200, output_tokens=100, latency_ms=400)
```
### Using the Client as a Context Manager

```python
with AgenSights(api_key="sk-prod-xxx") as client:
    client.track_llm(model="gpt-4o", input_tokens=100, output_tokens=50, latency_ms=300)
# Client is automatically closed and flushed
```
## Configuration

### Environment Variables

| Variable | Description |
|---|---|
| `AGENSIGHTS_API_KEY` | Your AgenSights API key (used when `api_key` is not passed) |
| `AGENSIGHTS_BASE_URL` | Backend API base URL (default: `https://api.agensights.com/api/v1`) |

### Client Parameters

| Parameter | Default | Description |
|---|---|---|
| `api_key` | `AGENSIGHTS_API_KEY` env var | Your AgenSights API key |
| `base_url` | `AGENSIGHTS_BASE_URL` env var | Backend API base URL |

### Auto-Instrumentation Parameters

| Parameter | Default | Description |
|---|---|---|
| `agensights_api_key` | `None` | API key (or pass `agensights_client` instead) |
| `agensights_client` | `None` | Pre-configured AgenSights instance |
| `agent_name` | `None` | Name to tag all events with |
| `base_url` | `None` | Override backend URL (falls back to env var) |
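The precedence the tables describe — an explicit argument wins, the environment variable is the fallback, and a missing key is an error — can be sketched in a few lines of plain Python. The `resolve_api_key` helper below is illustrative, not the SDK's actual resolution code:

```python
import os

def resolve_api_key(api_key=None):
    """Explicit argument wins; otherwise fall back to the environment."""
    key = api_key or os.environ.get("AGENSIGHTS_API_KEY")
    if key is None:
        raise ValueError("No API key: pass api_key= or set AGENSIGHTS_API_KEY")
    return key

os.environ["AGENSIGHTS_API_KEY"] = "sk-env-xxx"
assert resolve_api_key() == "sk-env-xxx"            # picked up from env
assert resolve_api_key("sk-direct") == "sk-direct"  # explicit argument wins
```

The same argument-over-environment pattern applies to `base_url`, which is why setting `AGENSIGHTS_BASE_URL` once lets you repoint every client without code changes.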
## Error Tracking

Errors are automatically captured during auto-instrumentation. For manual tracking:

```python
client.track_llm(
    model="gpt-4o",
    input_tokens=100,
    output_tokens=0,
    latency_ms=500,
    status="error",
    error_code="rate_limit",
)
```
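When tracking manually, a common pattern is to time the call yourself and translate any exception into the `status`/`error_code` fields shown above. The `timed_call` helper below is a hypothetical sketch (the event dict stands in for a `track_llm` call):

```python
import time

def timed_call(fn):
    """Run fn, measure latency, and build an event dict mirroring the
    status/error_code fields used by manual tracking."""
    start = time.monotonic()
    event = {"status": "success", "error_code": None}
    try:
        fn()
    except Exception as exc:
        # Map the exception type to an error code instead of re-raising
        event["status"] = "error"
        event["error_code"] = type(exc).__name__
    event["latency_ms"] = (time.monotonic() - start) * 1000
    return event

def failing_call():
    raise TimeoutError("upstream rate limit")

event = timed_call(failing_call)
assert event["status"] == "error"
assert event["error_code"] == "TimeoutError"
```

Recording latency even for failed calls matters: timeouts and rate limits often dominate an agent's wall-clock time, and dropping them from the trace hides exactly the calls you most need to see.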
## How It Works

- Universal init (`agensights.init()`) patches all supported LLM providers at the module level.
- Auto-instrumentation wraps LLM client methods (e.g., `chat.completions.create`) to capture model, tokens, latency, and errors transparently.
- Events are buffered locally and sent in batches to the AgenSights backend.
- The buffer flushes automatically every 5 seconds or when 100 events are accumulated.
- Call `client.flush()` to force an immediate send.
- Call `client.close()` to flush and release resources.
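The batching policy above can be modeled in a few lines. This is an illustrative standalone model, not the SDK's implementation — in the real client, flushing also runs on a background timer and the batch goes out as an HTTP request rather than into a list:

```python
import time

class EventBuffer:
    """Toy model of the batching policy: flush once `max_events`
    accumulate or `interval` seconds have passed since the last flush."""

    def __init__(self, max_events=100, interval=5.0):
        self.max_events = max_events
        self.interval = interval
        self.events = []
        self.batches_sent = []
        self._last_flush = time.monotonic()

    def add(self, event):
        self.events.append(event)
        if (len(self.events) >= self.max_events
                or time.monotonic() - self._last_flush >= self.interval):
            self.flush()

    def flush(self):
        if self.events:
            self.batches_sent.append(list(self.events))  # stand-in for an HTTP POST
            self.events.clear()
        self._last_flush = time.monotonic()

buf = EventBuffer(max_events=3, interval=60.0)
for i in range(7):
    buf.add({"n": i})
buf.flush()  # like client.close(): send whatever remains
assert [len(b) for b in buf.batches_sent] == [3, 3, 1]
```

The final `flush()` is why closing the client matters: without it, events that never reached the size or time threshold would sit in the buffer and be lost when the process exits.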
## Supported Providers

| Provider | `agensights.init()` | `instrument_*()` |
|---|---|---|
| OpenAI | Auto-patched | `instrument_openai()` |
| Anthropic | Auto-patched | `instrument_anthropic()` |
| AWS Bedrock | Auto-patched | via `init()` |
| Google Gemini | Auto-patched | via `init()` |
| Mistral AI | Auto-patched | via `init()` |
| Cohere | Auto-patched | via `init()` |
| LiteLLM | Auto-patched | via `init()` |
| LangChain | — | `LangChainCallbackHandler` |
| CrewAI | — | `CrewAITracker` |
| AutoGen | — | `AutoGenTracker` |
| Google ADK | — | `GoogleADKTracker` |
## Development

```bash
pip install -e ".[dev]"
pytest
```

## License

MIT - see LICENSE for details.