AgentLens Python SDK

PyPI | Python 3.9+ | License: MIT

Agent observability that traces decisions, not just API calls.

What is AgentLens?

AgentLens is an observability SDK for AI agents. Unlike generic LLM tracing tools that only capture request/response pairs, AgentLens captures the decision points your agent makes: which tool it chose, what alternatives it considered, and why. This gives you a decision tree view of agent behavior, not just a flat log of API calls.

Quick Start

pip install vectry-agentlens

import agentlens

# Initialize once at startup
agentlens.init(api_key="your-api-key")

# Trace any function with a decorator
@agentlens.trace(name="research-agent", tags=["research"])
def research(topic: str) -> str:
    # Your agent logic here
    return f"Results for: {topic}"

research("quantum computing")

The @trace decorator works with both sync and async functions. Traces are batched and sent to the AgentLens API automatically.

You can also use trace as a context manager:

with agentlens.trace(name="my-operation", session_id="user-123"):
    # Everything inside is traced
    result = do_work()

Integrations

OpenAI

Auto-capture every chat.completions.create call with token counts, cost estimation, and tool-call decisions.

pip install vectry-agentlens[openai]

import openai
import agentlens
from agentlens.integrations.openai import wrap_openai

agentlens.init(api_key="your-api-key")

client = openai.OpenAI()
client = wrap_openai(client)

# All calls are now traced automatically
@agentlens.trace(name="assistant")
def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

ask("What is the capital of France?")

wrap_openai instruments the client in-place. For each call it creates an LLM span with:

  • Model name, temperature, and max_tokens
  • Token usage (prompt, completion, total)
  • Estimated cost in USD (built-in pricing for GPT-4, GPT-4o, GPT-3.5-turbo variants)
  • Automatic TOOL_SELECTION decision points when the model invokes function/tool calls

Streaming is also supported transparently.
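To illustrate how a cost figure is derived from token usage, here is a standalone sketch. The per-1K-token rates below are hypothetical placeholders, not the SDK's built-in pricing table:

```python
# Illustrative cost estimation from token counts.
# The rates here are hypothetical placeholders, not the SDK's pricing table.
PRICING_PER_1K = {
    "gpt-4o": {"prompt": 0.005, "completion": 0.015},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    rates = PRICING_PER_1K[model]
    return (prompt_tokens / 1000) * rates["prompt"] + \
           (completion_tokens / 1000) * rates["completion"]

print(round(estimate_cost("gpt-4o", 1000, 500), 4))  # 0.0125
```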

LangChain

Drop in a callback handler to trace chains, agents, LLM calls, and tool invocations.

pip install vectry-agentlens[langchain]

import agentlens
from agentlens.integrations.langchain import AgentLensCallbackHandler

agentlens.init(api_key="your-api-key")

handler = AgentLensCallbackHandler(
    trace_name="langchain-agent",
    tags=["production"],
    session_id="user-456",
)

# Pass the handler to any LangChain chain or agent
result = chain.invoke(
    {"input": "Summarize this document"},
    config={"callbacks": [handler]},
)

The handler automatically creates spans for LLM calls, tool calls, and chain execution. Agent tool selections are logged as TOOL_SELECTION decision points.

It also works inside an existing trace context:

@agentlens.trace(name="my-pipeline")
def run_pipeline(query: str):
    handler = AgentLensCallbackHandler()
    return chain.invoke({"input": query}, config={"callbacks": [handler]})

Custom Agents

For any other agent framework, or a fully custom agent, use log_decision() to record decision points manually.

import agentlens
from agentlens import log_decision

agentlens.init(api_key="your-api-key")

@agentlens.trace(name="routing-agent")
def route_request(query: str) -> str:
    # Your routing logic
    chosen_agent = "research-agent"

    log_decision(
        type="ROUTING",
        chosen={"name": chosen_agent, "confidence": 0.92},
        alternatives=[
            {"name": "support-agent", "confidence": 0.45, "reason_rejected": "not a support query"},
            {"name": "sales-agent", "confidence": 0.12, "reason_rejected": "no purchase intent"},
        ],
        reasoning="Query contains research keywords",
        context_snapshot={"tokens_used": 1200, "tokens_available": 6800},
        cost_usd=0.003,
        duration_ms=45,
    )

    return dispatch(chosen_agent, query)

Decision types include: TOOL_SELECTION, ROUTING, RETRY, ESCALATION, MEMORY_RETRIEVAL, PLANNING, and CUSTOM.

API Reference

agentlens.init()

Initialize the SDK. Call once at application startup.

agentlens.init(
    api_key="your-api-key",           # Required. Your AgentLens API key.
    endpoint="https://...",            # API endpoint (default: https://agentlens.vectry.tech)
    flush_interval=5.0,               # Seconds between batch flushes (default: 5.0)
    max_batch_size=10,                 # Traces per batch before auto-flush (default: 10)
    enabled=True,                      # Set False to disable sending (e.g., in tests)
)

agentlens.trace()

Decorator or context manager that creates a trace (or a nested span if already inside a trace).

# As decorator
@agentlens.trace(name="my-agent", tags=["v2"], session_id="user-789", metadata={"env": "prod"})
def my_agent():
    ...

# As context manager
with agentlens.trace(name="sub-task") as ctx:
    ...

Parameter    Type       Description
name         str        Name for the trace or span
tags         list[str]  Tags for filtering in the dashboard
session_id   str        Group traces by user session
metadata     dict       Arbitrary key-value metadata

agentlens.log_decision()

Record a decision point inside an active trace.

agentlens.log_decision(
    type="TOOL_SELECTION",
    chosen={"name": "search", "confidence": 0.95},
    alternatives=[{"name": "calculator", "confidence": 0.3}],
    reasoning="User asked a factual question",
    context_snapshot={"tokens_used": 500},
    cost_usd=0.001,
    duration_ms=23,
)

Parameter         Type        Description
type              str         Decision type (see types above)
chosen            dict        The selected option (name, confidence, etc.)
alternatives      list[dict]  Rejected options with reasons
reasoning         str         Why this option was chosen
context_snapshot  dict        State at decision time (tokens, memory, etc.)
cost_usd          float       Cost of this decision in USD
duration_ms       int         Time taken to make the decision

wrap_openai()

Instrument an OpenAI client for automatic tracing.

from agentlens.integrations.openai import wrap_openai

client = wrap_openai(openai.OpenAI())

Returns the same client instance with chat.completions.create wrapped. All calls automatically generate LLM spans and tool-call decision points.

AgentLensCallbackHandler

LangChain callback handler for automatic tracing.

from agentlens.integrations.langchain import AgentLensCallbackHandler

handler = AgentLensCallbackHandler(
    trace_name="my-chain",     # Trace name (if no active trace exists)
    tags=["prod"],             # Optional tags
    session_id="user-123",     # Optional session ID
)

Architecture

Your Agent Code
    │
    ▼
AgentLens SDK  ──  @trace, log_decision(), wrap_openai()
    │
    ▼
Batched HTTP Transport  ──  Collects traces, flushes every 5s or 10 traces
    │
    ▼
AgentLens API  ──  https://agentlens.vectry.tech/api/traces
    │
    ▼
Dashboard  ──  Decision trees, analytics, real-time streaming

The SDK is lightweight and non-blocking: traces are serialized and batched on a background thread, so observability adds negligible latency to your agent code.
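The flush-on-size-or-interval policy can be sketched as follows. This BatchTransport is a simplified stand-in for the SDK's actual transport (which also flushes on a timer with no enqueue); send() replaces the HTTP POST:

```python
import threading
import time

class BatchTransport:
    """Simplified sketch of flush-on-size-or-interval batching.

    Not the SDK's actual transport: it only checks the interval on
    enqueue, and send() is a stand-in for the real HTTP POST.
    """

    def __init__(self, send, max_batch_size=10, flush_interval=5.0):
        self._send = send
        self._max = max_batch_size
        self._interval = flush_interval
        self._buf = []
        self._lock = threading.Lock()
        self._last_flush = time.monotonic()

    def enqueue(self, trace):
        with self._lock:
            self._buf.append(trace)
            due = (len(self._buf) >= self._max or
                   time.monotonic() - self._last_flush >= self._interval)
            if due:
                batch, self._buf = self._buf, []
                self._last_flush = time.monotonic()
            else:
                batch = None
        if batch:
            self._send(batch)  # outside the lock, so sending never blocks enqueuers

sent = []
t = BatchTransport(sent.append, max_batch_size=3, flush_interval=60)
for i in range(7):
    t.enqueue({"id": i})
print(len(sent), [len(b) for b in sent])  # 2 [3, 3]  (one trace still buffered)
```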

Dashboard

View your traces at agentlens.vectry.tech:

  • Decision Trees - Visualize the full decision path of every agent run
  • Analytics - Token usage, cost breakdowns, latency percentiles
  • Real-time Streaming - Watch agent decisions as they happen
  • Session Grouping - Track multi-turn conversations by session ID

Development

cd packages/sdk-python
pip install -e ".[all]"  # installs openai + langchain extras
python -m pytest tests/ -v

License

MIT - Built by Vectry
