
AgentLens Python SDK

Python 3.9+ · MIT License

Agent observability that traces decisions, not just API calls.

What is AgentLens?

AgentLens is an observability SDK for AI agents. Unlike generic LLM tracing tools that only capture request/response pairs, AgentLens captures the decision points your agent makes: which tool it chose, what alternatives it considered, and why. This gives you a decision tree view of agent behavior, not just a flat log of API calls.

Quick Start

pip install vectry-agentlens

import agentlens

# Initialize once at startup
agentlens.init(api_key="your-api-key")

# Trace any function with a decorator
@agentlens.trace(name="research-agent", tags=["research"])
def research(topic: str) -> str:
    # Your agent logic here
    return f"Results for: {topic}"

research("quantum computing")

The @trace decorator works with both sync and async functions. Traces are batched and sent to the AgentLens API automatically.
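Supporting both calling styles typically means dispatching on the wrapped function's type. The sketch below is purely illustrative (`toy_trace` and `captured` are hypothetical names, not AgentLens internals) and shows one common way to time both sync and async functions:

```python
import asyncio
import functools
import inspect
import time

captured = []  # stand-in for a real span buffer; illustrative only

def toy_trace(name):
    """Toy decorator that times both sync and async functions."""
    def decorator(fn):
        if inspect.iscoroutinefunction(fn):
            @functools.wraps(fn)
            async def async_wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return await fn(*args, **kwargs)
                finally:
                    captured.append((name, (time.perf_counter() - start) * 1000))
            return async_wrapper

        @functools.wraps(fn)
        def sync_wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                captured.append((name, (time.perf_counter() - start) * 1000))
        return sync_wrapper
    return decorator

@toy_trace("sync-step")
def add(a, b):
    return a + b

@toy_trace("async-step")
async def fetch():
    await asyncio.sleep(0)
    return "ok"

add(1, 2)
asyncio.run(fetch())
```

The `finally` blocks ensure a span is recorded even when the wrapped function raises.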

You can also use trace as a context manager:

with agentlens.trace(name="my-operation", session_id="user-123"):
    # Everything inside is traced
    result = do_work()
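A single object that works both as a decorator and as a `with` block is idiomatically built on `contextlib.ContextDecorator`. The following is a self-contained sketch of that pattern (`TraceSpan` is a hypothetical name, not the SDK's class):

```python
import contextlib
import time

class TraceSpan(contextlib.ContextDecorator):
    """Toy span usable both as @decorator and as a `with` block."""
    def __init__(self, name):
        self.name = name
        self.duration_ms = None

    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.duration_ms = (time.perf_counter() - self._start) * 1000
        return False  # propagate exceptions to the caller

@TraceSpan(name="work")        # decorator form
def work():
    return "done"

with TraceSpan(name="sub-task") as ctx:   # context-manager form
    pass
```

Returning `False` from `__exit__` is what lets exceptions inside a traced block surface normally instead of being swallowed.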

Integrations

OpenAI

Auto-capture every chat.completions.create call with token counts, cost estimation, and tool-call decisions.

pip install vectry-agentlens[openai]

import openai
import agentlens
from agentlens.integrations.openai import wrap_openai

agentlens.init(api_key="your-api-key")

client = openai.OpenAI()
client = wrap_openai(client)

# All calls are now traced automatically
@agentlens.trace(name="assistant")
def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

ask("What is the capital of France?")

wrap_openai instruments the client in-place. For each call it creates an LLM span with:

  • Model name, temperature, and max_tokens
  • Token usage (prompt, completion, total)
  • Estimated cost in USD (built-in pricing for GPT-4, GPT-4o, GPT-3.5-turbo variants)
  • Automatic TOOL_SELECTION decision points when the model invokes function/tool calls

Streaming is also supported transparently.
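Estimating cost from the reported token usage is a simple rate lookup. The sketch below uses made-up per-million-token rates; the SDK's built-in pricing table may differ, and `estimate_cost_usd` is a hypothetical helper, not the SDK's API:

```python
# Illustrative per-1M-token prices in USD; NOT the SDK's built-in table.
PRICING = {
    "gpt-4o": {"prompt": 2.50, "completion": 10.00},
    "gpt-3.5-turbo": {"prompt": 0.50, "completion": 1.50},
}

def estimate_cost_usd(model, prompt_tokens, completion_tokens):
    """Estimate call cost from token usage; None for unknown models."""
    rates = PRICING.get(model)
    if rates is None:
        return None
    return (prompt_tokens * rates["prompt"]
            + completion_tokens * rates["completion"]) / 1_000_000
```

Returning `None` for unknown models (rather than guessing a rate) keeps cost dashboards honest when new model names appear.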

LangChain

Drop in a callback handler to trace chains, agents, LLM calls, and tool invocations.

pip install vectry-agentlens[langchain]

import agentlens
from agentlens.integrations.langchain import AgentLensCallbackHandler

agentlens.init(api_key="your-api-key")

handler = AgentLensCallbackHandler(
    trace_name="langchain-agent",
    tags=["production"],
    session_id="user-456",
)

# Pass the handler to any LangChain chain or agent
result = chain.invoke(
    {"input": "Summarize this document"},
    config={"callbacks": [handler]},
)

The handler automatically creates spans for LLM calls, tool calls, and chain execution. Agent tool selections are logged as TOOL_SELECTION decision points.

It also works inside an existing trace context:

@agentlens.trace(name="my-pipeline")
def run_pipeline(query: str):
    handler = AgentLensCallbackHandler()
    return chain.invoke({"input": query}, config={"callbacks": [handler]})
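Internally, a callback handler of this kind opens a span on each `*_start` hook and closes it on the matching `*_end`, keyed by run ID. The duck-typed sketch below uses deliberately simplified method signatures (LangChain's real hooks take more arguments) and does not depend on langchain at all:

```python
import time

class ToySpanRecorder:
    """Sketch of a callback handler's span bookkeeping.

    Method names mirror LangChain-style hooks, but the signatures are
    simplified for illustration.
    """
    def __init__(self):
        self.spans = []
        self._open = {}  # run_id -> in-progress span

    def on_llm_start(self, run_id, model):
        self._open[run_id] = {
            "type": "llm",
            "model": model,
            "start": time.perf_counter(),
        }

    def on_llm_end(self, run_id):
        span = self._open.pop(run_id)
        span["duration_ms"] = (time.perf_counter() - span["start"]) * 1000
        self.spans.append(span)

recorder = ToySpanRecorder()
recorder.on_llm_start("run-1", model="gpt-4o")
recorder.on_llm_end("run-1")
```

Keying open spans by run ID is what lets a single handler track concurrent chains without mixing up their timings.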

Custom Agents

For any agent framework (or your own), use log_decision() to record decision points manually.

import agentlens
from agentlens import log_decision

agentlens.init(api_key="your-api-key")

@agentlens.trace(name="routing-agent")
def route_request(query: str) -> str:
    # Your routing logic
    chosen_agent = "research-agent"

    log_decision(
        type="ROUTING",
        chosen={"name": chosen_agent, "confidence": 0.92},
        alternatives=[
            {"name": "support-agent", "confidence": 0.45, "reason_rejected": "not a support query"},
            {"name": "sales-agent", "confidence": 0.12, "reason_rejected": "no purchase intent"},
        ],
        reasoning="Query contains research keywords",
        context_snapshot={"tokens_used": 1200, "tokens_available": 6800},
        cost_usd=0.003,
        duration_ms=45,
    )

    return dispatch(chosen_agent, query)

Decision types include: TOOL_SELECTION, ROUTING, RETRY, ESCALATION, MEMORY_RETRIEVAL, PLANNING, and CUSTOM.

API Reference

agentlens.init()

Initialize the SDK. Call once at application startup.

agentlens.init(
    api_key="your-api-key",           # Required. Your AgentLens API key.
    endpoint="https://...",            # API endpoint (default: https://agentlens.vectry.tech)
    flush_interval=5.0,               # Seconds between batch flushes (default: 5.0)
    max_batch_size=10,                 # Traces per batch before auto-flush (default: 10)
    enabled=True,                      # Set False to disable sending (e.g., in tests)
)

agentlens.trace()

Decorator or context manager that creates a trace (or a nested span if already inside a trace).

# As decorator
@agentlens.trace(name="my-agent", tags=["v2"], session_id="user-789", metadata={"env": "prod"})
def my_agent():
    ...

# As context manager
with agentlens.trace(name="sub-task") as ctx:
    ...

Parameter    Type        Description
name         str         Name for the trace or span
tags         list[str]   Tags for filtering in the dashboard
session_id   str         Group traces by user session
metadata     dict        Arbitrary key-value metadata

agentlens.log_decision()

Record a decision point inside an active trace.

agentlens.log_decision(
    type="TOOL_SELECTION",
    chosen={"name": "search", "confidence": 0.95},
    alternatives=[{"name": "calculator", "confidence": 0.3}],
    reasoning="User asked a factual question",
    context_snapshot={"tokens_used": 500},
    cost_usd=0.001,
    duration_ms=23,
)

Parameter          Type         Description
type               str          Decision type (see types above)
chosen             dict         The selected option (name, confidence, etc.)
alternatives       list[dict]   Rejected options with reasons
reasoning          str          Why this option was chosen
context_snapshot   dict         State at decision time (tokens, memory, etc.)
cost_usd           float        Cost of this decision in USD
duration_ms        int          Time taken to make the decision

wrap_openai()

Instrument an OpenAI client for automatic tracing.

from agentlens.integrations.openai import wrap_openai

client = wrap_openai(openai.OpenAI())

Returns the same client instance with chat.completions.create wrapped. All calls automatically generate LLM spans and tool-call decision points.
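The in-place wrapping mechanics can be illustrated with a stand-in client. Everything here (`toy_wrap`, `FakeClient`, the `calls` list) is hypothetical scaffolding showing only the pattern of replacing a bound method and returning the same object:

```python
import functools

class FakeCompletions:
    def create(self, **kwargs):
        return {"model": kwargs.get("model"), "content": "hi"}

class FakeChat:
    def __init__(self):
        self.completions = FakeCompletions()

class FakeClient:
    def __init__(self):
        self.chat = FakeChat()

calls = []  # stand-in for span recording

def toy_wrap(client):
    """Replace the bound `create` method; return the same client object."""
    original = client.chat.completions.create

    @functools.wraps(original)
    def traced_create(**kwargs):
        calls.append(kwargs.get("model"))  # a real SDK would open a span here
        return original(**kwargs)

    client.chat.completions.create = traced_create
    return client

client = FakeClient()
assert toy_wrap(client) is client  # same object, now instrumented
client.chat.completions.create(model="gpt-4o", messages=[])
```

Because the wrapper delegates to the captured `original`, callers see identical return values; only the side-effectful recording is added.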

AgentLensCallbackHandler

LangChain callback handler for automatic tracing.

from agentlens.integrations.langchain import AgentLensCallbackHandler

handler = AgentLensCallbackHandler(
    trace_name="my-chain",     # Trace name (if no active trace exists)
    tags=["prod"],             # Optional tags
    session_id="user-123",     # Optional session ID
)

Architecture

Your Agent Code
    │
    ▼
AgentLens SDK  ──  @trace, log_decision(), wrap_openai()
    │
    ▼
Batched HTTP Transport  ──  Collects traces, flushes every 5s or 10 traces
    │
    ▼
AgentLens API  ──  https://agentlens.vectry.tech/api/traces
    │
    ▼
Dashboard  ──  Decision trees, analytics, real-time streaming

The SDK is lightweight and non-blocking: traces are serialized and batched on a background thread, so tracing adds negligible overhead to your agent's hot path.
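A batched background transport of this shape can be sketched with a queue and a worker thread. `ToyTransport` is illustrative, not the SDK's implementation; it assumes only a `send(batch)` callable and mirrors the documented flush triggers (interval elapsed or batch full):

```python
import threading
import time
from queue import Empty, Queue

class ToyTransport:
    """Sketch of non-blocking delivery: a daemon thread drains a queue and
    flushes every `flush_interval` seconds or `max_batch_size` items."""

    def __init__(self, send, flush_interval=5.0, max_batch_size=10):
        self.send = send
        self.flush_interval = flush_interval
        self.max_batch_size = max_batch_size
        self.queue = Queue()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def enqueue(self, trace):
        """Called from agent code; never blocks on network I/O."""
        self.queue.put(trace)

    def _run(self):
        batch = []
        deadline = time.monotonic() + self.flush_interval
        while not self._stop.is_set():
            try:
                batch.append(self.queue.get(timeout=0.05))
            except Empty:
                pass
            if batch and (len(batch) >= self.max_batch_size
                          or time.monotonic() >= deadline):
                self.send(list(batch))
                batch.clear()
                deadline = time.monotonic() + self.flush_interval
        # final drain so nothing is lost on shutdown
        while not self.queue.empty():
            batch.append(self.queue.get_nowait())
        if batch:
            self.send(list(batch))

    def close(self):
        self._stop.set()
        self._thread.join()

sent_batches = []
transport = ToyTransport(sent_batches.append,
                         flush_interval=60.0, max_batch_size=2)
transport.enqueue({"name": "trace-1"})
transport.enqueue({"name": "trace-2"})
transport.close()  # both traces are delivered in a single batch
```

The final drain in `_run` is the important detail: without it, traces enqueued just before shutdown would be silently dropped.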

Dashboard

View your traces at agentlens.vectry.tech:

  • Decision Trees - Visualize the full decision path of every agent run
  • Analytics - Token usage, cost breakdowns, latency percentiles
  • Real-time Streaming - Watch agent decisions as they happen
  • Session Grouping - Track multi-turn conversations by session ID

Development

cd packages/sdk-python
pip install -e ".[all]"  # installs openai + langchain extras
python -m pytest tests/ -v

License

MIT - Built by Vectry


