
AgentLens Python SDK

PyPI version Python 3.9+ License: MIT

Agent observability that traces decisions, not just API calls.

What is AgentLens?

AgentLens is an observability SDK for AI agents. Unlike generic LLM tracing tools that only capture request/response pairs, AgentLens captures the decision points your agent makes: which tool it chose, what alternatives it considered, and why. This gives you a decision tree view of agent behavior, not just a flat log of API calls.
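Conceptually, each agent run becomes a tree of spans and decision points rather than a flat log. A toy model of that idea (an illustrative schema, not the SDK's actual data model):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionNode:
    """Toy node in an agent's decision tree (hypothetical schema for illustration)."""
    name: str
    chosen: Optional[dict] = None      # the option the agent picked, if any
    alternatives: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def render(self, depth: int = 0) -> list:
        """Return the tree as indented lines, one per node."""
        label = self.name if not self.chosen else f"{self.name} -> {self.chosen['name']}"
        lines = ["  " * depth + label]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

root = DecisionNode("research-agent", children=[
    DecisionNode("tool-selection",
                 chosen={"name": "web_search"},
                 alternatives=[{"name": "calculator"}]),
])
print("\n".join(root.render()))
# research-agent
#   tool-selection -> web_search
```

The dashboard's decision-tree view renders something like this structure, with the rejected alternatives attached to each decision node.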

Quick Start

First, create an account at agentlens.vectry.tech/register and generate an API key in Settings > API Keys in the dashboard.

pip install vectry-agentlens

import agentlens

# Initialize with the API key from Settings > API Keys
agentlens.init(api_key="your-api-key")

# Trace any function with a decorator
@agentlens.trace(name="research-agent", tags=["research"])
def research(topic: str) -> str:
    # Your agent logic here
    return f"Results for: {topic}"

research("quantum computing")

The @trace decorator works with both sync and async functions. Traces are batched and sent to the AgentLens API automatically.
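Dual sync/async support is typically implemented by dispatching on whether the wrapped function is a coroutine. A minimal sketch of that pattern (not the actual SDK internals):

```python
import asyncio
import functools
import inspect

def trace(name):
    """Toy decorator that transparently handles both sync and async functions."""
    def decorator(func):
        if inspect.iscoroutinefunction(func):
            @functools.wraps(func)
            async def async_wrapper(*args, **kwargs):
                # A real tracer would open a span here and close it after the await
                return await func(*args, **kwargs)
            return async_wrapper

        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs):
            # Same span lifecycle, synchronous path
            return func(*args, **kwargs)
        return sync_wrapper
    return decorator

@trace(name="demo")
def sync_fn(x):
    return x * 2

@trace(name="demo")
async def async_fn(x):
    return x * 3

print(sync_fn(2))                # 4
print(asyncio.run(async_fn(2)))  # 6
```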

You can also use trace as a context manager:

with agentlens.trace(name="my-operation", session_id="user-123"):
    # Everything inside is traced
    result = do_work()

Integrations

OpenAI

Auto-capture every chat.completions.create call with token counts, cost estimation, and tool-call decisions.

pip install vectry-agentlens[openai]

import openai
import agentlens
from agentlens.integrations.openai import wrap_openai

agentlens.init(api_key="your-api-key")

client = openai.OpenAI()
client = wrap_openai(client)

# All calls are now traced automatically
@agentlens.trace(name="assistant")
def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

ask("What is the capital of France?")

wrap_openai instruments the client in-place. For each call it creates an LLM span with:

  • Model name, temperature, and max_tokens
  • Token usage (prompt, completion, total)
  • Estimated cost in USD (built-in pricing for GPT-4, GPT-4o, GPT-3.5-turbo variants)
  • Automatic TOOL_SELECTION decision points when the model invokes function/tool calls

Streaming is also supported transparently.
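Transparent streaming support generally works by wrapping the returned iterator so chunks are accumulated as the caller consumes them, with the span finalized once the stream is exhausted. A rough sketch of the idea (an assumption about the mechanism, not the integration's real code):

```python
class TracedStream:
    """Wraps a chunk iterator; records the full content once it is consumed."""

    def __init__(self, stream, on_complete):
        self._stream = iter(stream)
        self._chunks = []
        self._on_complete = on_complete

    def __iter__(self):
        return self

    def __next__(self):
        try:
            chunk = next(self._stream)
        except StopIteration:
            # Stream exhausted: finalize the span with the accumulated text
            self._on_complete("".join(self._chunks))
            raise
        self._chunks.append(chunk)
        return chunk

captured = []
stream = TracedStream(["Hel", "lo"], on_complete=captured.append)
print("".join(stream))  # consumes the stream normally; prints "Hello"
print(captured)         # ['Hello'] — recorded without buffering ahead of the caller
```

The caller iterates exactly as before; the wrapper only observes chunks as they pass through.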

LangChain

Drop in a callback handler to trace chains, agents, LLM calls, and tool invocations.

pip install vectry-agentlens[langchain]

import agentlens
from agentlens.integrations.langchain import AgentLensCallbackHandler

agentlens.init(api_key="your-api-key")

handler = AgentLensCallbackHandler(
    trace_name="langchain-agent",
    tags=["production"],
    session_id="user-456",
)

# Pass the handler to any LangChain chain or agent
result = chain.invoke(
    {"input": "Summarize this document"},
    config={"callbacks": [handler]},
)

The handler automatically creates spans for LLM calls, tool calls, and chain execution. Agent tool selections are logged as TOOL_SELECTION decision points.

It also works inside an existing trace context:

@agentlens.trace(name="my-pipeline")
def run_pipeline(query: str):
    handler = AgentLensCallbackHandler()
    return chain.invoke({"input": query}, config={"callbacks": [handler]})

Custom Agents

For any agent framework (or your own), use log_decision() to record decision points manually.

import agentlens
from agentlens import log_decision

agentlens.init(api_key="your-api-key")

@agentlens.trace(name="routing-agent")
def route_request(query: str) -> str:
    # Your routing logic
    chosen_agent = "research-agent"

    log_decision(
        type="ROUTING",
        chosen={"name": chosen_agent, "confidence": 0.92},
        alternatives=[
            {"name": "support-agent", "confidence": 0.45, "reason_rejected": "not a support query"},
            {"name": "sales-agent", "confidence": 0.12, "reason_rejected": "no purchase intent"},
        ],
        reasoning="Query contains research keywords",
        context_snapshot={"tokens_used": 1200, "tokens_available": 6800},
        cost_usd=0.003,
        duration_ms=45,
    )

    return dispatch(chosen_agent, query)

Decision types include: TOOL_SELECTION, ROUTING, RETRY, ESCALATION, MEMORY_RETRIEVAL, PLANNING, and CUSTOM.

API Reference

agentlens.init()

Initialize the SDK. Call once at application startup.

agentlens.init(
    api_key="your-api-key",    # Required. Create at Settings > API Keys in the dashboard.
    endpoint="https://...",    # API endpoint (default: https://agentlens.vectry.tech)
    flush_interval=5.0,        # Seconds between batch flushes (default: 5.0)
    max_batch_size=10,         # Traces per batch before auto-flush (default: 10)
    enabled=True,              # Set False to disable sending (e.g., in tests)
)

You can also set the API key via the AGENTLENS_API_KEY environment variable instead of passing it directly.
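The fallback presumably follows the usual pattern of consulting `os.environ` when no key is passed explicitly. A hedged sketch of such a lookup (the real `init()` may differ):

```python
import os

def resolve_api_key(api_key=None):
    """Return the explicit key if given, else fall back to AGENTLENS_API_KEY."""
    key = api_key or os.environ.get("AGENTLENS_API_KEY")
    if not key:
        raise ValueError("No API key: pass api_key= or set AGENTLENS_API_KEY")
    return key

os.environ["AGENTLENS_API_KEY"] = "key-from-env"
print(resolve_api_key())                # key-from-env
print(resolve_api_key("explicit-key"))  # explicit argument takes precedence
```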

agentlens.trace()

Decorator or context manager that creates a trace (or a nested span if already inside a trace).

# As decorator
@agentlens.trace(name="my-agent", tags=["v2"], session_id="user-789", metadata={"env": "prod"})
def my_agent():
    ...

# As context manager
with agentlens.trace(name="sub-task") as ctx:
    ...

Parameter    Type       Description
name         str        Name for the trace or span
tags         list[str]  Tags for filtering in the dashboard
session_id   str        Group traces by user session
metadata     dict       Arbitrary key-value metadata

agentlens.log_decision()

Record a decision point inside an active trace.

agentlens.log_decision(
    type="TOOL_SELECTION",
    chosen={"name": "search", "confidence": 0.95},
    alternatives=[{"name": "calculator", "confidence": 0.3}],
    reasoning="User asked a factual question",
    context_snapshot={"tokens_used": 500},
    cost_usd=0.001,
    duration_ms=23,
)
Parameter         Type        Description
type              str         Decision type (see types above)
chosen            dict        The selected option (name, confidence, etc.)
alternatives      list[dict]  Rejected options with reasons
reasoning         str         Why this option was chosen
context_snapshot  dict        State at decision time (tokens, memory, etc.)
cost_usd          float       Cost of this decision in USD
duration_ms       int         Time taken to make the decision

wrap_openai()

Instrument an OpenAI client for automatic tracing.

from agentlens.integrations.openai import wrap_openai

client = wrap_openai(openai.OpenAI())

Returns the same client instance with chat.completions.create wrapped. All calls automatically generate LLM spans and tool-call decision points.
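In-place instrumentation of this kind typically amounts to replacing the bound `create` method with a recording wrapper. A simplified illustration using a dummy client (not the actual integration code):

```python
import functools

class DummyCompletions:
    """Stand-in for a client's completions endpoint."""
    def create(self, **kwargs):
        return {"model": kwargs.get("model"), "ok": True}

class DummyClient:
    """Stand-in for an API client object."""
    def __init__(self):
        self.completions = DummyCompletions()

calls = []

def wrap_client(client):
    """Replace completions.create with a recording wrapper, in place."""
    original = client.completions.create

    @functools.wraps(original)
    def traced_create(**kwargs):
        calls.append(kwargs)       # a real wrapper would open an LLM span here
        return original(**kwargs)  # then delegate to the original method

    client.completions.create = traced_create
    return client                  # same instance, now instrumented

client = wrap_client(DummyClient())
client.completions.create(model="gpt-4o")
print(calls)  # [{'model': 'gpt-4o'}]
```

Because the same instance is returned, code that already holds a reference to the client is traced too.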

AgentLensCallbackHandler

LangChain callback handler for automatic tracing.

from agentlens.integrations.langchain import AgentLensCallbackHandler

handler = AgentLensCallbackHandler(
    trace_name="my-chain",     # Trace name (if no active trace exists)
    tags=["prod"],             # Optional tags
    session_id="user-123",     # Optional session ID
)

Architecture

Your Agent Code
    │
    ▼
AgentLens SDK  ──  @trace, log_decision(), wrap_openai()
    │
    ▼
Batched HTTP Transport  ──  Collects traces, flushes every 5s or 10 traces
    │
    ▼
AgentLens API  ──  https://agentlens.vectry.tech/api/traces
    │
    ▼
Dashboard  ──  Decision trees, analytics, real-time streaming

The SDK is lightweight and non-blocking: traces are serialized and batched on a background thread, so tracing adds minimal latency to your agent code.
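A batched, non-blocking transport along these lines can be sketched with a queue and a worker thread that flushes on either a size or a time threshold (illustrative only; the SDK's actual transport may differ):

```python
import queue
import threading
import time

class BatchTransport:
    """Collects items; flushes every `interval` seconds or `batch_size` items."""

    def __init__(self, send, batch_size=10, interval=5.0):
        self._send = send
        self._batch_size = batch_size
        self._interval = interval
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def enqueue(self, trace):
        self._queue.put(trace)  # unbounded queue: never blocks the caller

    def _run(self):
        batch = []
        deadline = time.monotonic() + self._interval
        while True:
            timeout = max(0.0, deadline - time.monotonic())
            try:
                batch.append(self._queue.get(timeout=timeout))
            except queue.Empty:
                pass  # timed out waiting; fall through to flush/deadline checks
            now = time.monotonic()
            if len(batch) >= self._batch_size or (batch and now >= deadline):
                self._send(batch)  # a real transport would POST the batch here
                batch = []
            if now >= deadline:
                deadline = now + self._interval

sent = []
transport = BatchTransport(sent.append, batch_size=3, interval=0.2)
for i in range(3):
    transport.enqueue({"trace": i})
time.sleep(0.5)
print(sum(len(b) for b in sent))  # 3 — all traces delivered in batches
```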

Dashboard

View your traces at agentlens.vectry.tech (login required):

  • Decision Trees - Visualize the full decision path of every agent run
  • Analytics - Token usage, cost breakdowns, latency percentiles
  • Real-time Streaming - Watch agent decisions as they happen
  • Session Grouping - Track multi-turn conversations by session ID

Billing

Each trace counts as one session for billing. AgentLens cloud offers three tiers:

Plan     Price      Sessions
Free     $0         20 sessions/day
Starter  $5/month   1,000 sessions/month
Pro      $20/month  100,000 sessions/month

Manage your subscription in Settings > Billing. Self-hosted instances have no session limits.

Development

cd packages/sdk-python
pip install -e ".[all]"  # installs openai + langchain extras
python -m pytest tests/ -v

License

MIT - Built by Vectry
