


Visibe SDK for Python

Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using CrewAI, LangChain, LangGraph, AutoGen, Anthropic, or direct OpenAI calls.



🚀 Quick Start

pip install visibe

Get your API key at app.visibe.ai → Settings → API Keys, then add one line to your app:

import visibe

visibe.init(api_key="sk_live_your_key_here")

That's it. Every OpenAI, Anthropic, LangChain, LangGraph, CrewAI, AutoGen, and Bedrock call is automatically traced from this point on — no wrappers, no config changes.

The API key can also be set via the VISIBE_API_KEY environment variable so you don't need to pass it in code:

export VISIBE_API_KEY=sk_live_your_key_here

import visibe
visibe.init()

🧩 Supported Frameworks

OpenAI, Anthropic, LangChain, LangGraph, CrewAI, AutoGen, and AWS Bedrock are instrumented automatically (see the integration examples below).

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
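
Since these providers speak the OpenAI API, a plausible setup routes them through the standard OpenAI client. A minimal sketch using Groq — the base_url, model name, and GROQ_API_KEY env var belong to Groq, not to Visibe:

import os

import visibe
from openai import OpenAI

visibe.init()

# Any OpenAI-compatible endpoint is used through the same OpenAI client.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],
)
response = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "Hello!"}],
)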


⚙️ Configuration

Option          Type       Default       Description
api_key         str                      Your Visibe API key. Falls back to the VISIBE_API_KEY env var.
redact_content  bool       False         Omit prompt and completion text from all traces; only metadata is sent (tokens, cost, duration, model, errors).
frameworks      list[str]  All detected  Limit auto-instrumentation to specific frameworks, e.g. ["openai", "langgraph"].
debug           bool       False         Enable debug logging.
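
A sketch combining these options (the key is a placeholder and the framework list is illustrative):

import visibe

# Trace only OpenAI and LangGraph calls, with debug logging enabled.
visibe.init(
    api_key="sk_live_your_key_here",
    frameworks=["openai", "langgraph"],
    debug=True,
)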

Environment Variables

Variable                Description                                            Default
VISIBE_API_KEY          Your API key (required)
VISIBE_API_URL          Override the API endpoint                              https://api.visibe.ai
VISIBE_AUTO_INSTRUMENT  Comma-separated list of frameworks to auto-instrument  All detected
VISIBE_REDACT_CONTENT   Omit prompt/completion text from traces (1 or true)    false
VISIBE_DEBUG            Enable debug logging (1 to enable)                     0
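
For example, configuring the SDK entirely through the environment (values are illustrative):

export VISIBE_API_KEY=sk_live_your_key_here
export VISIBE_AUTO_INSTRUMENT=openai,langgraph
export VISIBE_REDACT_CONTENT=true
export VISIBE_DEBUG=1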

📊 What Gets Tracked

Metric                                        Sent when redact_content=True
Cost, tokens, duration                        ✅
Model & provider                              ✅
Tool calls (name, duration, success/failure)  ✅
Errors (type, message)                        ✅
Full execution timeline (spans)               ✅
Per-agent and per-task cost breakdown         ✅
Prompt text                                   ❌ omitted
Completion text                               ❌ omitted

When redact_content=True, prompt and completion text never leave your environment. You retain full observability over costs, performance, and errors.
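
Enabling redaction is a one-line change; a minimal sketch using the documented option:

import visibe

# Only metadata (tokens, cost, duration, model, errors) is sent upstream;
# prompt and completion text stay local.
visibe.init(api_key="sk_live_your_key_here", redact_content=True)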


📖 Integration Examples

OpenAI

import visibe
from openai import OpenAI

visibe.init()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
# Automatically traced — cost, tokens, duration, and content captured.

Anthropic

import visibe
import anthropic

visibe.init()

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
# Automatically traced. Streaming (stream=True and .stream()) also supported.
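
A minimal streaming sketch with the Anthropic SDK's .stream() helper, reusing the client above; per the note, it should be traced the same way:

with client.messages.stream(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Tell me a short joke."}],
) as stream:
    # Print text deltas as they arrive; the trace is recorded as usual.
    for text in stream.text_stream:
        print(text, end="", flush=True)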

LangChain / LangGraph

import visibe
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

visibe.init()

@tool
def get_weather(city: str) -> str:
    """Return a canned weather report for a city."""
    return f"It is sunny in {city}."

llm = ChatOpenAI(model="gpt-4o-mini")
graph = create_react_agent(llm, [get_weather])

result = graph.invoke({"messages": [("user", "What's the weather in Paris?")]})
# Automatically traced — agent steps, LLM calls, and tool calls captured.

Pipe chains built with the | operator (prompt | llm | parser) are also traced automatically; see the sketch below. Nested sub-graphs are tracked with hierarchical agent names.
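
A minimal pipe-chain sketch (the prompt template and model choice are illustrative):

import visibe
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

visibe.init()

# Compose a prompt, model, and parser into a single runnable chain.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

summary = chain.invoke({"text": "LangChain pipe chains compose runnables."})
# The chain is traced automatically.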

CrewAI

import visibe
from crewai import Agent, Task, Crew

visibe.init()

architect = Agent(role="Plot Architect", goal="Design mystery plots", backstory="...")
designer = Agent(role="Character Designer", goal="Create characters", backstory="...")
narrator = Agent(role="Narrator", goal="Write the story", backstory="...")

task1 = Task(description="Create a plot outline", agent=architect, expected_output="...")
task2 = Task(description="Design characters", agent=designer, expected_output="...", context=[task1])
task3 = Task(description="Write the story", agent=narrator, expected_output="...", context=[task1, task2])

crew = Crew(agents=[architect, designer, narrator], tasks=[task1, task2, task3])
result = crew.kickoff()
# Single trace with all agents, LLM calls, and per-task cost breakdown.

Training and testing runs (crew.train(), crew.test()) are traced too.

AutoGen

import asyncio

import visibe
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

visibe.init()

async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    assistant = AssistantAgent("assistant", model_client=model_client)
    result = await assistant.run(task="Help me with this task")

asyncio.run(main())
# Automatically traced.

AWS Bedrock

import visibe
import boto3

visibe.init()

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}]
)
# Automatically traced.

Supports converse, converse_stream, invoke_model, and invoke_model_with_response_stream. Works with all models available via Bedrock — Claude, Nova, Llama, Mistral, and more.
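
A streaming sketch with converse_stream, reusing the bedrock client above (same illustrative model ID); the streamed call should be traced like the non-streaming one:

response = bedrock.converse_stream(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}],
)
# Iterate over the event stream and print text deltas as they arrive.
for event in response["stream"]:
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"]["text"], end="")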


📖 API Reference

visibe.init()

Call once at the top of your app, before creating any clients. Returns a Visibe instance.

import visibe

visibe.init(api_key="sk_live_abc123")

track()

Groups multiple LLM calls into a single named trace.

from openai import OpenAI
from visibe import Visibe

client = OpenAI()
tracer = Visibe()

with tracer.track(client, name="my-conversation"):
    r1 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
    r2 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
# Both calls appear as spans under one trace.

instrument() / uninstrument()

Manually instrument a specific client instance instead of relying on auto-instrumentation.

from visibe import Visibe

tracer = Visibe(api_key="sk_live_abc123")

# `graph` is a compiled agent, e.g. the LangGraph agent from the example above.
tracer.instrument(graph, name="my-agent")

result = graph.invoke({"messages": [("user", "Hello")]})

tracer.uninstrument(graph)

# Or use as a context manager for automatic cleanup:
with tracer.instrument(graph, name="my-agent"):
    graph.invoke(...)
# Instrumentation removed automatically on exit.

🛡️ Safety Guarantees

  • No crashes — every SDK operation is wrapped in try/except
  • No added latency — all backend calls are fire-and-forget
  • No key, no problem — SDK is silently a no-op when no API key is set

No data is sold or shared with third parties. Content is used solely to display traces in your dashboard.


🔗 Resources

  • Dashboard: app.visibe.ai

📃 License

MIT — see LICENSE for details.
