
Visibe SDK for Python

Visibe Analytics

Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using CrewAI, LangChain, LangGraph, AutoGen, or direct OpenAI calls.



📦 Getting Started

1. Create an account

Sign up at app.visibe.ai and create a project.

2. Get an API key

In your project, go to Settings → API Keys and generate a new key. It will look like sk_live_....

3. Install the SDK

pip install visibe

4. Set your API key

export VISIBE_API_KEY=sk_live_your_api_key_here

Or pass it directly in code:

visibe.init(api_key="sk_live_your_api_key_here")

Or load it from a .env file using python-dotenv:

pip install python-dotenv

from dotenv import load_dotenv
load_dotenv()  # loads VISIBE_API_KEY from .env

import visibe
visibe.init()

5. Instrument your app

import visibe

visibe.init()

That's it. Every OpenAI, LangChain, LangGraph, CrewAI, AutoGen, and Bedrock client created after this call is automatically traced — no other code changes needed.


🧩 Integrations

Framework     Auto (visibe.init())   Manual
OpenAI        ✅                      ✅
LangChain     ✅                      ✅
LangGraph     ✅                      ✅
CrewAI        ✅                      ✅
AutoGen       ✅                      ✅
AWS Bedrock   ✅                      ✅

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
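As one illustration, the official OpenAI SDK can be pointed at a compatible endpoint purely through environment variables, so no code changes are needed; Groq's endpoint is shown here as an example, and the key value is a placeholder. Clients created this way are traced the same as regular OpenAI clients.

```shell
# Route the OpenAI SDK to an OpenAI-compatible endpoint (Groq shown as an example).
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_API_KEY=gsk_your_provider_key_here   # key issued by the provider
```

The same pattern applies to Azure OpenAI, Together.ai, and DeepSeek with their respective base URLs.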

OpenAI

import visibe
from openai import OpenAI

visibe.init()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
# Automatically traced — cost, tokens, duration, and content captured.

LangChain / LangGraph

import visibe
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

visibe.init()

llm = ChatOpenAI(model="gpt-4o-mini")
tools = []  # your LangChain tools go here
graph = create_react_agent(llm, tools)

result = graph.invoke({"messages": [("user", "Your prompt here")]})
# Automatically traced — agent steps, LLM calls, and tool calls captured.

Dynamic pipe chains (prompt | llm | parser) are also automatically traced. Nested sub-graphs are tracked with hierarchical agent names.

CrewAI

import visibe
from crewai import Agent, Task, Crew

visibe.init()

architect = Agent(role="Plot Architect", goal="Design mystery plots", backstory="...")
designer = Agent(role="Character Designer", goal="Create characters", backstory="...")
narrator = Agent(role="Narrator", goal="Write the story", backstory="...")

task1 = Task(description="Create a plot outline", agent=architect, expected_output="...")
task2 = Task(description="Design characters", agent=designer, expected_output="...", context=[task1])
task3 = Task(description="Write the story", agent=narrator, expected_output="...", context=[task1, task2])

crew = Crew(agents=[architect, designer, narrator], tasks=[task1, task2, task3])
result = crew.kickoff()
# Single trace with all agents, LLM calls, and per-task cost breakdown.

Training and testing runs (crew.train(), crew.test()) are traced too.

AutoGen

import visibe
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent

visibe.init()

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
assistant = AssistantAgent("assistant", model_client=model_client)
result = await assistant.run(task="Help me with this task")  # run inside an async function
# Automatically traced.

AWS Bedrock

import visibe
import boto3

visibe.init()

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}]
)
# Automatically traced.

Supports converse, converse_stream, invoke_model, and invoke_model_with_response_stream. Works with all models available via Bedrock — Claude, Nova, Llama, Mistral, and more.


⚙️ Configuration

import visibe

visibe.init(
    api_key="sk_live_abc123",       # or set VISIBE_API_KEY env var
    frameworks=["openai", "langgraph"],  # limit to specific frameworks
    content_limit=500,              # max chars for LLM content in traces
    debug=True,                     # enable debug logging
)

Environment Variables

Variable                Description                                    Default
VISIBE_API_KEY          Your API key (required)
VISIBE_API_URL          Override API endpoint                          https://api.visibe.ai
VISIBE_AUTO_INSTRUMENT  Comma-separated frameworks to auto-instrument  All detected
VISIBE_CONTENT_LIMIT    Max chars for LLM/tool content in spans        1000
VISIBE_DEBUG            Enable debug logging (1 to enable)             0
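For example, the variables above can be combined in a shell profile or deployment config; the values here are illustrative.

```shell
export VISIBE_API_KEY=sk_live_your_api_key_here
export VISIBE_AUTO_INSTRUMENT=openai,langgraph   # skip frameworks you don't use
export VISIBE_CONTENT_LIMIT=2000                 # capture longer message content
export VISIBE_DEBUG=1                            # verbose SDK logging
```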

📊 What Gets Tracked

Metric    Description
Cost      Total spend + per-agent and per-task cost breakdown
Tokens    Input/output tokens per LLM call
Duration  Total time + time per step
Tools     Which tools were used, duration, success/failure
Errors    When and where things failed
Spans     Full execution timeline with LLM calls, tool calls, and agent events

🔧 Manual Instrumentation

Use manual instrumentation when you need explicit control: instrumenting a specific client, grouping calls into a named trace, or using Visibe without init().

Instrument a specific client

from visibe import Visibe

tracer = Visibe(api_key="sk_live_abc123")
tracer.instrument(graph, name="my-agent")

result = graph.invoke({"messages": [("user", "Hello")]})

Group multiple calls into one trace

from visibe import Visibe

tracer = Visibe()

with tracer.track(client, name="my-conversation"):
    r1 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
    r2 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
# Both calls sent as one grouped trace.

Remove instrumentation

tracer.uninstrument(client)

# Or use as a context manager for automatic cleanup:
with tracer.instrument(graph, name="my-agent"):
    graph.invoke(...)
# Instrumentation removed automatically on exit.

📚 Documentation


🔗 Resources


📃 License

MIT — see LICENSE for details.
