
# Visibe SDK for Python

Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using CrewAI, LangChain, LangGraph, AutoGen, or direct OpenAI calls.



## 📦 Getting Started

### Installation

```bash
pip install visibe
```

Install with the extras you need:

```bash
pip install "visibe[crewai]"      # CrewAI
pip install "visibe[openai]"      # OpenAI
pip install "visibe[langchain]"   # LangChain
pip install "visibe[langgraph]"   # LangGraph
pip install "visibe[autogen]"     # AutoGen
pip install "visibe[all]"         # Everything
```

### Basic Configuration

Set your API key in a `.env` file:

```env
VISIBE_API_KEY=sk_live_your_api_key_here
```

Then initialize the SDK; one line instruments everything:

```python
import visibe

visibe.init()
```

That's it. Every OpenAI, LangChain, LangGraph, CrewAI, and AutoGen client created after this call is automatically traced.

### Quick Usage Example

```python
import visibe
from openai import OpenAI

visibe.init()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
# This call is automatically traced: cost, tokens, duration, and content are captured.
```

## 🧩 Integrations

Visibe integrates with the most popular AI/agent frameworks in Python. Every integration supports three levels of control:

| Framework   | `visibe.init()` | `obs.instrument()` | `obs.track()` / manual |
|-------------|-----------------|--------------------|------------------------|
| OpenAI      | ✅              | ✅                 | ✅                     |
| LangChain   | ✅              | ✅                 | ✅                     |
| LangGraph   | ✅              | ✅                 | ✅                     |
| CrewAI      | ✅              | ✅                 | ✅                     |
| AutoGen     | ✅              | ✅                 | ✅                     |
| AWS Bedrock | ✅              | ✅                 | ✅                     |

Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.

### OpenAI

```python
from visibe import Visibe
from openai import OpenAI

obs = Visibe()
client = OpenAI()

obs.instrument(client)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Group multiple calls into one trace:

```python
with obs.track(client, name="my-conversation"):
    r1 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
    r2 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
# ^ Both calls sent as one grouped trace
```

Works with the Chat Completions and Responses APIs, streaming, tool calls, and both sync and async clients.

### LangChain / LangGraph

```python
from visibe import Visibe
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

obs = Visibe()
llm = ChatOpenAI(model="gpt-4o-mini")
tools = []  # your tool functions go here
graph = create_react_agent(llm, tools)

obs.instrument(graph, name="my-agent")

result = graph.invoke({"messages": [("user", "Your prompt here")]})
```

Dynamic pipe chains (`prompt | llm | parser`) are also automatically instrumented when using `visibe.init()`. Nested sub-graphs are tracked with hierarchical agent names.

### CrewAI

```python
from visibe import Visibe
from crewai import Agent, Task, Crew

obs = Visibe()

architect = Agent(role="Plot Architect", goal="Design mystery plots", backstory="...")
designer = Agent(role="Character Designer", goal="Create characters", backstory="...")
narrator = Agent(role="Narrator", goal="Write the story", backstory="...")

task1 = Task(description="Create a plot outline", agent=architect, expected_output="...")
task2 = Task(description="Design characters", agent=designer, expected_output="...", context=[task1])
task3 = Task(description="Write the story", agent=narrator, expected_output="...", context=[task1, task2])

crew = Crew(agents=[architect, designer, narrator], tasks=[task1, task2, task3])

obs.instrument(crew, name="mystery-writer")
result = crew.kickoff()
# ^ Single trace with all agents, LLM calls, and per-task cost breakdown
```

With `visibe.init()`, trace names are auto-derived from agent roles (e.g. "Plot Architect, Character Designer, Narrator"). Training and testing runs (`crew.train()`, `crew.test()`) are traced too.

### AutoGen

```python
import asyncio

from visibe import Visibe
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent

obs = Visibe()
model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

obs.instrument(model_client, name="my-conversation")

async def main():
    assistant = AssistantAgent("assistant", model_client=model_client)
    result = await assistant.run(task="Help me with this task")

asyncio.run(main())
```

### AWS Bedrock

```python
from visibe import Visibe
import boto3

obs = Visibe()
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

obs.instrument(bedrock)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}]
)
```

Group multiple calls into one trace:

```python
with obs.track(bedrock, name="my-workflow"):
    r1 = bedrock.converse(modelId="anthropic.claude-3-haiku-20240307-v1:0", messages=[...])
    r2 = bedrock.converse(modelId="amazon.nova-lite-v1:0", messages=[...])
# ^ Both calls sent as one grouped trace
```

Supports all Bedrock API methods: `converse`, `converse_stream`, `invoke_model`, and `invoke_model_with_response_stream`. Works with the models available via Bedrock, including Claude, Nova, Llama, Mistral, and more.


## ⚙️ Configuration

```python
from visibe import Visibe

# API key from environment (recommended)
obs = Visibe()

# Or pass directly
obs = Visibe(api_key="sk_live_abc123")

# Group traces by session
obs = Visibe(session_id="user-session-123")
```

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `VISIBE_API_KEY` | Your API key (required) | |
| `VISIBE_API_URL` | Override API endpoint | `https://api.visibe.ai` |
| `VISIBE_AUTO_INSTRUMENT` | Comma-separated frameworks to auto-instrument | All detected |
| `VISIBE_CONTENT_LIMIT` | Max chars for LLM/tool content in spans | `1000` |
| `VISIBE_DEBUG` | Enable debug logging (`1` to enable) | `0` |
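For example, a `.env` for a local debugging session might look like this (values illustrative):

```env
VISIBE_API_KEY=sk_live_your_api_key_here
VISIBE_AUTO_INSTRUMENT=openai,langchain   # only instrument these frameworks
VISIBE_CONTENT_LIMIT=2000                 # keep more of each message in spans
VISIBE_DEBUG=1                            # verbose SDK logging
```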

## 📊 What Gets Tracked

| Metric | Description |
|--------|-------------|
| Cost | Total spend + per-agent and per-task cost breakdown |
| Tokens | Input/output tokens per LLM call |
| Duration | Total time + time per step |
| Tools | Which tools were used, duration, success/failure |
| Errors | When and where things failed |
| Spans | Full execution timeline with LLM calls, tool calls, and agent events |

## 📚 Documentation

For advanced usage, detailed integration guides, and the API reference, see the full documentation.


## 📃 License

MIT; see LICENSE for details.
