# Visibe SDK for Python
Observability for AI agents. Track costs, performance, and errors across your entire AI stack — whether you're using CrewAI, LangChain, LangGraph, AutoGen, or direct OpenAI calls.
## 📦 Getting Started

### Installation

```bash
pip install visibe
```

### Basic Configuration

Set your API key in a `.env` file:

```bash
VISIBE_API_KEY=sk_live_your_api_key_here
```

Then initialize the SDK — one line instruments everything:

```python
import visibe

visibe.init()
```

That's it. Every OpenAI, LangChain, LangGraph, CrewAI, and AutoGen client created after this call is automatically traced.
### Quick Usage Example

```python
import visibe
from openai import OpenAI

visibe.init()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
# This call is automatically traced — cost, tokens, duration, and content are captured.
```
## 🧩 Integrations

Visibe integrates with the most popular AI/agent frameworks in Python. Every integration supports three levels of control:

| Framework | `visibe.init()` | `obs.instrument()` | `obs.track()` / manual |
|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ |
| LangChain | ✅ | ✅ | ✅ |
| LangGraph | ✅ | ✅ | ✅ |
| CrewAI | ✅ | ✅ | ✅ |
| AutoGen | ✅ | ✅ | ✅ |
| AWS Bedrock | ✅ | ✅ | ✅ |
Also works with OpenAI-compatible providers: Azure OpenAI, Groq, Together.ai, DeepSeek, and others.
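As a sketch, a hypothetical Groq setup (the endpoint is Groq's documented OpenAI-compatible base URL; the model name is illustrative) changes only the client constructor arguments:

```python
import os

# Sketch: an OpenAI-compatible provider (Groq shown) differs only in
# base_url and api_key; the chat.completions API is used unchanged.
groq_kwargs = {
    "base_url": "https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
    "api_key": os.environ.get("GROQ_API_KEY", ""),
}

# With the openai package installed and visibe.init() already called:
# from openai import OpenAI
# client = OpenAI(**groq_kwargs)
# client.chat.completions.create(
#     model="llama-3.1-8b-instant",  # illustrative model name
#     messages=[{"role": "user", "content": "Hello!"}],
# )
```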
### OpenAI

```python
from visibe import Visibe
from openai import OpenAI

obs = Visibe()
client = OpenAI()
obs.instrument(client)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Group multiple calls into one trace:

```python
with obs.track(client, name="my-conversation"):
    r1 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
    r2 = client.chat.completions.create(model="gpt-4o-mini", messages=[...])
# ^ Both calls sent as one grouped trace
```
Works with chat completions and Responses API, streaming, tool calls, sync and async clients.
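For the streaming case, a minimal sketch (assuming the instrumented client from above): only `stream=True` is added, and tracing is unchanged.

```python
# Sketch: streaming requests need no special handling; pass stream=True
# as usual, and tokens and duration are recorded when the stream completes.
request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": True,
}

# With the instrumented client from above:
# stream = client.chat.completions.create(**request)
# for chunk in stream:
#     print(chunk.choices[0].delta.content or "", end="")
```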
### LangChain / LangGraph

```python
from visibe import Visibe
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

obs = Visibe()

llm = ChatOpenAI(model="gpt-4o-mini")
tools = []  # your tool list
graph = create_react_agent(llm, tools)
obs.instrument(graph, name="my-agent")

result = graph.invoke({"messages": [("user", "Your prompt here")]})
```
Dynamic pipe chains (`prompt | llm | parser`) are also automatically instrumented when using `visibe.init()`. Nested sub-graphs are tracked with hierarchical agent names.
### CrewAI

```python
from visibe import Visibe
from crewai import Agent, Task, Crew

obs = Visibe()

architect = Agent(role="Plot Architect", goal="Design mystery plots", backstory="...")
designer = Agent(role="Character Designer", goal="Create characters", backstory="...")
narrator = Agent(role="Narrator", goal="Write the story", backstory="...")

task1 = Task(description="Create a plot outline", agent=architect, expected_output="...")
task2 = Task(description="Design characters", agent=designer, expected_output="...", context=[task1])
task3 = Task(description="Write the story", agent=narrator, expected_output="...", context=[task1, task2])

crew = Crew(agents=[architect, designer, narrator], tasks=[task1, task2, task3])
obs.instrument(crew, name="mystery-writer")

result = crew.kickoff()
# ^ Single trace with all agents, LLM calls, and per-task cost breakdown
```
With `visibe.init()`, trace names are auto-derived from agent roles (e.g. "Plot Architect, Character Designer, Narrator"). Training and testing runs (`crew.train()`, `crew.test()`) are traced too.
### AutoGen

```python
from visibe import Visibe
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent

obs = Visibe()

model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
obs.instrument(model_client, name="my-conversation")

assistant = AssistantAgent("assistant", model_client=model_client)
result = await assistant.run(task="Help me with this task")  # inside an async function
```
### AWS Bedrock

```python
from visibe import Visibe
import boto3

obs = Visibe()

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
obs.instrument(bedrock)

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}]
)
```

Group multiple calls into one trace:

```python
with obs.track(bedrock, name="my-workflow"):
    r1 = bedrock.converse(modelId="anthropic.claude-3-haiku-20240307-v1:0", messages=[...])
    r2 = bedrock.converse(modelId="amazon.nova-lite-v1:0", messages=[...])
# ^ Both calls sent as one grouped trace
```
Supports all Bedrock API methods: `converse`, `converse_stream`, `invoke_model`, and `invoke_model_with_response_stream`. Works with all models available via Bedrock — Claude, Nova, Llama, Mistral, and more.
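As a sketch of `invoke_model` (assuming the instrumented client above; the request body schema is model-specific, and the Anthropic-on-Bedrock shape is shown), the call would be traced the same way as `converse`:

```python
import json

# invoke_model takes a raw JSON body whose schema depends on the model;
# for Anthropic models on Bedrock it looks like this:
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello!"}],
})

# With the instrumented boto3 client from above:
# response = bedrock.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     body=body,
# )
```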
## ⚙️ Configuration

```python
from visibe import Visibe

# API key from environment (recommended)
obs = Visibe()

# Or pass directly
obs = Visibe(api_key="sk_live_abc123")

# Group traces by session
obs = Visibe(session_id="user-session-123")
```
### Environment Variables

| Variable | Description | Default |
|---|---|---|
| `VISIBE_API_KEY` | Your API key (required) | — |
| `VISIBE_API_URL` | Override API endpoint | `https://api.visibe.ai` |
| `VISIBE_AUTO_INSTRUMENT` | Comma-separated frameworks to auto-instrument | All detected |
| `VISIBE_CONTENT_LIMIT` | Max chars for LLM/tool content in spans | `1000` |
| `VISIBE_DEBUG` | Enable debug logging (`1` to enable) | `0` |
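Putting the table together, a hypothetical `.env` file (all values are illustrative):

```shell
# .env — illustrative values
VISIBE_API_KEY=sk_live_your_api_key_here
VISIBE_AUTO_INSTRUMENT=openai,langchain   # trace only these two frameworks
VISIBE_CONTENT_LIMIT=2000                 # capture up to 2000 chars of content per span
VISIBE_DEBUG=1                            # verbose SDK logging
```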
## 📊 What Gets Tracked
| Metric | Description |
|---|---|
| Cost | Total spend + per-agent and per-task cost breakdown |
| Tokens | Input/output tokens per LLM call |
| Duration | Total time + time per step |
| Tools | Which tools were used, duration, success/failure |
| Errors | When and where things failed |
| Spans | Full execution timeline with LLM calls, tool calls, and agent events |
## 📚 Documentation
For advanced usage, detailed integration guides, and API reference, check out the full documentation:
- OpenAI integration
- LangChain integration
- CrewAI integration
- AutoGen integration
- AWS Bedrock integration
## 🔗 Resources
- PyPI Package — Install the latest version
- Visibe Dashboard — View your traces and analytics
## 📃 License
MIT — see LICENSE for details.