# Verse Python SDK
A Python SDK for observability and tracing in AI applications. Supports decorators and context managers with automatic instrumentation for popular LLM frameworks.
## Installation

```bash
pip install verse-sdk
```
## Quick Start

```python
from verse_sdk import verse, observe

# Initialize
verse.init(
    app_name="my-app",
    exporters=[verse.exporters.console()],
    vendors=["pydantic_ai"],  # Optional: auto-instrument LLM calls
)

# Option 1: Decorators (recommended)
@observe()
async def my_function(query: str):
    result = await process_query(query)
    return result

# Option 2: Context managers
async def my_function_v2(query: str):
    with verse.trace("my_function") as trace:
        trace.input(query)
        with verse.span("process_query") as span:
            result = await process_query(query)
            span.output(result)
        trace.output(result)
        return result
```
## Table of Contents
- Initialization
- Exporters
- Decorators
- Context Managers
- Context Methods
- Integrations
- Examples
- API Reference
## Initialization

```python
verse.init(
    app_name="my-app",          # Required: identifies your project
    environment="production",   # Optional: environment label
    exporters=[...],            # Required: list of exporters
    vendors=["pydantic_ai"],    # Optional: enables auto-instrumentation
    version="1.0.0",            # Optional: app version
)
```
## Exporters

### Console

```python
verse.exporters.console()
verse.exporters.console({"scopes": ["agent-workflow-1"]})  # With scope filtering
```

### Langfuse

```python
from verse_sdk import LangfuseConfig

verse.exporters.langfuse(
    LangfuseConfig(
        host="https://cloud.langfuse.com",
        public_key="pk-...",
        private_key="sk-...",
    )
)

# Or use environment variables: LANGFUSE_HOST, LANGFUSE_PUBLIC_KEY, LANGFUSE_PRIVATE_KEY
verse.exporters.langfuse()
```

### OTEL

```python
from verse_sdk import OtelConfig

verse.exporters.otel(
    OtelConfig(host="http://localhost:4318")
)
```

### Verse

```python
from verse_sdk import VerseConfig

verse.exporters.verse(
    VerseConfig(
        api_key="your-api-key",
        host="http://localhost:4318",
        project_id="your-project-id",
    )
)

# Or use environment variables: VERSE_API_KEY, VERSE_HOST, VERSE_PROJECT_ID
verse.exporters.verse()
```
### Scope Filtering

Route traces to specific exporters by scope:

```python
# Configure exporters with scopes
verse.init(
    app_name="my-app",
    exporters=[
        verse.exporters.console({"scopes": ["agent-a"]}),
        verse.exporters.langfuse({"scopes": ["agent-b"]}),
    ]
)

# Set scope on traces
@observe(type="trace", scope="agent-a")
def my_function():
    pass
```
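Conceptually, scope filtering routes each trace to the exporters whose scope list contains the trace's scope, while exporters configured without scopes receive everything. A purely illustrative sketch of that routing logic (not the SDK's internals; the names here are made up for the example):

```python
# Illustrative sketch of scope-based routing (not the SDK's implementation).
# An exporter with no scopes receives every trace; an exporter with scopes
# only receives traces whose scope appears in its list.

def route_trace(trace_scope, exporters):
    """Return the names of exporters that should receive the trace."""
    selected = []
    for name, scopes in exporters:
        if not scopes or trace_scope in scopes:
            selected.append(name)
    return selected

exporters = [
    ("console", ["agent-a"]),
    ("langfuse", ["agent-b"]),
    ("otel", []),  # no scope filter: receives everything
]

print(route_trace("agent-a", exporters))  # ['console', 'otel']
print(route_trace("agent-b", exporters))  # ['langfuse', 'otel']
```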
## Decorators

Decorators automatically capture inputs, outputs, and errors.

### Available Decorators

```python
from verse_sdk import observe
```

### Basic Usage

```python
@observe(type="trace")
async def answer_question(question: str):
    context = await retrieve_context(question)
    return await generate_answer(question, context)

@observe(type="span")
async def retrieve_context(question: str):
    return await db.search(question)

@observe(type="generation")
async def generate_answer(question: str, context: str):
    return await llm.complete(f"Context: {context}\nQ: {question}")

@observe(type="tool")
def search_database(query: str):
    return db.search(query)
```

### Customization

```python
# Custom name
@observe(name="custom_name")
def my_function():
    pass

# Disable input/output capture
@observe(capture_input=False, capture_output=False)
def sensitive_llm_call():
    pass

# Add custom attributes
@observe(level="debug", custom_attr="value")
def detailed_operation():
    pass
```

### Accessing Current Context

```python
from verse_sdk import get_current_trace_context, get_current_span_context

@observe()
def workflow():
    trace_ctx = get_current_trace_context()
    trace_ctx.user("user-123").session("session-456")
    return process_data()

@observe()
def process_data():
    span_ctx = get_current_span_context()
    span_ctx.level("info").metadata({"step": "processing"})
    return "result"
```
## Context Managers

Context managers provide fine-grained control over when attributes are set.

### Basic Usage

```python
def process_request(user_id: str, query: str):
    with verse.trace("process_request") as trace:
        trace.input({"user_id": user_id, "query": query})
        trace.session(user_id).user(user_id)

        with verse.span("validate") as span:
            span.input(query).level("debug")
            is_valid = validate(query)
            span.output(is_valid)

        response = None  # Ensure response is defined if validation fails
        if is_valid:
            with verse.generation("llm_call") as gen:
                gen.model("gpt-4").vendor("openai").input(query)
                response = llm.complete(query)
                gen.output(response).usage({
                    "input_tokens": 150,
                    "output_tokens": 50,
                    "total_tokens": 200,
                })

        trace.output(response)
        return response
```

### Grouped Context Managers (Python 3.10+)

```python
with (
    verse.trace("my_trace", session_id="user-123") as trace,
    verse.span("my_span", level="info") as span,
):
    result = process()
    span.output(result)
    trace.output(result)
```

### Setting Attributes

```python
# Option 1: During initialization
with verse.trace(name="my_trace", session_id="user-123", scope="agent-a"):
    pass

# Option 2: After initialization (chainable)
with verse.trace("my_trace") as trace:
    trace.session("user-123").scope("agent-a").tags(["production"])
```
## Context Methods

### Common Methods (All Contexts)

- `.input(data)` - Set input
- `.output(data)` - Set output
- `.metadata(dict)` - Add metadata
- `.error(exception)` - Record error
- `.score(dict)` - Add evaluation score
- `.event(name, level, **attrs)` - Add event
- `.set_attributes(**kwargs)` - Set custom attributes

### TraceContext

- `.session(session_id)` - Set session ID
- `.user(user_id)` - Set user ID
- `.scope(scope)` - Set scope for filtering
- `.tags(list)` - Add tags

### SpanContext

- `.level(level)` - Set log level (`"info"`, `"debug"`, `"warning"`)
- `.operation(op)` - Set operation type (`"tool"`, `"db.query"`)
- `.status_message(message)` - Set status message

### GenerationContext

Inherits SpanContext methods, plus:

- `.model(model_name)` - Set model identifier
- `.vendor(vendor)` - Set model vendor
- `.usage(dict)` - Set token usage (`{"input_tokens": 150, "output_tokens": 50, "total_tokens": 200}`)
- `.messages(list)` - Set message history
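These setters are chainable because each one returns the context itself. A minimal sketch of that fluent pattern (illustrative only, not the SDK's actual implementation; `FluentContext` is a made-up name):

```python
class FluentContext:
    """Toy fluent context: every setter records a value and returns self."""

    def __init__(self):
        self.attributes = {}

    def session(self, session_id):
        self.attributes["session_id"] = session_id
        return self  # returning self is what makes chaining possible

    def user(self, user_id):
        self.attributes["user_id"] = user_id
        return self

    def tags(self, tags):
        self.attributes["tags"] = list(tags)
        return self


ctx = FluentContext()
ctx.session("user-123").user("user-123").tags(["production"])
print(ctx.attributes)
# {'session_id': 'user-123', 'user_id': 'user-123', 'tags': ['production']}
```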
## Integrations

Enable auto-instrumentation by setting the `vendors` parameter:

### Pydantic AI

```python
verse.init(app_name="my-app", vendors=["pydantic_ai"], exporters=[...])

agent = Agent("openai:gpt-4")
result = await agent.run("query")  # Automatically traced

# Streaming is also supported
async for event in agent.run_stream("query"):  # Automatically traced
    print(event)
```

Supported:

- ✅ `Agent.run()` - Synchronous and asynchronous runs are automatically traced
- ✅ `Agent.run_stream()` - Streaming runs are automatically traced with delta events

### LiteLLM

```python
verse.init(app_name="my-app", vendors=["litellm"], exporters=[...])

from litellm import completion
response = completion(model="gpt-4", messages=[...])  # Automatically traced
```

Supported:

- ✅ `completion()` - Synchronous and asynchronous completions are automatically traced
- ✅ Streaming completions - Stream events are captured and traced
- ✅ Embeddings - Embedding operations are automatically traced

### LangChain

```python
verse.init(app_name="my-app", vendors=["langchain"], exporters=[...])

# Your LangChain code is automatically traced
```

Supported:

- ✅ Chat models - `ChatModel` invocations (synchronous and asynchronous) are automatically traced
- ✅ LLM completions - `LLM` invocations (synchronous and asynchronous) are automatically traced
- ✅ Streaming - Streaming token events are captured via `on_llm_new_token`
- ✅ Embeddings - Embedding operations are automatically traced

### Anthropic

```python
verse.init(app_name="my-app", vendors=["anthropic"], exporters=[...])

from anthropic import Anthropic
client = Anthropic()
response = client.messages.create(model="claude-3-5-sonnet-20241022", messages=[...])  # Automatically traced
```

Supported:

- ✅ `Messages.create()` - Synchronous message creation is automatically traced
- ✅ `AsyncMessages.create()` - Asynchronous message creation is automatically traced
- ✅ Streaming - Streaming responses are automatically traced with delta events

### Google (Gemini)

```python
verse.init(app_name="my-app", vendors=["google"], exporters=[...])

from google.genai import Client
client = Client()
response = client.models.generate_content(model="gemini-pro", contents=[...])  # Automatically traced
```

Supported:

- ✅ `Models.generate_content()` - Synchronous content generation is automatically traced
- ✅ `Models.generate_content_stream()` - Streaming content generation is automatically traced
## Agent Fixtures and Examples

The repository includes complete working examples of agents using each supported integration. These fixtures demonstrate best practices for setting up tools, handling streaming, and managing complex agent workflows.

### Available Agent Examples

- Tool calling with function execution
- Streaming responses with tool support
- Multi-turn conversations with tool results
- Function calling with Google GenAI
- Streaming content generation
- Tool execution and response handling
- ChatOpenAI with tool binding
- Streaming with tool calls
- Agent executor patterns

**LiteLLM Agent and LiteLLM Utilities**

- Complete tool calling setup with LiteLLM
- Function schema generation
- Async completion with tools
- Tool execution loop implementation
- Agent with tool definitions
- Streaming with Pydantic AI
- Type-safe tool calling
### Using the Examples

These fixtures are fully functional and can be used as templates for your own agents:

```python
# Example: Using the LiteLLM agent fixture
from tests.fixtures.agents.litellm_agent import LiteLLMAgent

agent = LiteLLMAgent()
response = await agent.ask("What's the weather in San Francisco?")
```

Each agent fixture includes:

- Complete setup and initialization
- Tool/function definitions
- Streaming support
- Error handling
- Integration with Verse SDK tracing
## API Reference

### Decorators

```python
@observe(name=None, type=str, capture_input=True, capture_output=True, capture_metadata=True, observation_type=str, **attrs)
```

Creates an observation.

### Context Managers

```python
verse.trace(name, session_id=None, user_id=None, scope=None, tags=None, metadata=None, **attrs)
```

Returns `TraceContext`.

```python
verse.span(name, input=None, output=None, level=None, op=None, status_message=None, metadata=None, **attrs)
```

Returns `SpanContext`.

```python
verse.generation(name, model=None, vendor=None, input=None, output=None, messages=None, usage=None, **attrs)
```

Returns `GenerationContext`.
### Validation

```python
OtelGenAISpecCheck(raw_span)
```

A specification checker that validates spans against the OpenTelemetry GenAI semantic conventions. This tool helps ensure your LLM tracing data is healthy, compliant, and complete.
#### Purpose

The `OtelGenAISpecCheck` class:

- **Validates required attributes** - Identifies missing required fields that could break downstream processing
- **Detects deprecated attributes** - Flags attributes that should be migrated to newer conventions
- **Checks attribute values** - Ensures values conform to allowed enums and formats
- **Highlights recommendations** - Surfaces optional attributes that improve trace quality
- **Filters by context** - Only validates attributes relevant to the specific provider and operation
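As a rough mental model, the deprecation check boils down to looking attribute names up in a mapping of old-to-new conventions. A purely illustrative sketch (not the checker's actual implementation); the `gen_ai.system` / `gen_ai.provider.name` pair is the migration the checker itself reports:

```python
# Hypothetical old -> new attribute mapping, for illustration only.
DEPRECATED = {"gen_ai.system": "gen_ai.provider.name"}

def find_deprecations(attributes):
    """Return (old_name, new_name) pairs for deprecated attributes present."""
    return [(old, new) for old, new in DEPRECATED.items() if old in attributes]

span_attributes = {"gen_ai.system": "openai", "gen_ai.request.model": "gpt-4"}
print(find_deprecations(span_attributes))
# [('gen_ai.system', 'gen_ai.provider.name')]
```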
#### Basic Usage

```python
from verse_sdk.spec import OtelGenAISpecCheck

# Validate a span
checker = OtelGenAISpecCheck(span_data)

# Check for issues
if checker.has_errors:
    print("Span has validation errors")

if checker.could_improve:
    print("Span is missing recommended attributes")

if checker.has_deprecations:
    print("Span uses deprecated attributes")
```
#### Validation Methods

Checking overall status:

- `has_errors` - Returns `True` if the span has required attributes missing or invalid values
- `could_improve` - Returns `True` if the span is missing recommended (but not required) attributes
- `has_deprecations` - Returns `True` if the span uses deprecated attributes

Checking specific attributes:

- `is_attribute_valid(name)` - Check if a specific attribute is valid
- `is_attribute_invalid(name)` - Check if a specific attribute has errors
- `is_attribute_missing(name)` - Check if a specific attribute is missing
- `is_attribute_deprecated(name)` - Check if a specific attribute is deprecated

Extracting data:

- `extract_metadata()` - Get structured metadata (model, session, trace IDs, etc.)
- `extract_session_id()` - Get session ID from standard or legacy attribute names
- `get_attribute_value(name, default)` - Get any span attribute value
- `get_span_context_value(name, default)` - Get span context fields (`span_id`, `trace_id`)
- `get_span_root_value(name, default)` - Get root-level span fields (`start_time`, `end_time`)
#### Example: Validating Exported Spans

```python
from verse_sdk import verse
from verse_sdk.spec import OtelGenAISpecCheck

# Custom exporter that validates spans
class ValidatingExporter:
    def export(self, spans):
        for span in spans:
            checker = OtelGenAISpecCheck(span)
            if checker.has_errors:
                print(f"ERROR: Span {checker.extract_metadata().span_id} has validation errors")
            if checker.could_improve:
                print("WARNING: Span could be improved with recommended attributes")
            if checker.is_attribute_deprecated("gen_ai.system"):
                print("INFO: Migrate 'gen_ai.system' to 'gen_ai.provider.name'")

verse.init(
    app_name="my-app",
    exporters=[ValidatingExporter()]
)
```
#### Example: Extracting Metadata

```python
from verse_sdk.spec import OtelGenAISpecCheck

checker = OtelGenAISpecCheck(span_data)
metadata = checker.extract_metadata()

print(f"Model: {metadata.model}")
print(f"Provider: {metadata.model_provider}")
print(f"Session: {metadata.session_id}")
print(f"Trace: {metadata.trace_id}")
print(f"Environment: {metadata.environment}")
```
### Helper Functions

```python
get_current_trace_context() -> TraceContext
```

Get the current trace context from the active span. Raises `ValueError` if unavailable.

```python
get_current_span_context() -> SpanContext
```

Get the current span context from the active span. Raises `ValueError` if unavailable.

```python
get_current_generation_context() -> GenerationContext
```

Get the current generation context from the active span. Raises `ValueError` if unavailable.
### Data Formats

Usage:

```python
{"input_tokens": 150, "output_tokens": 50, "total_tokens": 200}
```

Score:

```python
{"name": "quality", "value": 0.95, "comment": "Excellent"}
```
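Since `total_tokens` should equal the sum of input and output tokens, a quick consistency check is easy to write before passing a usage dict to `.usage()`. A purely illustrative helper, not part of the SDK:

```python
def check_usage(usage):
    """Return True if total_tokens equals input_tokens + output_tokens."""
    return usage["total_tokens"] == usage["input_tokens"] + usage["output_tokens"]

usage = {"input_tokens": 150, "output_tokens": 50, "total_tokens": 200}
print(check_usage(usage))  # True
```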
## Best Practices

- **Use decorators by default** - Cleaner code with automatic input/output capture
- **Use context managers for fine-grained control** - When you need dynamic names or conditional logic
- **Combine both approaches** - Decorators for functions, context managers within functions
- **Choose appropriate observation types:**
  - Trace: Top-level workflows
  - Span: Sub-operations, processing steps
  - Generation: LLM API calls
  - Tool: Function/tool calls in agent systems
- **Enable auto-instrumentation** - Set the `vendors` parameter for supported frameworks
- **Use scope filtering** - Route traces to different exporters by scope
## Shutdown

```python
verse.shutdown()  # Flush all traces before exit
```
## Support

For issues and questions, please open an issue on GitHub.