Verse Python SDK

A Python SDK for observability and tracing in AI applications. Supports decorators and context managers with automatic instrumentation for popular LLM frameworks.

Installation

pip install verse-sdk

Quick Start

from verse_sdk import verse, observe

# Initialize
verse.init(
    app_name="my-app",
    exporters=[verse.exporters.console()],
    vendors=["pydantic_ai"]  # Optional: auto-instrument LLM calls
)

# Option 1: Decorators (recommended)
@observe()
async def my_function(query: str):
    result = await process_query(query)
    return result

# Option 2: Context managers
async def my_function_v2(query: str):
    with verse.trace("my_function") as trace:
        trace.input(query)
        with verse.span("process_query") as span:
            result = await process_query(query)
            span.output(result)
        trace.output(result)
    return result

Initialization

verse.init(
    app_name="my-app",           # Required: identifies your project
    environment="production",     # Optional: environment label
    exporters=[...],             # Required: list of exporters
    vendors=["pydantic_ai"],     # Optional: enables auto-instrumentation
    version="1.0.0"              # Optional: app version
)

Exporters

Console

verse.exporters.console()
verse.exporters.console({"scopes": ["agent-workflow-1"]})  # With scope filtering

Langfuse

from verse_sdk import LangfuseConfig

verse.exporters.langfuse(
    LangfuseConfig(
        host="https://cloud.langfuse.com",
        public_key="pk-...",
        private_key="sk-..."
    )
)

# Or use environment variables: LANGFUSE_HOST, LANGFUSE_PUBLIC_KEY, LANGFUSE_PRIVATE_KEY
verse.exporters.langfuse()

OTEL

from verse_sdk import OtelConfig

verse.exporters.otel(
    OtelConfig(host="http://localhost:4318")
)

Verse

from verse_sdk import VerseConfig

verse.exporters.verse(
    VerseConfig(
        api_key="your-api-key",
        host="http://localhost:4318",
        project_id="your-project-id"
    )
)

# Or use environment variables: VERSE_API_KEY, VERSE_HOST, VERSE_PROJECT_ID
verse.exporters.verse()

Scope Filtering

Route traces to specific exporters by scope:

# Configure exporters with scopes
verse.init(
    app_name="my-app",
    exporters=[
        verse.exporters.console({"scopes": ["agent-a"]}),
        verse.exporters.langfuse({"scopes": ["agent-b"]})
    ]
)

# Set scope on traces
@observe(type="trace", scope="agent-a")
def my_function():
    pass
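The routing semantics can be pictured with a tiny pure-Python sketch. This is an illustration only, not SDK code, and the rule that an exporter configured without scopes receives every trace is an assumption:

```python
def route_by_scope(trace_scope, exporters):
    """Pick which exporters receive a trace, given (name, scopes) pairs.

    Assumption: an exporter with no scopes accepts all traces, while a
    scoped exporter only accepts traces whose scope matches.
    """
    return [name for name, scopes in exporters
            if not scopes or trace_scope in scopes]
```

With the configuration above, a trace scoped "agent-a" would reach only the console exporter.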

Decorators

Decorators automatically capture inputs, outputs, and errors.

Available Decorators

from verse_sdk import observe

Basic Usage

@observe(type="trace")
async def answer_question(question: str):
    context = await retrieve_context(question)
    return await generate_answer(question, context)

@observe(type="span")
async def retrieve_context(question: str):
    return await db.search(question)

@observe(type="generation")
async def generate_answer(question: str, context: str):
    return await llm.complete(f"Context: {context}\nQ: {question}")

@observe(type="tool")
def search_database(query: str):
    return db.search(query)

Customization

# Custom name
@observe(name="custom_name")
def my_function():
    pass

# Disable input/output capture
@observe(capture_input=False, capture_output=False)
def sensitive_llm_call():
    pass

# Add custom attributes
@observe(level="debug", custom_attr="value")
def detailed_operation():
    pass

Accessing Current Context

from verse_sdk import get_current_trace_context, get_current_span_context

@observe()
def workflow():
    trace_ctx = get_current_trace_context()
    trace_ctx.user("user-123").session("session-456")
    return process_data()

@observe()
def process_data():
    span_ctx = get_current_span_context()
    span_ctx.level("info").metadata({"step": "processing"})
    return "result"

Context Managers

Context managers provide fine-grained control over when attributes are set.

Basic Usage

def process_request(user_id: str, query: str):
    with verse.trace("process_request") as trace:
        trace.input({"user_id": user_id, "query": query})
        trace.session(user_id).user(user_id)

        with verse.span("validate") as span:
            span.input(query).level("debug")
            is_valid = validate(query)
            span.output(is_valid)

        if not is_valid:
            raise ValueError(f"Invalid query: {query!r}")

        with verse.generation("llm_call") as gen:
            gen.model("gpt-4").vendor("openai").input(query)
            response = llm.complete(query)
            gen.output(response).usage({
                "input_tokens": 150,
                "output_tokens": 50,
                "total_tokens": 200
            })

        trace.output(response)
        return response

Grouped Context Managers (Python 3.10+)

with (
    verse.trace("my_trace", session_id="user-123") as trace,
    verse.span("my_span", level="info") as span,
):
    result = process()
    span.output(result)
    trace.output(result)

Setting Attributes

# Option 1: During initialization
with verse.trace(name="my_trace", session_id="user-123", scope="agent-a"):
    pass

# Option 2: After initialization (chainable)
with verse.trace("my_trace") as trace:
    trace.session("user-123").scope("agent-a").tags(["production"])

Context Methods

Common Methods (All Contexts)

  • .input(data) - Set input
  • .output(data) - Set output
  • .metadata(dict) - Add metadata
  • .error(exception) - Record error
  • .score(dict) - Add evaluation score
  • .event(name, level, **attrs) - Add event
  • .set_attributes(**kwargs) - Set custom attributes

TraceContext

  • .session(session_id) - Set session ID
  • .user(user_id) - Set user ID
  • .scope(scope) - Set scope for filtering
  • .tags(list) - Add tags

SpanContext

  • .level(level) - Set log level ("info", "debug", "warning")
  • .operation(op) - Set operation type ("tool", "db.query")
  • .status_message(message) - Set status message

GenerationContext

Inherits SpanContext methods, plus:

  • .model(model_name) - Set model identifier
  • .vendor(vendor) - Set model vendor
  • .usage(dict) - Set token usage ({"input_tokens": 150, "output_tokens": 50, "total_tokens": 200})
  • .messages(list) - Set message history
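The dict passed to .usage() has the shape shown above; a tiny helper (hypothetical, not part of the SDK) keeps the three fields consistent:

```python
def make_usage(input_tokens: int, output_tokens: int) -> dict:
    """Build the token-usage dict expected by GenerationContext.usage(),
    deriving total_tokens so the fields can never disagree."""
    return {
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "total_tokens": input_tokens + output_tokens,
    }
```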

Integrations

Enable auto-instrumentation by setting the vendors parameter:

Pydantic AI

from pydantic_ai import Agent

verse.init(app_name="my-app", vendors=["pydantic_ai"], exporters=[...])

agent = Agent("openai:gpt-4")
result = await agent.run("query")  # Automatically traced

# Streaming is also supported; run_stream is an async context manager
async with agent.run_stream("query") as stream:  # Automatically traced
    async for text in stream.stream_text():
        print(text)

Supported:

  • Agent.run() - Synchronous and asynchronous runs are automatically traced
  • Agent.run_stream() - Streaming runs are automatically traced with delta events

LiteLLM

verse.init(app_name="my-app", vendors=["litellm"], exporters=[...])

from litellm import completion
response = completion(model="gpt-4", messages=[...])  # Automatically traced

Supported:

  • completion() - Synchronous and asynchronous completions are automatically traced
  • Streaming completions - Stream events are captured and traced
  • Embeddings - Embedding operations are automatically traced

LangChain

verse.init(app_name="my-app", vendors=["langchain"], exporters=[...])
# Your LangChain code is automatically traced

Supported:

  • Chat models - ChatModel invocations (synchronous and asynchronous) are automatically traced
  • LLM completions - LLM invocations (synchronous and asynchronous) are automatically traced
  • Streaming - Streaming token events are captured via on_llm_new_token
  • Embeddings - Embedding operations are automatically traced

Anthropic

verse.init(app_name="my-app", vendors=["anthropic"], exporters=[...])

from anthropic import Anthropic
client = Anthropic()
response = client.messages.create(model="claude-3-5-sonnet-20241022", messages=[...])  # Automatically traced

Supported:

  • Messages.create() - Synchronous message creation is automatically traced
  • AsyncMessages.create() - Asynchronous message creation is automatically traced
  • Streaming - Streaming responses are automatically traced with delta events

Google (Gemini)

verse.init(app_name="my-app", vendors=["google"], exporters=[...])

from google.genai import Client
client = Client()
response = client.models.generate_content(model="gemini-pro", contents=[...])  # Automatically traced

Supported:

  • Models.generate_content() - Synchronous content generation is automatically traced
  • Models.generate_content_stream() - Streaming content generation is automatically traced

Agent Fixtures and Examples

The repository includes complete working examples of agents using each supported integration. These fixtures demonstrate best practices for setting up tools, handling streaming, and managing complex agent workflows.

Available Agent Examples

Anthropic Agent

  • Tool calling with function execution
  • Streaming responses with tool support
  • Multi-turn conversations with tool results

Google (Gemini) Agent

  • Function calling with Google GenAI
  • Streaming content generation
  • Tool execution and response handling

LangChain Agent

  • ChatOpenAI with tool binding
  • Streaming with tool calls
  • Agent executor patterns

LiteLLM Agent and LiteLLM Utilities

  • Complete tool calling setup with LiteLLM
  • Function schema generation
  • Async completion with tools
  • Tool execution loop implementation

Pydantic AI Agent

  • Agent with tool definitions
  • Streaming with Pydantic AI
  • Type-safe tool calling

Using the Examples

These fixtures are fully functional and can be used as templates for your own agents:

# Example: Using the LiteLLM agent fixture
from tests.fixtures.agents.litellm_agent import LiteLLMAgent

agent = LiteLLMAgent()
response = await agent.ask("What's the weather in San Francisco?")

Each agent fixture includes:

  • Complete setup and initialization
  • Tool/function definitions
  • Streaming support
  • Error handling
  • Integration with Verse SDK tracing

API Reference

Decorators

@observe(name=None, type=str, capture_input=True, capture_output=True, capture_metadata=True, observation_type=str, **attrs)

Creates an observation.

Context Managers

verse.trace(name, session_id=None, user_id=None, scope=None, tags=None, metadata=None, **attrs)

Returns TraceContext

verse.span(name, input=None, output=None, level=None, op=None, status_message=None, metadata=None, **attrs)

Returns SpanContext

verse.generation(name, model=None, vendor=None, input=None, output=None, messages=None, usage=None, **attrs)

Returns GenerationContext

Validation

OtelGenAISpecCheck(raw_span)

A specification checker that validates spans against the OpenTelemetry GenAI semantic conventions. This tool helps ensure your LLM tracing data is healthy, compliant, and complete.

Purpose

The OtelGenAISpecCheck class:

  • Validates required attributes - Identifies missing required fields that could break downstream processing
  • Detects deprecated attributes - Flags attributes that should be migrated to newer conventions
  • Checks attribute values - Ensures values conform to allowed enums and formats
  • Highlights recommendations - Surfaces optional attributes that improve trace quality
  • Filters by context - Only validates attributes relevant to the specific provider and operation

Basic Usage

from verse_sdk.spec import OtelGenAISpecCheck

# Validate a span
checker = OtelGenAISpecCheck(span_data)

# Check for issues
if checker.has_errors:
    print("Span has validation errors")

if checker.could_improve:
    print("Span is missing recommended attributes")

if checker.has_deprecations:
    print("Span uses deprecated attributes")

Validation Methods

Checking Overall Status:

  • has_errors - Returns True if the span is missing required attributes or has invalid values
  • could_improve - Returns True if span is missing recommended (but not required) attributes
  • has_deprecations - Returns True if span uses deprecated attributes

Checking Specific Attributes:

  • is_attribute_valid(name) - Check if a specific attribute is valid
  • is_attribute_invalid(name) - Check if a specific attribute has errors
  • is_attribute_missing(name) - Check if a specific attribute is missing
  • is_attribute_deprecated(name) - Check if a specific attribute is deprecated

Extracting Data:

  • extract_metadata() - Get structured metadata (model, session, trace IDs, etc.)
  • extract_session_id() - Get session ID from standard or legacy attribute names
  • get_attribute_value(name, default) - Get any span attribute value
  • get_span_context_value(name, default) - Get span context fields (span_id, trace_id)
  • get_span_root_value(name, default) - Get root-level span fields (start_time, end_time)

Example: Validating Exported Spans

from verse_sdk import verse
from verse_sdk.spec import OtelGenAISpecCheck

# Custom exporter that validates spans
class ValidatingExporter:
    def export(self, spans):
        for span in spans:
            checker = OtelGenAISpecCheck(span)

            if checker.has_errors:
                print(f"ERROR: Span {checker.extract_metadata().span_id} has validation errors")

            if checker.could_improve:
                print("WARNING: Span could be improved with recommended attributes")

            if checker.is_attribute_deprecated("gen_ai.system"):
                print("INFO: Migrate 'gen_ai.system' to 'gen_ai.provider.name'")

verse.init(
    app_name="my-app",
    exporters=[ValidatingExporter()]
)

Example: Extracting Metadata

from verse_sdk.spec import OtelGenAISpecCheck

checker = OtelGenAISpecCheck(span_data)
metadata = checker.extract_metadata()

print(f"Model: {metadata.model}")
print(f"Provider: {metadata.model_provider}")
print(f"Session: {metadata.session_id}")
print(f"Trace: {metadata.trace_id}")
print(f"Environment: {metadata.environment}")

Helper Functions

get_current_trace_context() -> TraceContext

Get the current trace context from the active span. Raises ValueError if unavailable.

get_current_span_context() -> SpanContext

Get the current span context from the active span. Raises ValueError if unavailable.

get_current_generation_context() -> GenerationContext

Get the current generation context from the active span. Raises ValueError if unavailable.
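Because all three helpers raise ValueError outside an active span, a guarded lookup is a common pattern. The wrapper below is a sketch of that pattern, not an SDK function:

```python
def current_context_or_none(getter):
    """Return the context from a getter such as get_current_trace_context,
    or None when no span is active (i.e. the getter raises ValueError)."""
    try:
        return getter()
    except ValueError:
        return None
```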

Data Formats

Usage:

{"input_tokens": 150, "output_tokens": 50, "total_tokens": 200}

Score:

{"name": "quality", "value": 0.95, "comment": "Excellent"}
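A minimal validator for the score payload; treating comment as optional and only name and value as required is an assumption based on the example above:

```python
def is_valid_score(score: dict) -> bool:
    """Check a score dict: 'name' must be a string, 'value' a number,
    and 'comment', if present, a string."""
    if not isinstance(score.get("name"), str):
        return False
    value = score.get("value")
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        return False
    comment = score.get("comment")
    return comment is None or isinstance(comment, str)
```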

Best Practices

  1. Use decorators by default - Cleaner code with automatic input/output capture
  2. Use context managers for fine-grained control - When you need dynamic names or conditional logic
  3. Combine both approaches - Decorators for functions, context managers within functions
  4. Choose appropriate observation types:
    • Trace: Top-level workflows
    • Span: Sub-operations, processing steps
    • Generation: LLM API calls
    • Tool: Function/tool calls in agent systems
  5. Enable auto-instrumentation - Set vendors parameter for supported frameworks
  6. Use scope filtering - Route traces to different exporters by scope

Shutdown

verse.shutdown()  # Flush all traces before exit
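One common pattern (not an SDK requirement) is registering the shutdown hook with atexit, so traces are flushed even when the script returns early. The sketch below uses a stand-in callable in place of verse.shutdown:

```python
import atexit

calls = []

def shutdown():
    # Stand-in for verse.shutdown(); a real app would register that instead.
    calls.append("flushed")

# Runs at normal interpreter exit, flushing any buffered traces.
atexit.register(shutdown)
```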

Support

For issues and questions, please open an issue on GitHub.
