
Veris AI Python SDK

For more information visit us at https://veris.ai

A Python package for Veris AI tools with simulation capabilities and FastAPI MCP (Model Context Protocol) integration.

Quick Reference

Purpose: Tool mocking, tracing, and FastAPI MCP integration for AI agent development
Core Components: tool_mock, api_client, observability, agents_wrapper, fastapi_mcp, jaeger_interface
Deep Dive: Module Architecture, Testing Guide, Usage Examples
Source of Truth: Implementation details in src/veris_ai/ source code

Installation

# Base package
uv add veris-ai

# With optional extras
uv add "veris-ai[dev,fastapi,observability,agents]"

Installation Profiles:

  • dev: Development tools (ruff, pytest, mypy)
  • fastapi: FastAPI MCP integration
  • observability: OpenTelemetry tracing
  • agents: OpenAI agents integration

Import Patterns

Semantic Tag: import-patterns

# Core imports (base dependencies only)
from veris_ai import veris, JaegerClient

# Optional features (require extras)
from veris_ai import init_observability, instrument_fastapi_app  # Requires observability extras
from veris_ai import Runner, VerisConfig  # Requires agents extras

Complete Import Strategies: See examples/README.md for different import approaches, conditional features, and integration patterns.
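The conditional-feature pattern referenced above can be sketched as follows. This is an illustrative convention, not part of the SDK: the HAS_* flags and the setup_tracing helper are hypothetical names used only for this example.

```python
# Guard optional features so code runs against the base install; the
# HAS_* flag names are an illustrative convention, not part of the SDK.
try:
    from veris_ai import init_observability, instrument_fastapi_app
    HAS_OBSERVABILITY = True
except ImportError:
    HAS_OBSERVABILITY = False

try:
    from veris_ai import Runner, VerisConfig
    HAS_AGENTS = True
except ImportError:
    HAS_AGENTS = False

def setup_tracing() -> bool:
    """Initialize tracing only when the observability extra is present."""
    if HAS_OBSERVABILITY:
        init_observability()
        return True
    return False
```

This keeps application code importable whether or not the extras are installed.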

Configuration

Semantic Tag: environment-config

Variable             Purpose                      Default
VERIS_API_KEY        API authentication key       None
VERIS_MOCK_TIMEOUT   Request timeout (seconds)    90.0

Advanced Configuration (rarely needed):

  • VERIS_API_URL: Override default API endpoint (defaults to production)

Configuration Details: See src/veris_ai/api_client.py for API configuration and src/veris_ai/tool_mock.py for environment handling logic.
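A setting like VERIS_MOCK_TIMEOUT is typically resolved as shown below. This is a hedged sketch of the usual pattern; the SDK's actual parsing lives in src/veris_ai/ and may differ in detail.

```python
import os

# Illustrative sketch of how a setting like VERIS_MOCK_TIMEOUT is
# typically resolved; the SDK's actual environment handling may differ.
def resolve_mock_timeout(default: float = 90.0) -> float:
    """Read VERIS_MOCK_TIMEOUT from the environment, falling back to the default."""
    raw = os.environ.get("VERIS_MOCK_TIMEOUT")
    if raw is None:
        return default
    try:
        return float(raw)
    except ValueError:
        return default

os.environ["VERIS_MOCK_TIMEOUT"] = "30"
print(resolve_mock_timeout())  # 30.0
```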

SDK Observability Helpers

The SDK provides optional-safe observability helpers that standardize OpenTelemetry setup and W3C context propagation across services.

from fastapi import FastAPI
from veris_ai import init_observability, instrument_fastapi_app

# Initialize tracing/export early (no-op if dependencies are absent)
init_observability()

app = FastAPI()

# Ensure inbound HTTP requests continue W3C traces
instrument_fastapi_app(app)

Observability Environment

Set these environment variables to enable exporting traces via OTLP (Logfire) and ensure consistent service naming:

Variable                      Example                            Notes
OTEL_SERVICE_NAME             simulation-server                  Should match VERIS_SERVICE_NAME used elsewhere to keep traces aligned
OTEL_EXPORTER_OTLP_ENDPOINT   https://logfire-api.pydantic.dev   OTLP HTTP endpoint
LOGFIRE_TOKEN                 FILL_IN                            Logfire API token used by the exporter
OTEL_EXPORTER_OTLP_HEADERS    'Authorization=FILL_IN'            Quote the value to preserve the "="; typically Authorization=Bearer <LOGFIRE_TOKEN>

Quick setup example:

export OTEL_SERVICE_NAME="simulation-server"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://logfire-api.pydantic.dev"
export LOGFIRE_TOKEN="<your-token>"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=${LOGFIRE_TOKEN}"

Then initialize in code early in your process:

from veris_ai import init_observability, instrument_fastapi_app
init_observability()
app = FastAPI()
instrument_fastapi_app(app)

What this enables:

  • Sets global W3C propagator (TraceContext + Baggage)
  • Optionally instruments FastAPI, requests, httpx, MCP client if installed
  • Includes request hooks to attach outbound traceparent on HTTP calls for continuity

End-to-end propagation with the simulator:

  • The simulator injects W3C headers when connecting to your FastAPI MCP endpoints
  • The SDK injects W3C headers on /v3/tool_mock and logging requests back to the simulator
  • Result: customer agent spans and tool mocks appear under the same distributed trace
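The W3C context that travels between these services is carried in the traceparent header, which has a fixed version-trace_id-span_id-flags layout. The stdlib-only sketch below hand-builds one to show what actually crosses the wire; in practice OpenTelemetry's propagators generate and parse it for you.

```python
import re
import secrets

# Hand-build a W3C traceparent header to show what actually travels in
# HTTP headers between the simulator, the SDK, and your FastAPI app.
def make_traceparent() -> str:
    trace_id = secrets.token_hex(16)  # 128-bit trace ID, hex-encoded
    span_id = secrets.token_hex(8)    # 64-bit parent span ID
    flags = "01"                      # trace-flags: sampled
    return f"00-{trace_id}-{span_id}-{flags}"

header = make_traceparent()
assert re.fullmatch(r"00-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}", header)
print(header)
```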

Function Mocking

Semantic Tag: tool-mocking

Session-Based Activation

The SDK uses session-based activation to determine when to enable mocking. Choose one of these methods to set a session ID:

Option 1: Manual Setting

from veris_ai import veris

# Explicitly set a session ID
veris.set_session_id("your-session-id")

# Now decorated functions will use mock responses
result = await your_mocked_function()

# Clear session to disable mocking
veris.clear_session_id()

Option 2: Automatic Extraction (FastAPI MCP)

# When using FastAPI MCP integration, session IDs are 
# automatically extracted from OAuth2 bearer tokens
veris.set_fastapi_mcp(...)
# No manual session management needed

How it works internally: Regardless of which method you use, session IDs are stored in Python context variables (contextvars). This ensures proper isolation between concurrent requests and automatic propagation through the call stack.
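The isolation described above comes from the stdlib contextvars module. The sketch below demonstrates the mechanism with a hypothetical session variable; it is not the SDK's internal variable, just an illustration of why concurrent requests cannot see each other's session IDs.

```python
import asyncio
import contextvars
from typing import List, Optional

# A hypothetical session variable mirroring how the SDK isolates session
# IDs per request; this is NOT the SDK's internal variable.
session_id: contextvars.ContextVar[Optional[str]] = contextvars.ContextVar(
    "session_id", default=None
)

async def handle_request(sid: str) -> Optional[str]:
    session_id.set(sid)
    await asyncio.sleep(0)  # yield so concurrent tasks interleave
    return session_id.get()  # still this task's own value

async def main() -> List[Optional[str]]:
    # Each asyncio task runs in a copy of the context, so one request's
    # session ID never leaks into another's.
    return list(await asyncio.gather(*(handle_request(f"s{i}") for i in range(3))))

print(asyncio.run(main()))  # ['s0', 's1', 's2']
```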

Core Decorators

from veris_ai import veris

# Mock decorator: Returns simulated responses when session ID is set
@veris.mock()
async def your_function(param1: str, param2: int) -> dict:
    """Function documentation for LLM context."""
    return {"result": "actual implementation"}

# Spy decorator: Executes function and logs calls/responses
@veris.spy()
async def monitored_function(data: str) -> dict:
    return process_data(data)

# Stub decorator: Returns fixed value in simulation
@veris.stub(return_value={"status": "success"})
async def get_data() -> dict:
    return await fetch_from_api()

Behavior: When a session ID is set, decorators activate their respective behaviors (mock responses, logging, or stubbed values). Without a session ID, functions execute normally.

Implementation: See src/veris_ai/tool_mock.py for decorator logic and API integration.

Core Instrument

Instrument OpenAI agents without changing tool code by using the SDK's Runner (extends OpenAI's Runner) and an optional VerisConfig for fine control.

Requirements:

  • Install extras: uv add "veris-ai[agents]"
  • Set ENV=simulation to enable mock behavior (otherwise passes through)

Minimal usage:

from veris_ai import Runner

result = await Runner.run(agent, "What's 10 + 5?")

Select tools to intercept (include/exclude):

from veris_ai import Runner, VerisConfig

config = VerisConfig(include_tools=["calculator", "search_web"])  # or exclude_tools=[...]
result = await Runner.run(agent, "Process this", veris_config=config)

Per-tool behavior via ToolCallOptions:

from veris_ai import Runner, VerisConfig, ToolCallOptions, ResponseExpectation

config = VerisConfig(
    tool_options={
        "calculator": ToolCallOptions(
            response_expectation=ResponseExpectation.REQUIRED,
            cache_response=True,
            mode="tool",
        ),
        "search_web": ToolCallOptions(
            response_expectation=ResponseExpectation.NONE,
            cache_response=False,
            mode="spy",
        ),
    }
)

result = await Runner.run(agent, "Calculate and search", veris_config=config)


OpenAI Agents Integration

Semantic Tag: openai-agents

The SDK provides seamless integration with OpenAI's agents library through the Runner class, which extends OpenAI's Runner to intercept tool calls and route them through Veris's mocking infrastructure.

Installation

# Install with agents support
uv add "veris-ai[agents]"

Basic Usage

from veris_ai import veris, Runner, VerisConfig
from agents import Agent, function_tool

# Define your tools
@function_tool
def calculator(x: int, y: int, operation: str = "add") -> int:
    """Performs arithmetic operations."""
    if operation == "add":
        return x + y
    if operation == "subtract":
        return x - y
    raise ValueError(f"Unsupported operation: {operation}")

# Create an agent with tools
agent = Agent(
    name="Assistant",
    model="gpt-4",
    tools=[calculator],
    instructions="You are a helpful assistant.",
)

# Use Veris Runner instead of OpenAI's Runner
result = await Runner.run(agent, "Calculate 10 + 5")

# Or with configuration
config = VerisConfig(include_tools=["calculator"])
result = await Runner.run(agent, "Calculate 10 + 5", veris_config=config)

Selective Tool Interception

Control which tools are intercepted using VerisConfig:

from veris_ai import Runner, VerisConfig

# Only intercept specific tools
config = VerisConfig(include_tools=["calculator", "search_web"])
result = await Runner.run(agent, "Process this", veris_config=config)

# Or exclude specific tools from interception
config = VerisConfig(exclude_tools=["get_weather"])
result = await Runner.run(agent, "Check weather", veris_config=config)

Advanced Tool Configuration

Fine-tune individual tool behavior using ToolCallOptions:

from veris_ai import Runner, VerisConfig, ResponseExpectation, ToolCallOptions

# Configure specific tool behaviors
config = VerisConfig(
    tool_options={
        "calculator": ToolCallOptions(
            response_expectation=ResponseExpectation.REQUIRED,  # Always expect response
            cache_response=True,  # Cache responses for identical calls
            mode="tool"  # Use tool mode (default)
        ),
        "search_web": ToolCallOptions(
            response_expectation=ResponseExpectation.NONE,  # Don't wait for response
            cache_response=False,
            mode="spy"  # Log calls but execute normally
        )
    }
)

result = await Runner.run(agent, "Calculate and search", veris_config=config)

ToolCallOptions Parameters:

  • response_expectation: Control response behavior
    • AUTO (default): Automatically determine based on context
    • REQUIRED: Always wait for mock response
    • NONE: Don't wait for response
  • cache_response: Cache responses for identical tool calls
  • mode: Tool execution mode
    • "tool" (default): Standard tool execution
    • "function": Function mode

Key Features:

  • Drop-in replacement: Use Runner from veris_ai instead of OpenAI's Runner
  • Extends OpenAI Runner: Inherits all functionality while adding Veris capabilities
  • Automatic session management: Integrates with Veris session IDs
  • Selective mocking: Include or exclude specific tools from interception

Implementation: See src/veris_ai/agents_wrapper.py for the integration logic and examples/openai_agents_example.py for complete examples.

FastAPI MCP Integration

Semantic Tag: fastapi-mcp

Expose FastAPI endpoints as MCP tools for AI agent consumption using HTTP transport.

from fastapi import FastAPI
from veris_ai import veris

app = FastAPI()

# Enable MCP integration with HTTP transport
veris.set_fastapi_mcp(
    fastapi=app,
    name="My API Server",
    include_operations=["get_users", "create_user"],
    exclude_tags=["internal"]
)

# Mount the MCP server with HTTP transport (recommended)
veris.fastapi_mcp.mount_http()

Key Features:

  • HTTP Transport: Uses Streamable HTTP protocol for better session management
  • Automatic schema conversion: FastAPI OpenAPI → MCP tool definitions
  • Session management: Bearer token → session ID mapping
  • Filtering: Include/exclude operations and tags
  • Authentication: OAuth2 integration

Transport Protocol: The SDK uses HTTP transport (via mount_http()) which implements the MCP Streamable HTTP specification, providing robust connection handling and fixing session routing issues with concurrent connections.

Configuration Reference: See function signature in src/veris_ai/tool_mock.py for all set_fastapi_mcp() parameters.

Utility Functions

Semantic Tag: json-schema-utils

from veris_ai.utils import extract_json_schema

# Schema extraction from types
user_schema = extract_json_schema(User)  # Pydantic models
list_schema = extract_json_schema(List[str])  # Generics

Supported Types: Built-in types, generics (List, Dict, Union), Pydantic models, TypedDict, forward references.
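Converters like this generally build on the typing module's introspection primitives. The sketch below shows that underlying mechanism for a few generic forms; the schema shapes it emits are illustrative, not the SDK's actual output format.

```python
from typing import Dict, List, Union, get_args, get_origin

# Illustrative only: the stdlib introspection a converter like
# extract_json_schema likely builds on; the output shape is a sketch.
def sketch_schema(tp) -> dict:
    origin = get_origin(tp)
    if origin is list:
        (item,) = get_args(tp)
        return {"type": "array", "items": sketch_schema(item)}
    if origin is dict:
        _key, value = get_args(tp)
        return {"type": "object", "additionalProperties": sketch_schema(value)}
    if origin is Union:
        return {"anyOf": [sketch_schema(arg) for arg in get_args(tp)]}
    return {"str": {"type": "string"}, "int": {"type": "integer"}}.get(
        tp.__name__, {"type": "object"}
    )

print(sketch_schema(List[str]))  # {'type': 'array', 'items': {'type': 'string'}}
```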

Implementation: See src/veris_ai/utils.py for type conversion logic.

Development

Semantic Tag: development-setup

Requirements: Python 3.11+, uv package manager

# Install with dev dependencies
uv add "veris-ai[dev]"

# Quality checks
ruff check --fix .    # Lint and format
pytest --cov=veris_ai # Test with coverage

Testing & Architecture: See tests/README.md for test structure, fixtures, and coverage strategies. See src/veris_ai/README.md for module architecture and implementation flows.

Module Architecture

Semantic Tag: module-architecture

Core Modules: tool_mock (mocking), api_client (centralized API), agents_wrapper (OpenAI agents integration), jaeger_interface (trace queries), utils (schema conversion)

Complete Architecture: See src/veris_ai/README.md for module overview, implementation flows, and configuration details.

Jaeger Trace Interface

Semantic Tag: jaeger-query-api

from veris_ai.jaeger_interface import JaegerClient

client = JaegerClient("http://localhost:16686")
traces = client.search(service="veris-agent", tags={"error": "true"})

Complete Guide: See src/veris_ai/jaeger_interface/README.md for API reference, filtering strategies, and architecture details.
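A search like the one above presumably translates to a call against Jaeger's public HTTP query API (GET /api/traces). The stdlib-only sketch below builds such a query URL; JaegerClient's internals may differ from this.

```python
import json
from urllib.parse import urlencode

# Sketch of the request a trace search roughly translates to, using
# Jaeger's public HTTP query API; JaegerClient's internals may differ.
def build_search_url(base: str, service: str, tags: dict, limit: int = 20) -> str:
    # Jaeger's query API expects the tags filter as a JSON-encoded object.
    params = {"service": service, "tags": json.dumps(tags), "limit": limit}
    return f"{base}/api/traces?{urlencode(params)}"

url = build_search_url("http://localhost:16686", "veris-agent", {"error": "true"})
print(url)
```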


License: MIT License - see LICENSE file for details.
