# Veris AI Python SDK

A Python package for Veris AI tools: tool mocking and MCP server integration for AI agent simulation. For more information, visit veris.ai.

## Quick Start
Install the SDK:

```bash
uv add veris-ai --extra fastapi --extra agents
```
### Minimal Example
```python
from fastapi import FastAPI
from pydantic import BaseModel
from agents import Agent, function_tool

from veris_ai import veris, Runner

app = FastAPI()

# 1. Define tools with @veris.mock() - returns simulated responses during simulation
@function_tool
@veris.mock()
def get_user(user_id: str) -> dict:
    """Get user information."""
    return db.get_user(user_id)  # db is your application's database client

# 2. Create an agent with the tools
agent = Agent(name="Assistant", model="gpt-4", tools=[get_user])

# 3. Expose an endpoint that invokes the agent
class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
async def chat(req: ChatRequest):
    result = await Runner.run(agent, req.message)
    return {"response": result.final_output}

# 4. Set up the MCP server
veris.set_fastapi_mcp(fastapi=app)
veris.fastapi_mcp.mount_http()
```
Your FastAPI app now exposes an MCP server at /mcp. During Veris simulations, decorated tools return simulated responses; in production, they execute normally.
## Installation

```bash
# Base package
uv add veris-ai

# With FastAPI MCP support
uv add veris-ai --extra fastapi

# With OpenAI agents support
uv add veris-ai --extra agents

# All extras
uv add veris-ai --extra dev --extra fastapi --extra observability --extra agents
```
## Configuration

| Variable | Purpose | Default |
|---|---|---|
| `VERIS_API_KEY` | API authentication key | None |
| `VERIS_MOCK_TIMEOUT` | Request timeout (seconds) | 90.0 |
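Both settings are read from environment variables. Assuming a POSIX shell, they might be set like this (the values below are placeholders):

```shell
export VERIS_API_KEY="your-api-key"   # authentication for the Veris API
export VERIS_MOCK_TIMEOUT="120"       # override the 90-second default
```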
## MCP Server
The SDK wraps fastapi-mcp to expose your FastAPI endpoints as MCP tools with automatic session handling.
### Basic Setup

```python
from fastapi import FastAPI

from veris_ai import veris

app = FastAPI()

# Minimal setup
veris.set_fastapi_mcp(fastapi=app)
veris.fastapi_mcp.mount_http()
```
### Configuration Options

```python
veris.set_fastapi_mcp(
    fastapi=app,
    # Server metadata
    name="My API Server",
    description="API for user management",
    # Filter which endpoints become MCP tools
    include_operations=["get_user", "update_user"],  # Only these operation IDs
    # OR
    exclude_operations=["internal_cleanup"],  # All except these
    # Response schema options
    describe_all_responses=True,  # Include all response schemas
    describe_full_response_schema=True,  # Include full JSON schema
)
```
Filtering rules:

- Cannot use both `include_operations` and `exclude_operations`
### Accessing the MCP Server

The MCP server is available at `/mcp` on your FastAPI base URL:

- Local: `http://localhost:8000/mcp`
- Production: `https://your-api.com/mcp`
## Tool Mocking
Decorators control function behavior during simulations.
### `@veris.mock()`

Returns simulated responses when a session is active:

```python
from veris_ai import veris

@veris.mock()
async def get_account_balance(account_id: str) -> dict:
    """Get account balance."""
    return await bank_api.get_balance(account_id)
```
Options:

```python
@veris.mock(
    mode="tool",  # "tool" (default) or "function"
    expects_response=True,  # Whether to wait for mock response
    cache_response=False,  # Cache responses for identical calls
)
```
### `@veris.spy()`

Executes the original function and logs the call/response:

```python
@veris.spy()
async def process_payment(amount: float, recipient: str) -> dict:
    """Process a payment - logs but executes normally."""
    return await payment_api.send(amount, recipient)
```
### `@veris.stub()`

Returns a fixed value during simulations:

```python
@veris.stub(return_value={"status": "success", "id": "test-123"})
async def create_order(items: list) -> dict:
    """Create order - returns stub in simulation."""
    return await order_api.create(items)
```
### Decorator Behavior Summary

| Decorator | Session Active | No Session |
|---|---|---|
| `@veris.mock()` | Returns simulated response | Executes normally |
| `@veris.spy()` | Executes and logs | Executes normally |
| `@veris.stub()` | Returns fixed value | Executes normally |
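The fallback behavior summarized above can be sketched with a plain decorator. This is not the SDK's implementation; `stub` and `session_active` here are hypothetical stand-ins showing the pattern: return a fixed value while a simulation session is active, and run the real function otherwise.

```python
import functools

def stub(return_value, session_active=lambda: False):
    """Conceptual sketch of a stub-style decorator (not the Veris SDK's code)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if session_active():
                return return_value  # session active: skip the real call
            return fn(*args, **kwargs)  # no session: execute normally
        return wrapper
    return decorator

# Pretend a session is active, so the fixed value is returned
@stub({"status": "success", "id": "test-123"}, session_active=lambda: True)
def create_order(items):
    return {"status": "real", "count": len(items)}
```

With `session_active` returning `False` instead, `create_order` would run its real body, matching the "No Session" column.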
## Veris Runner
For OpenAI Agents, use the Veris Runner to intercept tool calls without modifying tool code.
### Installation

```bash
uv add veris-ai --extra agents
```
### Basic Usage

```python
from agents import Agent, function_tool

from veris_ai import Runner

@function_tool
def calculator(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y

agent = Agent(
    name="Assistant",
    model="gpt-4",
    tools=[calculator],
)

# Use Veris Runner instead of OpenAI's Runner
result = await Runner.run(agent, "What's 10 + 5?")
```
### Selective Tool Interception

```python
from veris_ai import Runner, VerisConfig

# Only intercept specific tools
config = VerisConfig(include_tools=["calculator", "search"])
result = await Runner.run(agent, "Calculate 2+2", veris_config=config)

# Or exclude specific tools
config = VerisConfig(exclude_tools=["get_weather"])
result = await Runner.run(agent, "Check weather", veris_config=config)
```
### Per-Tool Configuration

```python
from veris_ai import Runner, VerisConfig, ToolCallOptions, ResponseExpectation

config = VerisConfig(
    tool_options={
        "calculator": ToolCallOptions(
            response_expectation=ResponseExpectation.REQUIRED,
            cache_response=True,
        ),
        "search": ToolCallOptions(
            response_expectation=ResponseExpectation.NONE,
        ),
    }
)
result = await Runner.run(agent, "Calculate and search", veris_config=config)
```
## Development

```bash
# Install with dev dependencies
uv add veris-ai --extra dev

# Lint and format
ruff check --fix .
ruff format .

# Run tests
pytest tests/ --cov=veris_ai

# Type check
mypy src/veris_ai
```
## License
MIT License - see LICENSE for details.