Core Airbender client SDK for Python
Airbender Python Client Library
The core Python SDK for integrating AI applications with Airbender's real-time governance and monitoring platform.
Overview
The Airbender Python client provides comprehensive instrumentation for LLM calls, enabling real-time monitoring, session management, and feedback collection. It supports multiple AI providers through a unified interface while maintaining detailed logs in the Airbender dashboard.
Features
- Multi-Provider Support: Unified API across OpenAI, Anthropic, and Google AI
- Session Management: Track related LLM interactions across multiple calls
- Feedback Collection: Built-in user rating and comment system
- Event Logging: Detailed activity tracking for governance and analytics
- Async-First Design: Built for high-performance applications with async/await
- Type Safety: Comprehensive Pydantic models with strict validation
- Error Handling: Robust retry logic and graceful degradation
Installation
# Core client library
pip install airbender-py-client
# Provider plugins (install as needed)
pip install airbender-py-openai # For OpenAI models
pip install airbender-py-anthropic # For Claude models
pip install airbender-py-google # For Gemini models
Quick Start
Basic Setup
from airbender_py_client import create_airbender
from airbender_py_openai import init as openai_init
from airbender_py_anthropic import init as anthropic_init
from airbender_py_google import init as google_init
# Initialize client with providers
client = await create_airbender(
    product_key="your-product-key",
    api_base_url="https://your-dashboard.com/api/v1",
    providers={
        "openai": openai_init(api_key="sk-..."),
        "anthropic": anthropic_init(api_key="sk-ant-..."),
        "google": google_init(api_key="..."),
    },
)
# Generate text using any provider
response = await client.generate_text(
    airbender_agent="chat-interaction",
    model="gpt-4o",  # Automatically selects OpenAI provider
    messages=[{"role": "user", "content": "Hello, world!"}],
    temperature=0.7,
)
print(response.text) # AI-generated response
print(response.usage.total_tokens) # Token usage information
# Submit user feedback
await client.send_feedback(
    rating=5,
    comment="Great response!",
    log_id=response.log_id,
)
Environment Configuration
import os
from dotenv import load_dotenv
load_dotenv()
# Required environment variables
AIRBENDER_API_BASE_URL = os.getenv("AIRBENDER_API_BASE_URL")
AIRBENDER_PRODUCT_KEY = os.getenv("AIRBENDER_PRODUCT_KEY")
# Provider API keys
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
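The loaded values can then be passed straight to the client factory. A minimal sketch with only the OpenAI provider configured, using the same initializers shown in the Quick Start:
from airbender_py_client import create_airbender
from airbender_py_openai import init as openai_init

client = await create_airbender(
    product_key=AIRBENDER_PRODUCT_KEY,
    api_base_url=AIRBENDER_API_BASE_URL,
    providers={"openai": openai_init(api_key=OPENAI_API_KEY)},
)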
Core Concepts
Sessions
Sessions group related LLM interactions, enabling context tracking and conversation management.
from airbender_py_client import create_airbender, resume_airbender

# Create a new session
client = await create_airbender(
    product_key="your-product-key",
    providers={...},
)

# Resume an existing session
client = await resume_airbender(
    session_id="existing-session-id",
    product_key="your-product-key",
    providers={...},
)
# Access session information
print(client.session.session_id)
print(client.session.product_key)
Unified Message Interface
The ChatMessage class provides a unified interface that works across all providers, automatically handling format conversion:
from airbender_py_client.models import ChatMessage
# Create messages using the unified interface
messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="Hello, how are you?"),
    ChatMessage(role="assistant", content="I'm doing well, thank you!"),
]

# Works with any provider - automatic format conversion
response = await client.generate_text(
    airbender_agent="chat",
    model="gpt-4o",  # OpenAI format
    messages=messages,
)

response = await client.generate_text(
    airbender_agent="chat",
    model="claude-3-haiku-20240307",  # Converts to Anthropic format
    messages=messages,
)

response = await client.generate_text(
    airbender_agent="chat",
    model="gemini-1.5-flash-8b",  # Converts to Google format
    messages=messages,
)
Message Format Conversion:
- OpenAI: role: "user" | "assistant" | "system" (native format)
- Anthropic: role: "human" | "assistant" | "system" (user → human)
- Google: role: "user" | "model" | "system" (assistant → model)
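The same conversion can be invoked directly through ChatMessage.to_provider_format() (documented under Data Models below). A small sketch; the exact dictionary layout returned is an assumption, and only the role mapping follows the table above:
from airbender_py_client.models import ChatMessage

user_msg = ChatMessage(role="user", content="Hi there!")
assistant_msg = ChatMessage(role="assistant", content="Hello!")

user_msg.to_provider_format("openai")       # role stays "user"
user_msg.to_provider_format("anthropic")    # role becomes "human"
assistant_msg.to_provider_format("google")  # role becomes "model"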
Legacy Support:
# Also supports raw dictionaries (for backward compatibility)
messages = [
    {"role": "user", "content": "Hello"}
]
Providers & Auto-Fallback
Providers handle AI model integration through a consistent interface with automatic provider selection and fallback capabilities:
# Automatic provider selection by model name
response = await client.generate_text(
    airbender_agent="assistant",
    model="gpt-4o",  # Automatically selects OpenAI provider
    messages=[{"role": "user", "content": "Hello"}],
)

response = await client.generate_text(
    airbender_agent="assistant",
    model="claude-3-haiku-20240307",  # Automatically selects Anthropic provider
    messages=[{"role": "user", "content": "Hello"}],
)

# Explicit provider selection
from airbender_py_client.models import ModelReference

response = await client.generate_text(
    airbender_agent="assistant",
    model=ModelReference(provider="google", model_id="gemini-1.5-flash-8b"),
    messages=[{"role": "user", "content": "Hello"}],
)
Auto-Fallback System: The client automatically finds the right provider for any model and can fall back across providers:
# If you have multiple providers configured
client = await create_airbender(
    product_key="your-key",
    providers={
        "openai": openai_init(api_key="sk-..."),
        "anthropic": anthropic_init(api_key="sk-ant-..."),
        "google": google_init(api_key="..."),
    },
)

# Just specify the model - automatic provider selection
response = await client.generate_text(
    airbender_agent="chat",
    model="gpt-4o",  # Finds OpenAI provider automatically
    messages=messages,
)

# Works even if you don't know which provider has the model
response = await client.generate_text(
    airbender_agent="chat",
    model="gemini-2.0-flash",  # Finds Google provider automatically
    messages=messages,
)

# Error handling with helpful model suggestions
try:
    response = await client.generate_text(
        airbender_agent="chat",
        model="unknown-model",
        messages=messages,
    )
except ValueError as e:
    # Shows available models: "Available models: {'openai': ['gpt-4o', ...], ...}"
    print(f"Model error: {e}")
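If a call needs to survive a provider outage, the same unified interface also makes it easy to retry against an alternative model. A minimal sketch; the fallback chain below is illustrative application code, not a built-in feature of the client:
# Try models in order until one succeeds (illustrative fallback chain)
fallback_models = ["gpt-4o", "claude-3-haiku-20240307", "gemini-1.5-flash-8b"]

response = None
for model in fallback_models:
    try:
        response = await client.generate_text(
            airbender_agent="chat",
            model=model,
            messages=messages,
        )
        break  # stop at the first provider that answers
    except Exception as exc:
        print(f"{model} failed: {exc}")

if response is None:
    raise RuntimeError("All fallback models failed")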
Supported Models
OpenAI Provider:
- gpt-4o
- gpt-4o-mini
- gpt-4-turbo
Anthropic Provider:
- claude-3-haiku-20240307
- claude-3-7-sonnet-20250219
Google Provider:
- gemini-1.5-flash-8b
- gemini-1.5-pro-latest
- gemini-2.0-flash
- gemini-2.0-flash-lite
API Reference
AirbenderClient
The main client class for interacting with AI providers.
generate_text()
Generate text using any configured provider.
async def generate_text(
    self,
    airbender_agent: str,
    model: Union[str, ModelReference],
    messages: List[Dict[str, str]],
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
    **kwargs,
) -> TextGenerationResponse
Parameters:
- airbender_agent: Identifier for the AI agent/use case
- model: Model name (string) or ModelReference for explicit provider selection
- messages: List of conversation messages in OpenAI format
- temperature: Sampling temperature (0.0-2.0)
- max_tokens: Maximum tokens to generate
- **kwargs: Additional model-specific parameters
Returns:
TextGenerationResponse with generated text, usage stats, and metadata
send_feedback()
Submit user feedback for a generated response.
async def send_feedback(
    self,
    rating: int,
    comment: Optional[str] = None,
    log_id: Optional[str] = None,
    update_id: Optional[str] = None,
) -> None
Parameters:
- rating: User rating (1-5 scale)
- comment: Optional feedback comment
- log_id: Log entry ID to associate feedback with
- update_id: Alternative update ID for feedback association
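For example, feedback can be attached via either identifier returned on TextGenerationResponse (see Data Models below):
response = await client.generate_text(
    airbender_agent="chat",
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize my last ticket."}],
)

# Associate feedback by log ID...
await client.send_feedback(rating=4, comment="Mostly accurate", log_id=response.log_id)

# ...or by update ID
await client.send_feedback(rating=4, update_id=response.update_id)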
Factory Functions
create_airbender()
Create a new Airbender client with a fresh session.
async def create_airbender(
    product_key: str,
    api_base_url: Optional[str] = None,
    providers: Optional[Dict[str, Any]] = None,
    session_config: Optional[Dict[str, Any]] = None,
) -> AirbenderClient
resume_airbender()
Resume an existing session with a known session ID.
async def resume_airbender(
    session_id: str,
    product_key: str,
    api_base_url: Optional[str] = None,
    providers: Optional[Dict[str, Any]] = None,
) -> AirbenderClient
Data Models
ChatMessage
Universal chat message format that works across all providers.
class ChatMessage(BaseModel):
    role: Literal["system", "user", "assistant", "model", "function", "tool"]
    content: str | list[dict[str, Any]]  # String or structured content
    name: str | None = None  # For function/tool messages
    function_call: dict[str, Any] | None = None
    tool_calls: list[dict[str, Any]] | None = None

    def to_provider_format(self, provider: str) -> dict[str, Any]:
        """Convert to provider-specific format."""
        # Handles role conversion: user↔human, assistant↔model
        ...

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> "ChatMessage":
        """Create from dictionary with validation."""
        ...
Example usage:
from airbender_py_client.models import ChatMessage
# Create messages that work with any provider
messages = [
    ChatMessage(role="system", content="You are a helpful assistant."),
    ChatMessage(role="user", content="What's the weather like?"),
    ChatMessage(
        role="assistant",
        content="I'd need your location to check the weather.",
    ),
]

# Automatically converts to provider format
response = await client.generate_text(
    airbender_agent="weather-bot",
    model="gpt-4o",  # OpenAI: roles stay as-is
    messages=messages,
)

response = await client.generate_text(
    airbender_agent="weather-bot",
    model="claude-3-haiku-20240307",  # Anthropic: user → human, assistant stays assistant
    messages=messages,
)

response = await client.generate_text(
    airbender_agent="weather-bot",
    model="gemini-1.5-flash-8b",  # Google: assistant → model
    messages=messages,
)
TextGenerationResponse
Response from text generation operations.
@dataclass
class TextGenerationResponse:
    text: str                     # Generated text content
    log_id: str                   # Airbender log entry ID
    update_id: str                # Update identifier for feedback
    usage: UsageStats             # Token usage statistics
    provider: str                 # Provider that handled the request
    model: str                    # Model used for generation
    finish_reason: Optional[str]  # Generation stop reason
UsageStats
Token usage information.
@dataclass
class UsageStats:
    prompt_tokens: int      # Tokens in the input
    completion_tokens: int  # Tokens in the output
    total_tokens: int       # Total tokens used
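Usage stats are handy for lightweight cost and quota tracking. A minimal sketch that tallies tokens across several responses; the accumulation logic is illustrative application code, not part of the SDK:
total_prompt = total_completion = 0

for response in responses:  # e.g. the results of several generate_text() calls
    total_prompt += response.usage.prompt_tokens
    total_completion += response.usage.completion_tokens

print(f"Prompt tokens: {total_prompt}, completion tokens: {total_completion}")
print(f"Grand total: {total_prompt + total_completion}")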
SessionInfo
Session metadata and state.
@dataclass
class SessionInfo:
    session_id: str           # Unique session identifier
    product_key: str          # Associated product key
    created_at: datetime      # Session creation timestamp
    metadata: Dict[str, Any]  # Additional session data
Configuration
Environment Variables
# Required
AIRBENDER_API_BASE_URL=https://your-dashboard.com/api/v1
AIRBENDER_PRODUCT_KEY=your-product-key
# Provider API Keys (as needed)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
# Optional
AIRBENDER_MAX_RETRIES=3
AIRBENDER_TIMEOUT=30
Programmatic Configuration
client = await create_airbender(
    product_key=config.product_key,
    api_base_url=config.api_base_url,
    providers={...},
    # Add any other config options here
    max_retries=3,
    timeout=30.0,
    debug=True,
)
Advanced Usage
Session Management
# Create client with session metadata
client = await create_airbender(
    product_key="your-product-key",
    providers={...},
    session_config={
        "user_id": "user123",
        "conversation_type": "support_chat",
        "metadata": {"channel": "web", "version": "1.0"},
    },
)

# Access session information
session = client.session
print(f"Session ID: {session.session_id}")
print(f"Created: {session.created_at}")
Concurrent Operations
import asyncio

# Generate multiple responses concurrently
tasks = [
    client.generate_text(
        airbender_agent="summarizer",
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize: {doc}"}],
    )
    for doc in documents  # documents: a list of input strings
]
responses = await asyncio.gather(*tasks)

# Process results
for i, response in enumerate(responses):
    print(f"Summary {i+1}: {response.text}")
    print(f"Tokens used: {response.usage.total_tokens}")
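When the document list is large, it is often worth bounding concurrency so a burst of calls does not hit provider rate limits. A minimal sketch using asyncio.Semaphore; the limit of 5 is an arbitrary example, not an SDK default:
semaphore = asyncio.Semaphore(5)  # at most 5 in-flight requests

async def summarize(doc: str):
    async with semaphore:
        return await client.generate_text(
            airbender_agent="summarizer",
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Summarize: {doc}"}],
        )

responses = await asyncio.gather(*(summarize(doc) for doc in documents))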
Development
Building from Source
# Clone repository
git clone https://github.com/your-org/airbender.git
cd airbender
# Install dependencies with uv
uv sync
# Run tests
uv run pytest libs/airbender-py/client/tests/
# Build package
uv build libs/airbender-py/client/
Architecture
Design Principles
- Async-First: All I/O operations use async/await for optimal performance
- Type Safety: Comprehensive Pydantic models with strict validation
- Error Resilience: Exponential backoff retry logic with circuit breaking (see the sketch after this list)
- Provider Abstraction: Unified interface across different AI providers
- Observability: Detailed logging and metrics for monitoring
- Testability: Comprehensive test coverage with mocked dependencies
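Retries are handled inside the client, but for intuition, the Error Resilience principle roughly corresponds to the following pattern. A simplified, illustrative sketch only, not the client's actual implementation:
import asyncio
import random

async def call_with_backoff(coro_factory, max_retries: int = 3):
    """Retry an async call with exponential backoff and jitter (illustrative only)."""
    for attempt in range(max_retries + 1):
        try:
            return await coro_factory()
        except Exception:
            if attempt == max_retries:
                raise
            delay = (2 ** attempt) + random.random()  # 1s, 2s, 4s, ... plus jitter
            await asyncio.sleep(delay)

# Usage (illustrative): wrap a call in a zero-argument coroutine factory
# response = await call_with_backoff(lambda: client.generate_text(...))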
Integration Examples
Streamlit Application
import asyncio

import streamlit as st
from airbender_py_client import create_airbender
from airbender_py_openai import init as openai_init

@st.cache_resource
def get_client():
    return asyncio.run(create_airbender(
        product_key=st.secrets["AIRBENDER_PRODUCT_KEY"],
        providers={"openai": openai_init(api_key=st.secrets["OPENAI_API_KEY"])},
    ))

# UI
st.title("AI Chat Assistant")
user_input = st.text_input("Enter your message:")

if st.button("Send") and user_input:
    client = get_client()
    with st.spinner("Generating response..."):
        response = asyncio.run(client.generate_text(
            airbender_agent="streamlit-chat",
            model="gpt-4o",
            messages=[{"role": "user", "content": user_input}],
        ))
    st.write(response.text)

    # Feedback
    rating = st.selectbox("Rate this response:", [5, 4, 3, 2, 1])
    if st.button("Submit Feedback"):
        asyncio.run(client.send_feedback(rating=rating, log_id=response.log_id))
        st.success("Feedback submitted!")
FastAPI Integration
from typing import Optional

from fastapi import FastAPI, HTTPException
from airbender_py_client import create_airbender
from airbender_py_openai import init as openai_init

app = FastAPI()

# Initialize client at startup
@app.on_event("startup")
async def startup():
    app.state.airbender = await create_airbender(
        product_key="your-product-key",
        providers={"openai": openai_init(api_key="sk-...")},
    )

@app.post("/chat")
async def chat_endpoint(message: str):
    try:
        response = await app.state.airbender.generate_text(
            airbender_agent="api-chat",
            model="gpt-4o",
            messages=[{"role": "user", "content": message}],
        )
        return {"response": response.text, "log_id": response.log_id}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.post("/feedback")
async def feedback_endpoint(log_id: str, rating: int, comment: Optional[str] = None):
    await app.state.airbender.send_feedback(
        rating=rating,
        comment=comment,
        log_id=log_id,
    )
    return {"status": "success"}
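With the app running locally (for example via uvicorn, assuming the default http://localhost:8000 address), the endpoints above can be exercised with any HTTP client. A quick httpx sketch; since the handlers take plain parameters, FastAPI reads them as query parameters:
import asyncio
import httpx

async def main():
    async with httpx.AsyncClient(base_url="http://localhost:8000") as http:
        chat = await http.post("/chat", params={"message": "Hello!"})
        data = chat.json()
        print(data["response"])

        await http.post(
            "/feedback",
            params={"log_id": data["log_id"], "rating": 5, "comment": "Helpful"},
        )

asyncio.run(main())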
Troubleshooting
Common Issues
Authentication Errors:
# Verify product key is correct
client = await create_airbender(product_key="verify-this-key")
# Check API base URL
client = await create_airbender(
    product_key="key",
    api_base_url="https://correct-dashboard-url.com/api/v1",
)
Provider Errors:
# Verify provider API keys
from airbender_py_openai import init as openai_init
try:
    provider = openai_init(api_key="sk-verify-this-key")
except Exception as e:
    print(f"Provider initialization failed: {e}")
Network Issues:
# Increase timeouts for slow networks
from airbender_py_client.config import AirbenderConfig
config = AirbenderConfig(timeout=60.0, max_retries=5)
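The relaxed limits can also be supplied when creating the client, mirroring the keyword arguments shown under Programmatic Configuration:
client = await create_airbender(
    product_key="your-product-key",
    providers={...},
    timeout=60.0,
    max_retries=5,
)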
Debug Mode
import logging
# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
# Or use environment variable
import os
os.environ["AIRBENDER_DEBUG"] = "1"
Contributing
Development Setup
# Clone repository
git clone https://github.com/your-org/airbender.git
cd airbender
# Install Python plugin for Nx
npm install -D @nxlv/python
# Sync Python dependencies
npx nx run airbender-py-client:sync
# Run tests
npx nx run airbender-py-client:test
# Run linting
npx nx run airbender-py-client:lint
Code Style
The project uses Ruff for linting and formatting:
# Lint code
npx nx run airbender-py-client:lint
# Format code
ruff format libs/airbender-py/client/src/
Testing Guidelines
- Write comprehensive tests for all public methods
- Use pytest fixtures for common test setup
- Mock external dependencies with pytest-httpx
- Aim for >90% test coverage
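As a starting point, pytest-httpx provides an httpx_mock fixture for stubbing HTTP traffic. The sketch below mocks a plain httpx call to show the pattern; the URL and payload are placeholders rather than the Airbender API's real endpoints, and it assumes pytest-asyncio is installed for the async test:
import httpx
import pytest

@pytest.mark.asyncio
async def test_mocked_dashboard_call(httpx_mock):
    # Every matching request receives this canned response
    httpx_mock.add_response(json={"status": "ok"})

    async with httpx.AsyncClient() as http:
        resp = await http.post("https://example.test/api/v1/logs", json={"event": "demo"})

    assert resp.json() == {"status": "ok"}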
Resources
- Airbender Dashboard Documentation
- Provider Libraries Documentation
- Python Demo Application
- API Reference
Support
For issues and questions:
- Check the troubleshooting guide
- Review GitHub Issues
- Join the community Discord
- Contact support at support@airbender.dev
License
Licensed under the Apache License 2.0. See LICENSE for details.
Download files
Download the file for your platform.
File details
Details for the file airbender_py_client-0.1.0.tar.gz.
File metadata
- Download URL: airbender_py_client-0.1.0.tar.gz
- Upload date:
- Size: 17.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6d09492aaf8c2000bb65a2e0dfb95e0f35442308ac1d4ab448e5bcc7095313e5 |
| MD5 | 9d859af0f28f84d84bd8233f570def9d |
| BLAKE2b-256 | f4e30af53c6cfdd12903c77972c2a82961ac71a1c871a0e9b7528a5f09a7c16d |
File details
Details for the file airbender_py_client-0.1.0-py3-none-any.whl.
File metadata
- Download URL: airbender_py_client-0.1.0-py3-none-any.whl
- Upload date:
- Size: 22.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3f0f4851aaa765c49f988bc21feb412dda6f7f0b0117f24a5c3cb551cb93af41 |
| MD5 | f4dbe0e171d7790fc591b35d9f7cb6d4 |
| BLAKE2b-256 | f0cc0879e6485eb54787753e9d3c57115dee13dd4e9899ffe940c89b6458c20e |