Advanced Guardrails and Evaluation SDK for AI Agents
HaliosAI SDK
HaliosAI: Ship Reliable AI Agents Faster! 🚀
The HaliosAI SDK helps you catch tricky AI agent failures before they reach users. It supports both offline and live guardrail checks, streaming response validation, parallel processing, and multi-agent setups. Integration is seamless: add a single decorator and HaliosAI plugs into your existing agent workflows, adding safety and reliability without changes to your architecture.
Features
- 🛡️ Easy Integration: Simple decorators and patchers for existing AI agent code
- ⚡ Parallel Processing: Run guardrails and agent calls simultaneously for optimal performance
- 🌊 Streaming Support: Real-time guardrail evaluation for streaming responses
- 🤖 Multi-Agent Support: Per-agent guardrail profiles for complex AI systems
- 🔧 Framework Support: Built-in support for OpenAI, Anthropic, and OpenAI Agents
- 📊 Detailed Timing: Performance metrics and execution insights
- 🚨 Violation Handling: Automatic blocking and detailed error reporting
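To illustrate the parallel-processing idea, here is a minimal stand-alone sketch using plain asyncio. This is conceptual only, not the SDK's internals: names like `check_request_guardrails` and the `asyncio.sleep` placeholders are illustrative stand-ins for the real guardrail service and model calls.

```python
import asyncio

# Conceptual sketch: start the request-side guardrail check and the LLM
# call together, then discard the response if the check failed.
async def check_request_guardrails(messages):
    await asyncio.sleep(0.01)  # stands in for a remote guardrail call
    return all("forbidden" not in m["content"] for m in messages)

async def call_llm(messages):
    await asyncio.sleep(0.02)  # stands in for the real model call
    return "model response"

async def guarded_call(messages):
    # Both coroutines run concurrently; total latency is roughly the
    # slower of the two rather than their sum.
    passed, response = await asyncio.gather(
        check_request_guardrails(messages),
        call_llm(messages),
    )
    if not passed:
        raise ValueError("Request blocked by guardrails")
    return response

result = asyncio.run(guarded_call([{"role": "user", "content": "Hello!"}]))
print(result)  # -> model response
```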
Installation
```bash
pip install haliosai
```

For specific LLM providers:

```bash
pip install "haliosai[openai]"   # For OpenAI support
pip install "haliosai[agents]"   # For OpenAI Agents support
pip install "haliosai[all]"      # For all providers
```
Prerequisites
- Get your API key: Visit console.halios.ai to obtain your HaliosAI API key
- Create an agent: Follow the documentation to create your first agent and configure guardrails
- Keep your agent_id handy: You'll need it for SDK integration
Quick Start
Basic Usage
```python
import asyncio
from openai import AsyncOpenAI
from haliosai import guarded_chat_completion

openai_client = AsyncOpenAI()

# Basic usage with concurrent guardrail processing (default)
@guarded_chat_completion(agent_id="your-agent-id")
async def call_llm(messages):
    return await openai_client.chat.completions.create(
        model="gpt-4",
        messages=messages
    )

# Use the guarded function
async def main():
    messages = [{"role": "user", "content": "Hello!"}]
    response = await call_llm(messages)

asyncio.run(main())
```
Configuration
Set your API key as an environment variable:
```bash
export HALIOS_API_KEY="your-api-key"
```
Or pass it directly:
```python
@guarded_chat_completion(
    agent_id="your-agent-id",
    api_key="your-api-key"
)
async def call_llm(messages):
    # Your agent implementation
    pass
```
OpenAI Agents Framework Integration
For native integration with the OpenAI Agents framework:

```python
from agents import Agent, Runner
from haliosai import RemoteInputGuardrail, RemoteOutputGuardrail

# Create guardrails
input_guardrail = RemoteInputGuardrail(agent_id="your-agent-id")
output_guardrail = RemoteOutputGuardrail(agent_id="your-agent-id")

# Create agent with guardrails
agent = Agent(
    name="Assistant",
    model="gpt-4o",
    instructions="You are a helpful assistant.",
    input_guardrails=[input_guardrail],
    output_guardrails=[output_guardrail]
)

# Use the agent normally - guardrails run automatically
result = await Runner.run(
    starting_agent=agent,
    input="Write a professional email"
)
```
Examples
Check out the examples/ directory for complete working examples.
Advanced Usage
Streaming Response Guardrails
```python
@guarded_chat_completion(
    agent_id="your-agent-id",
    streaming_guardrails=True,
    stream_buffer_size=100
)
async def stream_llm_call(messages):
    stream = await openai_client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        stream=True
    )
    async for chunk in stream:
        yield chunk

# Handle streaming events
async for event in stream_llm_call(messages):
    if event['type'] == 'chunk':
        print(event['content'], end='')
    elif event['type'] == 'violation':
        print(f"Content blocked: {event['violations']}")
        break
```
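The buffering idea behind streaming guardrails can be sketched in plain asyncio. This is an illustrative stand-alone model, not the SDK's implementation: chunks are held back in a small buffer, the accumulated text is re-checked every `buffer_size` chunks, and emission stops if a check fails (the `violates` function stands in for a remote guardrail call).

```python
import asyncio

async def fake_stream():
    for token in ["Hello", ", ", "world", "!"]:  # stands in for LLM chunks
        yield token

def violates(text):
    return "blocked-word" in text  # stands in for a remote guardrail check

async def guarded_stream(chunks, buffer_size=2):
    buffered, text = [], ""
    async for chunk in chunks:
        text += chunk
        buffered.append(chunk)
        if len(buffered) >= buffer_size:
            if violates(text):
                # Emit a violation event and stop the stream
                yield ("violation", text)
                return
            for c in buffered:
                yield ("chunk", c)
            buffered.clear()
    for c in buffered:  # flush any unchecked tail
        yield ("chunk", c)

async def main():
    return [event async for event in guarded_stream(fake_stream())]

events = asyncio.run(main())
print(events)
```

A larger `buffer_size` trades detection latency for fewer guardrail round-trips, which is the same trade-off `stream_buffer_size` controls above.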
Performance Optimization
```python
# Sequential processing (for debugging)
@guarded_chat_completion(
    agent_id="your-agent-id",
    concurrent_guardrail_processing=False
)
async def debug_llm_call(messages):
    return await openai_client.chat.completions.create(...)

# Custom timeout settings
@guarded_chat_completion(
    agent_id="your-agent-id",
    guardrail_timeout=10.0  # Increase timeout for slow networks
)
async def slow_network_call(messages):
    return await openai_client.chat.completions.create(...)
```
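What a guardrail timeout means in practice can be sketched with `asyncio.wait_for`. This is illustrative only (the SDK's actual timeout and fallback behavior are governed by its own configuration); the sketch fails closed, treating a timed-out check as a block.

```python
import asyncio

async def slow_guardrail_check(messages):
    await asyncio.sleep(0.05)  # stands in for a slow remote guardrail call
    return True

async def check_with_timeout(messages, timeout):
    # On timeout, a policy choice is needed: fail open (allow) or
    # fail closed (block). This sketch fails closed.
    try:
        return await asyncio.wait_for(slow_guardrail_check(messages), timeout)
    except asyncio.TimeoutError:
        return False

blocked = asyncio.run(check_with_timeout([], timeout=0.01))  # too short
allowed = asyncio.run(check_with_timeout([], timeout=0.5))   # generous
print(blocked, allowed)  # -> False True
```

This is why raising `guardrail_timeout` on slow networks matters: too small a value can reject requests that would have passed.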
Error Handling
```python
from haliosai import guarded_chat_completion, ExecutionResult

@guarded_chat_completion(agent_id="your-agent-id")
async def protected_agent_call(messages):
    return await agent_call(messages)

# Better approach: check the execution result instead of catching exceptions
result = await protected_agent_call(messages)

if hasattr(result, '_halios_execution_result'):
    execution_result = result._halios_execution_result
    if execution_result.result == ExecutionResult.REQUEST_BLOCKED:
        print(f"Request blocked: {execution_result.request_violations}")
        # Handle blocked request appropriately
    elif execution_result.result == ExecutionResult.RESPONSE_BLOCKED:
        print(f"Response blocked: {execution_result.response_violations}")
        # Handle blocked response appropriately
    elif execution_result.result == ExecutionResult.SUCCESS:
        print("Agent call completed successfully")
        # Use the response normally
else:
    # Fallback: handle the legacy ValueError approach
    try:
        response = await protected_agent_call(messages)
    except ValueError as e:
        if "blocked by guardrails" in str(e):
            print(f"Content blocked: {e}")
            # Handle blocked content appropriately
        else:
            raise
```
Note
Currently, the HaliosAI SDK natively supports the OpenAI and OpenAI Agents frameworks. Other providers (e.g., Anthropic and Gemini) can be integrated through their OpenAI-compatible APIs using the OpenAI SDK. Support for additional frameworks is coming soon.
This is a beta release. The API and features may change. Please report any issues or feedback on GitHub.
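One way to follow that OpenAI-compatible route is to point the OpenAI SDK at another provider's compatibility endpoint and decorate the call as usual. The base URLs below reflect the providers' published compatibility endpoints at the time of writing (verify them against current provider docs), and `make_anthropic_client` is an illustrative helper, not part of the SDK:

```python
# Providers' OpenAI-compatible base URLs (check current provider docs):
ANTHROPIC_BASE_URL = "https://api.anthropic.com/v1/"
GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/"

def make_anthropic_client(api_key):
    # Imported lazily so this module loads without the openai package.
    from openai import AsyncOpenAI
    return AsyncOpenAI(base_url=ANTHROPIC_BASE_URL, api_key=api_key)

# The decorated function then looks just like the Basic Usage example,
# with a Claude model name in place of a GPT one:
#
# @guarded_chat_completion(agent_id="your-agent-id")
# async def call_claude(messages):
#     client = make_anthropic_client(api_key="your-anthropic-key")
#     return await client.chat.completions.create(
#         model="<claude-model-name>", messages=messages)
```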
Requirements
- Python 3.8+
- httpx >= 0.24.0
- typing-extensions >= 4.0.0
Optional Dependencies
- openai >= 1.0.0 (for OpenAI integration)
- anthropic >= 0.25.0 (for Anthropic integration)
- openai-agents >= 0.1.0 (for OpenAI Agents integration)
Documentation
- 📖 Full Documentation: docs.halios.ai
- 🚀 Getting Started Guide: Create agents and configure guardrails
- 📋 API Reference: Complete SDK documentation
- 💡 Best Practices: Performance optimization and deployment tips
Support
- 🌐 Website: halios.ai
- 📧 Email: support@halioslabs.com
- 🐛 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
Contributing
We welcome contributions! Please see our Contributing Guide for details.
License
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
File details
Details for the file haliosai-1.0.2.tar.gz.
File metadata
- Download URL: haliosai-1.0.2.tar.gz
- Upload date:
- Size: 40.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e1a48e89222a8ee16d83126d4831fd5a9e869a25f1c3b97be51d90f5b4150edd |
| MD5 | f56292795fd5887f85cbe034d2f25021 |
| BLAKE2b-256 | be739c00e8eb972ce794031c9548ea122dbf65ca418007f8320c288c0341283a |
File details
Details for the file haliosai-1.0.2-py3-none-any.whl.
File metadata
- Download URL: haliosai-1.0.2-py3-none-any.whl
- Upload date:
- Size: 28.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.12.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 30b6e4452ffb8b7f39fe4edcada5ede6785f87909486a2fdff62c07b560c98f2 |
| MD5 | 35f8c86bb828c17670ee4d2b17f8747b |
| BLAKE2b-256 | 15ff11344f7a45b19e7ecb583620e876a8fddb20229f1a226ab5c1848419b313 |