
Advanced Guardrails and Evaluation SDK for AI Agents


HaliosAI SDK


HaliosAI: Ship Reliable AI Agents Faster! 🚀🚀🚀

HaliosAI SDK helps you catch tricky AI agent failures before they reach users. It supports offline and live guardrail checks, streaming response validation, parallel processing, and multi-agent setups. Integration is seamless: add a single decorator and HaliosAI plugs into your existing agent workflow, adding safety and reliability without changes to your architecture.

Features

  • 🛡️ Easy Integration: Simple decorators and patchers for existing AI agent code
  • ⚡ Parallel Processing: Run guardrails and agent calls simultaneously for optimal performance
  • 🌊 Streaming Support: Real-time guardrail evaluation for streaming responses
  • 🤖 Multi-Agent Support: Per-agent guardrail profiles for complex AI systems
  • 🔧 Framework Support: Built-in support for OpenAI, Anthropic, and OpenAI Agents
  • 📊 Detailed Timing: Performance metrics and execution insights
  • 🚨 Violation Handling: Automatic blocking and detailed error reporting
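To illustrate what the parallel-processing feature buys you, here is a conceptual sketch (not the SDK's internals) of running a request-side guardrail check concurrently with the model call, so total latency is roughly the slower of the two rather than their sum. All names and timings below are illustrative stand-ins.

```python
import asyncio

async def check_guardrails(messages: list) -> bool:
    # Stand-in for a remote guardrail evaluation (~50 ms round trip)
    await asyncio.sleep(0.05)
    return all("forbidden" not in m["content"] for m in messages)

async def call_model(messages: list) -> str:
    # Stand-in for the LLM call (~100 ms)
    await asyncio.sleep(0.1)
    return "Hello! How can I help?"

async def guarded_call(messages: list) -> str:
    # Both coroutines run at once; the response is discarded if the
    # guardrail check fails, so nothing unsafe reaches the caller.
    passed, reply = await asyncio.gather(
        check_guardrails(messages), call_model(messages)
    )
    if not passed:
        raise RuntimeError("Request blocked by guardrails")
    return reply

reply = asyncio.run(guarded_call([{"role": "user", "content": "hi"}]))
print(reply)
```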

Installation

pip install haliosai

For specific LLM providers:

pip install "haliosai[openai]"      # For OpenAI support
pip install "haliosai[agents]"      # For OpenAI Agents support
pip install "haliosai[all]"         # For all providers

Prerequisites

  1. Get your API key: Visit console.halios.ai to obtain your HaliosAI API key
  2. Create an agent: Follow the documentation to create your first agent and configure guardrails
  3. Keep your agent_id handy: You'll need it for SDK integration

Set required environment variables:

export HALIOS_API_KEY="your-api-key"
export HALIOS_AGENT_ID="your-agent-id"
export OPENAI_API_KEY="your-openai-key"  # For OpenAI examples

Quick Start

Basic Usage (Decorator Pattern)

import asyncio
import os
from openai import AsyncOpenAI
from haliosai import guarded_chat_completion, GuardrailViolation

# Validate required environment variables
REQUIRED_VARS = ["HALIOS_API_KEY", "HALIOS_AGENT_ID", "OPENAI_API_KEY"]
missing = [var for var in REQUIRED_VARS if not os.getenv(var)]
if missing:
    raise EnvironmentError(f"Missing required environment variables: {', '.join(missing)}")

HALIOS_AGENT_ID = os.getenv("HALIOS_AGENT_ID")

@guarded_chat_completion(agent_id=HALIOS_AGENT_ID)
async def call_llm(messages):
    """LLM call with automatic guardrail evaluation"""
    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=100
    )
    return response

async def main():
    # Customize messages for your agent's persona
    messages = [{"role": "user", "content": "Hello, can you help me?"}]
    
    try:
        response = await call_llm(messages)
        content = response.choices[0].message.content
        print(f"✓ Response: {content}")
    except GuardrailViolation as e:
        print(f"✗ Blocked: {e.violation_type} - {len(e.violations)} violation(s)")

if __name__ == "__main__":
    asyncio.run(main())

Advanced Usage (Context Manager Pattern)

For fine-grained control over guardrail evaluation:

import asyncio
import os
from openai import AsyncOpenAI
from haliosai import HaliosGuard, GuardrailViolation

HALIOS_AGENT_ID = os.getenv("HALIOS_AGENT_ID")

async def main():
    messages = [{"role": "user", "content": "Hello, how can you help?"}]
    
    async with HaliosGuard(agent_id=HALIOS_AGENT_ID) as guard:
        try:
            # Step 1: Validate request
            await guard.validate_request(messages)
            print("✓ Request passed")
            
            # Step 2: Call LLM
            client = AsyncOpenAI()
            response = await client.chat.completions.create(
                model="gpt-4o-mini",
                messages=messages,
                max_tokens=100
            )
            
            # Step 3: Validate response
            response_message = response.choices[0].message
            full_conversation = messages + [{"role": "assistant", "content": response_message.content}]
            await guard.validate_response(full_conversation)
            
            print("✓ Response passed")
            print(f"Response: {response_message.content}")
            
        except GuardrailViolation as e:
            print(f"✗ Blocked: {e.violation_type} - {len(e.violations)} violation(s)")

if __name__ == "__main__":
    asyncio.run(main())

OpenAI Agents Framework Integration

For native integration with OpenAI Agents framework:

import asyncio
from agents import Agent, Runner
from haliosai import RemoteInputGuardrail, RemoteOutputGuardrail

# Create guardrails
input_guardrail = RemoteInputGuardrail(agent_id="your-agent-id")
output_guardrail = RemoteOutputGuardrail(agent_id="your-agent-id")

# Create an agent with guardrails attached
agent = Agent(
    name="Assistant",
    model="gpt-4o",
    instructions="You are a helpful assistant.",
    input_guardrails=[input_guardrail],
    output_guardrails=[output_guardrail]
)

# Run the agent normally - guardrails run automatically
async def main():
    result = await Runner.run(agent, input="Write a professional email")
    print(result.final_output)

asyncio.run(main())

Examples

Check out the examples/ directory for complete working examples.

🚀 Recommended Starting Point

06_interactive_chatbot.py - Interactive chat session

  • Works with ANY agent configuration
  • Type your own messages relevant to your agent's persona
  • See guardrails in action in real-time
  • Best way to explore the SDK!

📚 SDK Mechanics

01_basic_usage.py - Simple decorator pattern

  • Shows basic @guarded_chat_completion usage
  • Request/response guardrail evaluation
  • Exception handling

02_streaming_response_guardrails.py - Streaming responses

  • Real-time streaming with guardrails
  • Character-based and time-based buffering
  • Hybrid buffering modes
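The hybrid buffering described above can be sketched in plain Python: accumulate streamed chunks into a buffer and flush it to the validator whenever a character threshold is crossed or a time window elapses. The SDK's actual buffering parameters may differ; the names here are assumptions for illustration.

```python
import time

def buffered_chunks(stream, max_chars=20, max_interval=0.5):
    """Yield buffered slices of a token stream for guardrail checks."""
    buf, last_flush = "", time.monotonic()
    for chunk in stream:
        buf += chunk
        # Flush on size (character-based) OR elapsed time (time-based)
        if len(buf) >= max_chars or time.monotonic() - last_flush >= max_interval:
            yield buf          # hand this slice to the guardrail evaluator
            buf, last_flush = "", time.monotonic()
    if buf:
        yield buf              # flush the remainder at stream end

tokens = ["The quick ", "brown fox ", "jumps over ", "the lazy dog."]
batches = list(buffered_chunks(tokens, max_chars=20))
print(batches)
```

Larger buffers mean fewer guardrail calls but later detection; smaller buffers catch violations sooner at higher cost.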

03_tool_calling_simple.py - Tool/function calling

  • Guardrails for function calling scenarios
  • Tool invocation tracking

04_context_manager_pattern.py - Manual control

  • Context manager for explicit guardrail calls
  • Separate request/response validation

05_tool_calling_advanced.py - Advanced tool calling with comprehensive guardrails

  • Request validation
  • Tool result validation (prevents data leakage)
  • Response validation
  • Context manager pattern for fine-grained control
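The tool-result validation step above can be pictured as screening a tool's raw output before it is appended to the conversation, so sensitive data never reaches the model or the user. The redaction rule below is a toy regex for illustration; the SDK evaluates tool results against your configured guardrails instead.

```python
import re

# Toy rule: redact anything shaped like a US Social Security number
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def validate_tool_result(result: str) -> str:
    """Screen tool output before it enters the conversation."""
    return SSN.sub("[REDACTED]", result)

raw = "Account holder: Jane Doe, SSN 123-45-6789, balance $42"
safe = validate_tool_result(raw)
# Only the screened version is appended to the message history
conversation = [{"role": "tool", "content": safe}]
print(safe)
```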

05_openai_agents_guardrails_integration.py - OpenAI Agents framework

  • Integration with OpenAI Agents SDK
  • Multi-agent workflows

Note

Currently, the HaliosAI SDK natively supports the OpenAI and OpenAI Agents frameworks. Other providers (e.g. Anthropic and Gemini) can be integrated through their OpenAI-compatible APIs via the OpenAI SDK. Support for additional frameworks is coming soon.

This is a beta release: the API and features may change. Please report any issues or feedback on GitHub.

Requirements

  • Python 3.9+
  • httpx >= 0.24.0
  • typing-extensions >= 4.0.0

Optional Dependencies

  • openai >= 1.0.0 (for OpenAI integration)
  • anthropic >= 0.25.0 (for Anthropic integration)
  • openai-agents >= 0.1.0 (for OpenAI Agents integration)

Documentation

Support

Contributing

We welcome contributions! Please see our Contributing Guide for details.

License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
