
High-performance unified API for all LLM providers

Project description

PyAIBridge

High-performance unified API library for all LLM providers with modern Python best practices.

Features

  • 🚀 Unified Interface: Single API for multiple LLM providers
  • ⚡ High Performance: Async/await, connection pooling, HTTP/2 support
  • 🛡️ Robust Error Handling: Comprehensive exception hierarchy
  • 🔄 Smart Retries: Exponential backoff that respects rate limits
  • 📊 Built-in Metrics: Cost tracking and performance monitoring
  • 🌊 Streaming Support: Real-time response streaming
  • 🔒 Type Safety: Full type hints and validation with Pydantic
  • ✅ Well Tested: Comprehensive test coverage
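
The retry behavior can be pictured with a small sketch: capped exponential backoff with full jitter, where a server-provided Retry-After always wins. This is an illustrative model, not PyAIBridge's actual internals, and `backoff_delay` is a hypothetical helper:

```python
import random
from typing import Optional


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0,
                  retry_after: Optional[float] = None) -> float:
    """Seconds to wait before retry number `attempt` (0-indexed).

    If the provider sent a Retry-After header, honor it exactly;
    otherwise use capped exponential backoff with full jitter.
    """
    if retry_after is not None:
        return retry_after
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0, ceiling)
```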

Supported Providers

  • 🤖 OpenAI - GPT-4.1, GPT-4o, GPT-4-turbo, GPT-3.5-turbo, O-series reasoning models
  • 🧠 Google - Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, Gemini 1.5 series
  • 🔮 Anthropic - Claude 3 Haiku, Claude 3 Sonnet, Claude 3 Opus, Claude 3.5 Sonnet
  • 🚀 xAI - Grok Beta, Grok models
  • 🔧 More providers - Cohere, Ollama (coming soon)

Installation

pip install pyaibridge

Quick Start

import asyncio
from pyaibridge import LLMFactory, ChatRequest, Message, MessageRole, ProviderConfig

async def main():
    # Create provider
    config = ProviderConfig(api_key="your-api-key")
    provider = LLMFactory.create_provider("openai", config)
    
    # Create request
    request = ChatRequest(
        messages=[
            Message(role=MessageRole.USER, content="Hello, world!")
        ],
        model="gpt-4.1-mini",
        max_tokens=100,
    )
    
    # Generate response
    async with provider:
        response = await provider.chat(request)
        print(response.content)

asyncio.run(main())

Streaming Example

import asyncio
from pyaibridge import LLMFactory, ChatRequest, Message, MessageRole, ProviderConfig

async def main():
    config = ProviderConfig(api_key="your-api-key")
    provider = LLMFactory.create_provider("openai", config)
    
    request = ChatRequest(
        messages=[Message(role=MessageRole.USER, content="Tell me a story")],
        model="gpt-4.1-mini",
    )
    
    async with provider:
        async for chunk in provider.stream_chat(request):
            if chunk.content:
                print(chunk.content, end="", flush=True)

asyncio.run(main())

Advanced Usage

Error Handling

from pyaibridge import (
    LLMFactory,
    ProviderConfig,
    AuthenticationError,
    RateLimitError,
    ProviderError,
)

# Inside an async function; `request` is a ChatRequest as in the Quick Start
try:
    config = ProviderConfig(api_key="invalid-key")
    provider = LLMFactory.create_provider("openai", config)
    async with provider:
        response = await provider.chat(request)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after} seconds")
except ProviderError as e:
    print(f"Provider error: {e.message}")

Metrics Collection

from pyaibridge import LLMFactory, ProviderConfig
from pyaibridge.utils.metrics import metrics

# Metrics are collected automatically for every request
config = ProviderConfig(api_key="your-key")
provider = LLMFactory.create_provider("openai", config)
async with provider:  # inside an async function, as in the Quick Start
    response = await provider.chat(request)

# Get metrics summary
summary = metrics.get_summary()
print(f"Total requests: {summary['openai']['request_count']}")
print(f"Total cost: ${summary['openai']['total_cost']:.6f}")

Cost Calculation

# Automatic cost calculation
response = await provider.chat(request)
cost = provider.calculate_cost(response.usage.dict(), response.model)
print(f"Cost: ${cost:.6f}")
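
Conceptually, the cost is just token usage multiplied by per-model prices. The arithmetic can be sketched as below; the pricing table and the `estimate_cost` helper are illustrative only, not PyAIBridge's real rates or API:

```python
# Illustrative per-1M-token prices in USD (NOT the library's actual table)
PRICING = {"gpt-4.1-mini": {"input": 0.40, "output": 1.60}}


def estimate_cost(usage: dict, model: str) -> float:
    """Estimate USD cost from a usage dict with prompt/completion token counts."""
    price = PRICING[model]
    return (usage["prompt_tokens"] * price["input"]
            + usage["completion_tokens"] * price["output"]) / 1_000_000
```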

Configuration

Provider Configuration

config = ProviderConfig(
    api_key="your-api-key",
    base_url="https://api.openai.com/v1",  # Custom base URL
    max_retries=3,                         # Retry attempts
    timeout=30.0,                          # Request timeout
    rate_limit=60,                         # Requests per minute
)
provider = LLMFactory.create_provider("openai", config)

Request Parameters

request = ChatRequest(
    messages=[...],
    model="gpt-4.1-mini",
    max_tokens=1000,
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n", "END"],
    user="user-123",
    timeout=60.0,
)

Real-World Examples

Content Generation Platform

from pyaibridge import LLMFactory, Message, MessageRole, ChatRequest, ProviderConfig

async def generate_summary(posts: list) -> str:
    """Generate AI summary of Reddit discussions."""
    config = ProviderConfig(api_key="your-openai-key")
    provider = LLMFactory.create_provider("openai", config)
    
    # Prepare content for summarization
    content = "\n".join([f"Post: {post.headline}" for post in posts[:10]])
    
    prompt = f"""
    Summarize these discussions in 2-3 sentences:
    {content}
    
    Focus on main sentiment and key themes.
    """
    
    messages = [
        Message(role=MessageRole.SYSTEM, content="You are a financial news summarizer."),
        Message(role=MessageRole.USER, content=prompt)
    ]
    
    request = ChatRequest(
        messages=messages,
        model="gpt-4.1-mini",
        temperature=0.3,
        max_tokens=100
    )
    
    async with provider:
        response = await provider.chat(request)
        return response.content.strip()

Multi-Provider Comparison

async def compare_providers():
    # Setup multiple providers
    openai_config = ProviderConfig(api_key="openai-key")
    google_config = ProviderConfig(api_key="google-key")
    
    openai_provider = LLMFactory.create_provider("openai", openai_config)
    google_provider = LLMFactory.create_provider("google", google_config)
    
    question = "What are the benefits of renewable energy?"
    messages = [Message(role=MessageRole.USER, content=question)]
    
    async with openai_provider, google_provider:
        # OpenAI response
        openai_request = ChatRequest(messages=messages, model="gpt-4.1-mini")
        openai_response = await openai_provider.chat(openai_request)
        
        # Google response
        google_request = ChatRequest(messages=messages, model="gemini-2.5-flash")
        google_response = await google_provider.chat(google_request)
        
        print("OpenAI:", openai_response.content[:100] + "...")
        print("Google:", google_response.content[:100] + "...")

Examples

Check out the examples/ directory for more examples:

  • basic_usage.py - Basic chat completion
  • streaming_example.py - Streaming responses
  • metrics_example.py - Metrics collection
  • multi_provider_comparison.py - Comparing multiple providers
  • google_usage.py - Google Gemini integration
  • openai_latest_models.py - Latest OpenAI models

Development

# Clone repository
git clone https://github.com/sixteen-dev/pyaibridge.git
cd pyaibridge

# Install with development dependencies
uv sync --dev

# Run tests
uv run pytest

# Run linting
uv run ruff check src/
uv run ruff format src/

# Run type checking
uv run mypy src/

Testing and Deployment

Automated Testing

# Run comprehensive package tests
uv run python scripts/test_package.py

# Test with real API keys (optional)
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AIza..."
export CLAUDE_API_KEY="sk-ant-..."
export XAI_API_KEY="xai-..."
uv run python scripts/test_real_api.py

Automated Deployment via GitHub Actions

The repository includes automated CI/CD with GitHub Actions:

  • TestPyPI: Auto-deploys on push to develop branch
  • PyPI: Auto-deploys on GitHub release creation
  • Security: Automated security scanning and code quality checks

Setup:

  1. Configure OIDC trusted publishing on PyPI/TestPyPI
  2. Create GitHub environments: pypi, test-pypi, api-testing
  3. No API tokens needed - uses secure OIDC authentication

Deploy to TestPyPI:

git push origin develop

Deploy to PyPI:

gh release create v0.1.3 --title "Release v0.1.3"

See GITHUB_DEPLOYMENT.md for complete setup guide.

Manual Deployment

# Build package
uv build

# Test installation locally
uv pip install dist/pyaibridge-*.whl

# Deploy using scripts
uv run python scripts/deploy_testpypi.py  # TestPyPI
uv run twine upload dist/*                # PyPI

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Add tests for new functionality
  4. Ensure all tests pass (uv run pytest)
  5. Run linting (uv run ruff check src/)
  6. Submit a pull request

License

MIT License - see LICENSE file for details.

Changelog

0.1.1

  • Added Google Gemini provider support
  • Comprehensive test coverage (48 tests passing)
  • Updated to respx for HTTP mocking
  • Fixed Pydantic v2 compatibility
  • Added extensive documentation with real-world scenarios

0.1.0

  • Initial release with OpenAI provider support
  • Basic chat completion and streaming
  • Error handling and retry logic
  • Metrics collection and cost calculation
  • Type safety with Pydantic models

Documentation

Repository

Support

For questions, issues, or feature requests, please open an issue on GitHub.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pyaibridge-0.2.3.tar.gz (226.6 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

pyaibridge-0.2.3-py3-none-any.whl (29.5 kB)

Uploaded Python 3

File details

Details for the file pyaibridge-0.2.3.tar.gz.

File metadata

  • Download URL: pyaibridge-0.2.3.tar.gz
  • Upload date:
  • Size: 226.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for pyaibridge-0.2.3.tar.gz:

  • SHA256: 875bf7779a914fb5b59e9b41f3f04bbf124dcffb47eb04184f515b7c888971c9
  • MD5: 4e9df0749dd8892291414197fda7fa01
  • BLAKE2b-256: c1451352a674809992fc65fee677649be38340a6e76d02e1f71c2a6d0f42df21


Provenance

The following attestation bundles were made for pyaibridge-0.2.3.tar.gz:

Publisher: release.yml on sixteen-dev/pyaibridge

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file pyaibridge-0.2.3-py3-none-any.whl.

File metadata

  • Download URL: pyaibridge-0.2.3-py3-none-any.whl
  • Upload date:
  • Size: 29.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for pyaibridge-0.2.3-py3-none-any.whl:

  • SHA256: c1b54ef93c2820f797e53a87520a95c4d0f435faf43bb7d9f9fc52a9b91cac2f
  • MD5: 3da687a756112058f925caacebaf6a2d
  • BLAKE2b-256: 5429242891f4f77fa78fddfccbb18b2f92131131eb701d25e9e6ec3f7a64ae0d


Provenance

The following attestation bundles were made for pyaibridge-0.2.3-py3-none-any.whl:

Publisher: release.yml on sixteen-dev/pyaibridge

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
