PyAIBridge
High-performance unified API library for all LLM providers with modern Python best practices.
Features
- 🚀 Unified Interface: Single API for multiple LLM providers
- ⚡ High Performance: Async/await, connection pooling, HTTP/2 support
- 🛡️ Robust Error Handling: Comprehensive exception hierarchy
- 🔄 Smart Retries: Exponential backoff with rate limit respect
- 📊 Built-in Metrics: Cost tracking, performance monitoring
- 🌊 Streaming Support: Real-time response streaming
- 🔒 Type Safety: Full type hints and validation with Pydantic
- ✅ Well Tested: Comprehensive test coverage
Supported Providers
- 🤖 OpenAI - GPT-4.1, GPT-4o, GPT-4-turbo, GPT-3.5-turbo, O-series reasoning models
- 🧠 Google - Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, Gemini 1.5 series
- 🔮 Anthropic - Claude 3 Haiku, Claude 3 Sonnet, Claude 3 Opus, Claude 3.5 Sonnet
- 🚀 xAI - Grok Beta, Grok models
- 🔧 More providers - Cohere, Ollama (coming soon)
Installation
pip install pyaibridge
Quick Start
import asyncio

from pyaibridge import LLMFactory, ChatRequest, Message, MessageRole, ProviderConfig


async def main():
    # Create provider
    config = ProviderConfig(api_key="your-api-key")
    provider = LLMFactory.create_provider("openai", config)

    # Create request
    request = ChatRequest(
        messages=[
            Message(role=MessageRole.USER, content="Hello, world!")
        ],
        model="gpt-4.1-mini",
        max_tokens=100,
    )

    # Generate response
    async with provider:
        response = await provider.chat(request)
        print(response.content)


asyncio.run(main())
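Hard-coding keys is fine for a quick demo; in practice you will usually read them from the environment. A minimal sketch, assuming the key is exported as OPENAI_API_KEY (the variable name is just a convention, not something PyAIBridge enforces):

import os

from pyaibridge import LLMFactory, ProviderConfig

# Read the key from the environment instead of embedding it in source
config = ProviderConfig(api_key=os.environ["OPENAI_API_KEY"])
provider = LLMFactory.create_provider("openai", config)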
Streaming Example
import asyncio

from pyaibridge import LLMFactory, ChatRequest, Message, MessageRole, ProviderConfig


async def main():
    config = ProviderConfig(api_key="your-api-key")
    provider = LLMFactory.create_provider("openai", config)

    request = ChatRequest(
        messages=[Message(role=MessageRole.USER, content="Tell me a story")],
        model="gpt-4.1-mini",
    )

    async with provider:
        async for chunk in provider.stream_chat(request):
            if chunk.content:
                print(chunk.content, end="", flush=True)


asyncio.run(main())
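If you also need the complete text afterwards, the chunks can simply be collected as they arrive. A small sketch, meant to swap in for the async with block inside main() above:

async with provider:
    parts = []
    async for chunk in provider.stream_chat(request):
        if chunk.content:
            print(chunk.content, end="", flush=True)
            parts.append(chunk.content)

# Full response assembled from the streamed chunks
full_text = "".join(parts)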
Advanced Usage
Error Handling
from pyaibridge import (
    LLMFactory,
    ProviderConfig,
    AuthenticationError,
    RateLimitError,
    ProviderError,
)

# Inside an async function, with `request` built as in the Quick Start:
try:
    config = ProviderConfig(api_key="invalid-key")
    provider = LLMFactory.create_provider("openai", config)
    async with provider:
        response = await provider.chat(request)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after} seconds")
except ProviderError as e:
    print(f"Provider error: {e.message}")
Metrics Collection
from pyaibridge.utils.metrics import metrics

# Metrics are automatically collected
config = ProviderConfig(api_key="your-key")
provider = LLMFactory.create_provider("openai", config)

async with provider:
    response = await provider.chat(request)

# Get metrics summary
summary = metrics.get_summary()
print(f"Total requests: {summary['openai']['request_count']}")
print(f"Total cost: ${summary['openai']['total_cost']:.6f}")
Cost Calculation
# Automatic cost calculation
response = await provider.chat(request)
cost = provider.calculate_cost(response.usage.dict(), response.model)
print(f"Cost: ${cost:.6f}")
Configuration
Provider Configuration
config = ProviderConfig(
    api_key="your-api-key",
    base_url="https://api.openai.com/v1",  # Custom base URL
    max_retries=3,    # Retry attempts
    timeout=30.0,     # Request timeout in seconds
    rate_limit=60,    # Requests per minute
)
provider = LLMFactory.create_provider("openai", config)
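Because base_url is configurable, the same provider class can also point at an OpenAI-compatible proxy or gateway. A hedged sketch; the URL is a placeholder and this assumes your endpoint speaks the same API as the official one:

proxy_config = ProviderConfig(
    api_key="your-api-key",
    base_url="https://llm-gateway.example.com/v1",  # hypothetical endpoint
    timeout=15.0,
)
provider = LLMFactory.create_provider("openai", proxy_config)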
Request Parameters
request = ChatRequest(
    messages=[...],
    model="gpt-4.1-mini",
    max_tokens=1000,
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n", "END"],
    user="user-123",
    timeout=60.0,
)
Real-World Examples
Content Generation Platform
from pyaibridge import LLMFactory, Message, MessageRole, ChatRequest, ProviderConfig


async def generate_summary(posts: list) -> str:
    """Generate AI summary of Reddit discussions."""
    config = ProviderConfig(api_key="your-openai-key")
    provider = LLMFactory.create_provider("openai", config)

    # Prepare content for summarization
    content = "\n".join([f"Post: {post.headline}" for post in posts[:10]])
    prompt = f"""
    Summarize these discussions in 2-3 sentences:

    {content}

    Focus on main sentiment and key themes.
    """

    messages = [
        Message(role=MessageRole.SYSTEM, content="You are a financial news summarizer."),
        Message(role=MessageRole.USER, content=prompt),
    ]

    request = ChatRequest(
        messages=messages,
        model="gpt-4.1-mini",
        temperature=0.3,
        max_tokens=100,
    )

    async with provider:
        response = await provider.chat(request)
        return response.content.strip()
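Called from synchronous code, the coroutine can be driven with asyncio.run; posts here is whatever list of objects with a headline attribute you already have:

import asyncio

summary = asyncio.run(generate_summary(posts))
print(summary)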
Multi-Provider Comparison
async def compare_providers():
    # Set up multiple providers
    openai_config = ProviderConfig(api_key="openai-key")
    google_config = ProviderConfig(api_key="google-key")

    openai_provider = LLMFactory.create_provider("openai", openai_config)
    google_provider = LLMFactory.create_provider("google", google_config)

    question = "What are the benefits of renewable energy?"
    messages = [Message(role=MessageRole.USER, content=question)]

    async with openai_provider, google_provider:
        # OpenAI response
        openai_request = ChatRequest(messages=messages, model="gpt-4.1-mini")
        openai_response = await openai_provider.chat(openai_request)

        # Google response
        google_request = ChatRequest(messages=messages, model="gemini-2.5-flash")
        google_response = await google_provider.chat(google_request)

    print("OpenAI:", openai_response.content[:100] + "...")
    print("Google:", google_response.content[:100] + "...")
Examples
Check out the examples/ directory for more examples:
- basic_usage.py - Basic chat completion
- streaming_example.py - Streaming responses
- metrics_example.py - Metrics collection
- multi_provider_comparison.py - Comparing multiple providers
- google_usage.py - Google Gemini integration
- openai_latest_models.py - Latest OpenAI models
Development
# Clone repository
git clone https://github.com/sixteen-dev/pyaibridge.git
cd pyaibridge
# Install with development dependencies
uv sync --dev
# Run tests
uv run pytest
# Run linting
uv run ruff check src/
uv run ruff format src/
# Run type checking
uv run mypy src/
Testing and Deployment
Automated Testing
# Run comprehensive package tests
uv run python scripts/test_package.py
# Test with real API keys (optional)
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="AIza..."
export CLAUDE_API_KEY="sk-ant-..."
export XAI_API_KEY="xai-..."
uv run python scripts/test_real_api.py
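scripts/test_real_api.py is the project's own harness; for a quick manual smoke test against whichever keys you have exported, a hedged sketch using only the public API from the Quick Start (the env var names match the exports above):

import asyncio
import os

from pyaibridge import ChatRequest, LLMFactory, Message, MessageRole, ProviderConfig

PROVIDERS = {
    "openai": ("OPENAI_API_KEY", "gpt-4.1-mini"),
    "google": ("GOOGLE_API_KEY", "gemini-2.5-flash"),
}


async def smoke_test():
    for name, (env_var, model) in PROVIDERS.items():
        api_key = os.environ.get(env_var)
        if not api_key:
            continue  # Skip providers without a configured key
        provider = LLMFactory.create_provider(name, ProviderConfig(api_key=api_key))
        request = ChatRequest(
            messages=[Message(role=MessageRole.USER, content="ping")],
            model=model,
            max_tokens=5,
        )
        async with provider:
            response = await provider.chat(request)
        print(f"{name}: {response.content!r}")


asyncio.run(smoke_test())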
Automated Deployment via GitHub Actions
The repository includes automated CI/CD with GitHub Actions:
- TestPyPI: Auto-deploys on push to the develop branch
- PyPI: Auto-deploys on GitHub release creation
- Security: Automated security scanning and code quality checks

Setup:
- Configure OIDC trusted publishing on PyPI/TestPyPI
- Create GitHub environments: pypi, test-pypi, api-testing
- No API tokens needed - uses secure OIDC authentication
Deploy to TestPyPI:
git push origin develop
Deploy to PyPI:
gh release create v0.1.3 --title "Release v0.1.3"
See GITHUB_DEPLOYMENT.md for complete setup guide.
Manual Deployment
# Build package
uv build
# Test installation locally
uv pip install dist/pyaibridge-*.whl
# Deploy using scripts
uv run python scripts/deploy_testpypi.py # TestPyPI
uv run twine upload dist/* # PyPI
Contributing
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Add tests for new functionality
- Ensure all tests pass (uv run pytest)
- Run linting (uv run ruff check src/)
- Submit a pull request
License
MIT License - see LICENSE file for details.
Changelog
0.1.1
- Added Google Gemini provider support
- Comprehensive test coverage (48 tests passing)
- Updated to respx for HTTP mocking
- Fixed Pydantic v2 compatibility
- Added extensive documentation with real-world scenarios
0.1.0
- Initial release with OpenAI provider support
- Basic chat completion and streaming
- Error handling and retry logic
- Metrics collection and cost calculation
- Type safety with Pydantic models
Documentation
- Testing Guide: TESTING.md - Testing with real APIs and TestPyPI
- Deployment Guide: GITHUB_DEPLOYMENT.md - GitHub Actions CI/CD
- OIDC Setup: OIDC_SETUP.md - Secure deployment setup
- Full Documentation: DOCUMENTATION.md - Complete API reference
Repository
https://github.com/sixteen-dev/pyaibridge
Support
For questions, issues, or feature requests, please open an issue on GitHub.
Download files
File details
Details for the file pyaibridge-0.2.2.tar.gz.
File metadata
- Download URL: pyaibridge-0.2.2.tar.gz
- Size: 215.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 572c79788777353bb72589c6065a5624a520734cc3f77b7bd289f3e5ace0685f |
| MD5 | 6976250eab40eaaed30c7fafc7fc1b63 |
| BLAKE2b-256 | a39e1ca52f65dddd17e7aca84160a1827578044cd46ef06852eb4bed73742fc8 |
Provenance
The following attestation bundles were made for pyaibridge-0.2.2.tar.gz:

Publisher: release.yml on sixteen-dev/pyaibridge
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pyaibridge-0.2.2.tar.gz
- Subject digest: 572c79788777353bb72589c6065a5624a520734cc3f77b7bd289f3e5ace0685f
- Sigstore transparency entry: 289722243
- Permalink: sixteen-dev/pyaibridge@2a0f1082365c334e66040c38944fe5adc830e46e
- Branch / Tag: refs/heads/main
- Owner: https://github.com/sixteen-dev
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@2a0f1082365c334e66040c38944fe5adc830e46e
- Trigger Event: push
File details
Details for the file pyaibridge-0.2.2-py3-none-any.whl.
File metadata
- Download URL: pyaibridge-0.2.2-py3-none-any.whl
- Size: 29.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 0fe259e86c09919cb23286d39d11bdea7767b7c20b01e8a38e04c8830565acfc |
| MD5 | 772b3dcf019cb3f959e19daf453fc47b |
| BLAKE2b-256 | b98f4426651f10e9012d381b7596a7510e1daf859634bc09e833df6efc302bb1 |
|
Provenance
The following attestation bundles were made for pyaibridge-0.2.2-py3-none-any.whl:

Publisher: release.yml on sixteen-dev/pyaibridge
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pyaibridge-0.2.2-py3-none-any.whl
- Subject digest: 0fe259e86c09919cb23286d39d11bdea7767b7c20b01e8a38e04c8830565acfc
- Sigstore transparency entry: 289722275
- Permalink: sixteen-dev/pyaibridge@2a0f1082365c334e66040c38944fe5adc830e46e
- Branch / Tag: refs/heads/main
- Owner: https://github.com/sixteen-dev
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@2a0f1082365c334e66040c38944fe5adc830e46e
- Trigger Event: push