# UnifiedAI SDK

OpenAI-compatible Python SDK unifying multiple providers (Cerebras, AWS Bedrock) with Solo and Comparison modes, strict models, and built-in telemetry.
## Highlights
- OpenAI-like API: `UnifiedAI().chat.completions.create(...)` (sync) and `AsyncUnifiedAI` (async)
- Pluggable adapters: Cerebras, Bedrock (extensible)
- Modes: Solo and side‑by‑side Comparison
- Observability: structured logs, Prometheus metrics (SDK), tracing hooks
- Credentials: pass at client construction or via environment variables
## Install

From PyPI (core):

```shell
pip install unifiedai-sdk
```

Optional extras:

```shell
# Cerebras Cloud SDK integration
pip install "unifiedai-sdk[cerebras]"

# HTTP/2 support for httpx
pip install "unifiedai-sdk[http2]"
```
From GitHub (optional):

```shell
pip install git+https://github.com/<your-org-or-user>/<your-repo>.git#subdirectory=cerebras
```
## Usage

### Sync (scripts/CLI)

```python
from unifiedai import UnifiedAI

client = UnifiedAI(
    provider="cerebras",
    model="llama3",
    credentials={"api_key": "csk-..."},  # or set CEREBRAS_KEY in the environment
)

resp = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello"}]
)
print(resp.choices[0].message["content"])
```
### Async (web backends)

```python
from unifiedai import AsyncUnifiedAI

async with AsyncUnifiedAI(provider="cerebras", model="llama3") as client:
    resp = await client.chat.completions.create(
        messages=[{"role": "user", "content": "Hello"}]
    )
```
### Streaming (async)

```python
async with AsyncUnifiedAI(provider="cerebras", model="llama3") as client:
    async for chunk in client.chat.completions.stream(
        messages=[{"role": "user", "content": "Stream this"}]
    ):
        print(chunk.delta.get("content", ""), end="")
```
### Comparison (two providers)

```python
from unifiedai import AsyncUnifiedAI

async with AsyncUnifiedAI() as client:
    result = await client.compare(
        providers=["cerebras", "bedrock"],
        model="llama3",
        messages=[{"role": "user", "content": "Compare outputs"}],
    )
    print(result.winner, result.comparative_metrics.speed_difference_ms)
```
## Credentials

- Precedence: per-provider credentials > global client credentials > environment (`SDKConfig`).
- Cerebras: set `CEREBRAS_KEY` or pass `credentials={"api_key": "..."}`.
- Bedrock: planned; wire `credentials_by_provider` similarly.
## FastAPI demo (Swagger UI)

```shell
uvicorn apps.chat.backend:app --reload --port 8000
# Swagger UI: http://localhost:8000/docs
```
## Project Structure

- `src/unifiedai/`: SDK implementation (clients, adapters, models, core)
- `examples/`: usage examples (solo, streaming, comparison)
- `apps/chat/`: demo FastAPI backend
- `tests/`: unit tests
## License

MIT
## File details

### Source distribution: unifiedai_sdk-1.0.5.tar.gz

- Size: 21.5 kB
- Tags: Source
- Uploaded via: twine/6.2.0 CPython/3.13.0
- Uploaded using Trusted Publishing? No

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e04f31943f099a502308c1d74ebdc6c486aa0bef880d14e82c7c0e547aec8162` |
| MD5 | `15313529c34ef505508152fffbcdc36f` |
| BLAKE2b-256 | `c95fe89f356547b97ee6a613351dd984e5016dd7d84d7f5856f163f45bc5ec6b` |
### Built distribution: unifiedai_sdk-1.0.5-py3-none-any.whl

- Size: 24.1 kB
- Tags: Python 3
- Uploaded via: twine/6.2.0 CPython/3.13.0
- Uploaded using Trusted Publishing? No

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e6ea1983dce5eace2d9b5358ba77ae90388e85b7d25036d97d97b692d5efd2ac` |
| MD5 | `1d2bc11690cac5045f5382217549de17` |
| BLAKE2b-256 | `70ba55aee7c1a69ec04311b0ca2f514523d9d85196f0ae0fa8769ca82bdde31e` |