Agnostic LLM abstraction layer (OpenAI, Gemini, Anthropic) for Codex
Project description
codex-ai
Agnostic LLM abstraction layer for the Codex ecosystem. It gives you one async prompt pipeline for OpenAI, Gemini, Anthropic, and OpenRouter, with routing and provider failover built in.
Install
# Core only
pip install codex-ai
# Provider extras
pip install "codex-ai[openai]"
pip install "codex-ai[gemini]"
pip install "codex-ai[anthropic]"
pip install "codex-ai[openai,gemini,anthropic]"
Requires Python 3.12 or newer.
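To confirm the core install resolved correctly, the standard library is enough; this check assumes nothing about the codex-ai API itself, only the distribution name:

import codex_ai  # confirms the core package imports without any provider extras
from importlib.metadata import version

print(version("codex-ai"))  # e.g. "0.1.0"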
Quick Start
Define a prompt once and dispatch it through any provider:
from codex_ai import LLMDispatcher, LLMMessage, LLMRouter, PromptResult, OpenAIProvider, GeminiProvider
from codex_ai.providers import MultiLLMProvider

router = LLMRouter()

@router.prompt("chat")
async def build_chat(text: str, **kw) -> PromptResult:
    return PromptResult(
        messages=[LLMMessage(role="user", content=text)],
        system="You are a helpful assistant.",
    )

# Multiple providers with automatic failover: OpenAI → Gemini
provider = MultiLLMProvider(
    providers={
        "openai": OpenAIProvider(api_key="sk-..."),
        "gemini": GeminiProvider(api_key="AIza..."),
    },
    default="openai",
    failover_list=["gemini"],  # if OpenAI fails, try Gemini
)

dispatcher = LLMDispatcher(provider=provider)
dispatcher.include_router(router)

# Uses the default (OpenAI), falls back to Gemini on error
response = await dispatcher.process("chat", text="Hello!")

# Or pick a provider explicitly
response = await dispatcher.process("chat", text="Hello!", provider="gemini")
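The awaits above are written at module level for brevity. Outside an async context (a plain script or a REPL without a running event loop), a minimal way to drive the same pipeline is to wrap it in asyncio, reusing the dispatcher defined above:

import asyncio

async def main() -> None:
    # Same dispatcher and router as above; failover behaviour is unchanged.
    response = await dispatcher.process("chat", text="Hello!")
    print(response)

asyncio.run(main())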
Modules
| Module | Extra | Description |
|---|---|---|
| codex_ai.core | — | Dispatcher, router, protocol types, sync wrapper, and shared exception contract |
| codex_ai.providers.openai | [openai] | OpenAI Chat Completions provider |
| codex_ai.providers.gemini | [gemini] | Google Gemini provider via google-genai |
| codex_ai.providers.anthropic_ | [anthropic] | Anthropic Claude provider |
| codex_ai.providers.openrouter | [openai] | OpenRouter provider built on the OpenAI-compatible SDK |
| codex_ai.providers.multi | — | Multi-provider dispatcher with failover and model-based inference |
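Every provider module plugs into the same MultiLLMProvider shown in Quick Start, so a longer failover chain is just a bigger providers dict. The sketch below assumes the Anthropic and OpenRouter modules expose AnthropicProvider and OpenRouterProvider classes that take the same api_key argument as the OpenAI and Gemini providers; those class names are not confirmed by this page, so check the module reference for the exact spelling:

from codex_ai import OpenAIProvider, GeminiProvider
from codex_ai.providers import MultiLLMProvider
# Assumed class names, mirroring OpenAIProvider / GeminiProvider; verify against the module docs.
from codex_ai.providers.anthropic_ import AnthropicProvider
from codex_ai.providers.openrouter import OpenRouterProvider

provider = MultiLLMProvider(
    providers={
        "openai": OpenAIProvider(api_key="sk-..."),
        "gemini": GeminiProvider(api_key="AIza..."),
        "anthropic": AnthropicProvider(api_key="sk-ant-..."),
        "openrouter": OpenRouterProvider(api_key="sk-or-..."),
    },
    default="openai",
    failover_list=["gemini", "anthropic", "openrouter"],  # tried in this order on failure
)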
Development
uv sync --extra dev
uv run pytest
uv run mypy src/
uv run pre-commit run --all-files
uv build --no-sources
Documentation
Full docs with architecture, API reference, and data flow diagrams:
Part of the Codex ecosystem
| Package | Role |
|---|---|
| codex-core | Foundation — immutable DTOs, PII masking, env settings |
| codex-platform | Infrastructure — Redis, Streams, ARQ workers, Notifications |
| codex-ai | LLM layer — unified async interface for OpenAI, Gemini, Anthropic |
| codex-services | Business logic — Booking engine, CRM, Calendar |
Each library is fully standalone — install only what your project needs. Together they form the backbone of codex-bot (Telegram AI-agent infrastructure built on aiogram) and codex-django (Django integration layer).
Download files
File details
Details for the file codex_ai-0.1.0.tar.gz.
File metadata
- Download URL: codex_ai-0.1.0.tar.gz
- Upload date:
- Size: 127.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f67d83945c437fe58964210b0248b148df9bcd18aa456d59f0c0876f99aa0513 |
| MD5 | 20c488992633428567bdb520844f94e9 |
| BLAKE2b-256 | 33c58a2956b30eefe13300b02f3f4ead11c6393d9eb6dc261b09fe6c0823a721 |
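To check a downloaded archive against the SHA256 digest above, the standard library is enough; the path is simply wherever you saved the sdist:

import hashlib
from pathlib import Path

expected = "f67d83945c437fe58964210b0248b148df9bcd18aa456d59f0c0876f99aa0513"
digest = hashlib.sha256(Path("codex_ai-0.1.0.tar.gz").read_bytes()).hexdigest()
assert digest == expected, "SHA256 mismatch: not the published sdist"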
Provenance
The following attestation bundles were made for codex_ai-0.1.0.tar.gz:
Publisher: publish.yml on CodexDLC/codex-ai

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: codex_ai-0.1.0.tar.gz
- Subject digest: f67d83945c437fe58964210b0248b148df9bcd18aa456d59f0c0876f99aa0513
- Sigstore transparency entry: 1192873551
- Permalink: CodexDLC/codex-ai@28b400264f7c2cabc2b9c503efc796b465ce978f
- Branch / Tag: refs/tags/v0.1.0
- Owner: https://github.com/CodexDLC
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@28b400264f7c2cabc2b9c503efc796b465ce978f
- Trigger Event: push
File details
Details for the file codex_ai-0.1.0-py3-none-any.whl.
File metadata
- Download URL: codex_ai-0.1.0-py3-none-any.whl
- Upload date:
- Size: 19.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 049cffa50a2c0e6be3a2146af2e4344502ccf80f595e2501ab20067afaa2f799 |
| MD5 | ac30f91383c321a45dc93ac5197e5b21 |
| BLAKE2b-256 | a2af1791163bdba3244a74536bab7d3e80055e1ad5c4c9d1915c8c139f39dcbe |
Provenance
The following attestation bundles were made for codex_ai-0.1.0-py3-none-any.whl:
Publisher: publish.yml on CodexDLC/codex-ai

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: codex_ai-0.1.0-py3-none-any.whl
- Subject digest: 049cffa50a2c0e6be3a2146af2e4344502ccf80f595e2501ab20067afaa2f799
- Sigstore transparency entry: 1192873628
- Permalink: CodexDLC/codex-ai@28b400264f7c2cabc2b9c503efc796b465ce978f
- Branch / Tag: refs/tags/v0.1.0
- Owner: https://github.com/CodexDLC
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@28b400264f7c2cabc2b9c503efc796b465ce978f
- Trigger Event: push