# codex-lb
Load balancer for ChatGPT accounts. Pool multiple accounts, track usage, manage API keys, view everything in a dashboard.
### More screenshots

(Screenshot images: Settings, Login; dark mode: Dashboard, Accounts, Settings.)
## Features

| Feature | Description |
|---|---|
| Account Pooling | Load balance across multiple ChatGPT accounts |
| Usage Tracking | Per-account tokens, cost, 28-day trends |
| API Keys | Per-key rate limits by token, cost, window, model |
| Dashboard Auth | Password + optional TOTP |
| OpenAI-compatible | Codex CLI, OpenCode, any OpenAI client |
| Auto Model Sync | Available models fetched from upstream |
## Quick Start

```bash
# Docker (recommended)
docker volume create codex-lb-data
docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

# or uvx
uvx codex-lb
```

Open `localhost:2455` → Add account → Done.
## Client Setup

Point any OpenAI-compatible client at codex-lb. If API key auth is enabled, pass a key from the dashboard as a Bearer token.

| Client | Endpoint | Config |
|---|---|---|
| Codex CLI | http://127.0.0.1:2455/backend-api/codex | `~/.codex/config.toml` |
| OpenCode | http://127.0.0.1:2455/v1 | `~/.config/opencode/opencode.json` |
| OpenClaw | http://127.0.0.1:2455/v1 | `~/.openclaw/openclaw.json` |
| OpenAI Python SDK | http://127.0.0.1:2455/v1 | see example below |
### Codex CLI / IDE Extension

`~/.codex/config.toml`:

```toml
model = "gpt-5.3-codex"
model_reasoning_effort = "xhigh"
model_provider = "codex-lb"

[model_providers.codex-lb]
name = "OpenAI"  # required: enables remote /responses/compact
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
```

With API key auth:

```toml
[model_providers.codex-lb]
name = "OpenAI"
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
env_key = "CODEX_LB_API_KEY"
```

```bash
export CODEX_LB_API_KEY="sk-clb-..."  # key from dashboard
codex
```
Migrating from direct OpenAI: `codex resume` filters by `model_provider`, so old sessions won't appear until you re-tag them:

```bash
# JSONL session files (all versions)
# (BSD/macOS sed shown; with GNU sed, use -i without the '' argument)
find ~/.codex/sessions -name '*.jsonl' \
  -exec sed -i '' 's/"model_provider":"openai"/"model_provider":"codex-lb"/g' {} +

# SQLite state DB (>= v0.105.0, creates ~/.codex/state_*.sqlite)
sqlite3 ~/.codex/state_5.sqlite \
  "UPDATE threads SET model_provider = 'codex-lb' WHERE model_provider = 'openai';"
```
### OpenCode

**Important:** Use the built-in `openai` provider with a `baseURL` override, not a custom provider with `@ai-sdk/openai-compatible`. Custom providers use the Chat Completions API, which drops reasoning/thinking content. The built-in `openai` provider uses the Responses API, which properly preserves `encrypted_content` and multi-turn reasoning state.
`~/.config/opencode/opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "options": {
        "baseURL": "http://127.0.0.1:2455/v1",
        "apiKey": "{env:CODEX_LB_API_KEY}"
      },
      "models": {
        "gpt-5.4": {
          "name": "GPT-5.4",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 1050000, "output": 128000 }
        },
        "gpt-5.3-codex": {
          "name": "GPT-5.3 Codex",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 272000, "output": 65536 }
        },
        "gpt-5.1-codex-mini": {
          "name": "GPT-5.1 Codex Mini",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 272000, "output": 65536 }
        },
        "gpt-5.3-codex-spark": {
          "name": "GPT-5.3 Codex Spark",
          "reasoning": true,
          "options": { "reasoningEffort": "xhigh", "reasoningSummary": "detailed" },
          "limit": { "context": 128000, "output": 65536 }
        }
      }
    }
  },
  "model": "openai/gpt-5.3-codex"
}
```

This overrides the built-in `openai` provider's endpoint to point at codex-lb while keeping the Responses API code path that handles reasoning properly.

```bash
export CODEX_LB_API_KEY="sk-clb-..."  # key from dashboard
opencode
```
### OpenClaw

`~/.openclaw/openclaw.json`:

```jsonc
{
  "agents": {
    "defaults": {
      "model": { "primary": "codex-lb/gpt-5.3-codex" }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "codex-lb": {
        "baseUrl": "http://127.0.0.1:2455/v1",
        "apiKey": "${CODEX_LB_API_KEY}", // or "dummy" if API key auth is disabled
        "api": "openai-completions",
        "models": [
          { "id": "gpt-5.3-codex", "name": "GPT-5.3 Codex" },
          { "id": "gpt-5.3-codex-spark", "name": "GPT-5.3 Codex Spark" }
        ]
      }
    }
  }
}
```

Set the env var or replace `${CODEX_LB_API_KEY}` with a key from the dashboard. If API key auth is disabled, any value works.
### OpenAI Python SDK

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:2455/v1",
    api_key="sk-clb-...",  # from dashboard, or any string if auth is disabled
)

response = client.chat.completions.create(
    model="gpt-5.3-codex",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
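The OpenCode note above says the proxy serves the Responses API behind the same `/v1` base, which the Chat Completions example doesn't exercise. A minimal sketch with the SDK's Responses client, assuming codex-lb exposes the standard `/v1/responses` route (the prompt is illustrative):

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:2455/v1",
    api_key="sk-clb-...",  # from dashboard, or any string if auth is disabled
)

# Responses API call; unlike Chat Completions, this code path can carry
# reasoning state (e.g. encrypted_content) across turns.
response = client.responses.create(
    model="gpt-5.3-codex",
    input="Say hello in one short sentence.",
)
print(response.output_text)
```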
## API Key Authentication

API key auth is disabled by default, so the proxy accepts any client. Enable it in Settings → API Key Auth on the dashboard.

When enabled, clients must pass a valid API key as a Bearer token:

```http
Authorization: Bearer sk-clb-...
```

Creating keys: Dashboard → API Keys → Create. The full key is shown only once at creation. Keys support optional expiration, model restrictions, and rate limits (tokens / cost per day / week / month).
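For clients that aren't OpenAI SDKs, set the header yourself. A minimal sketch with `requests`, assuming the proxy exposes the usual OpenAI-compatible `/v1/models` listing (the key value is a placeholder):

```python
import requests

API_KEY = "sk-clb-..."  # full key, shown once at creation

resp = requests.get(
    "http://127.0.0.1:2455/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()  # a 401 here means the key is invalid or expired
for model in resp.json()["data"]:
    print(model["id"])
```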
## Configuration

Settings come from environment variables with the `CODEX_LB_` prefix or from `.env.local`; see `.env.example` for the full list. Dashboard auth is configured in Settings.

SQLite is the default database backend; PostgreSQL is optional via `CODEX_LB_DATABASE_URL` (for example `postgresql+asyncpg://...`).
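A sketch of a `.env.local` for the PostgreSQL case; the DSN value is hypothetical, and `CODEX_LB_DATABASE_URL` is the only variable this README documents, so check `.env.example` for the rest:

```dotenv
# Hypothetical credentials and database name; adjust to your setup.
CODEX_LB_DATABASE_URL=postgresql+asyncpg://codex:secret@localhost:5432/codex_lb
```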
## Data
| Environment | Path |
|---|---|
| Local / uvx | ~/.codex-lb/ |
| Docker | /var/lib/codex-lb/ |
Back up this directory to preserve your data.
## Development

```bash
# Docker
docker compose watch

# Local
uv sync && cd frontend && bun install && cd ..
uv run fastapi run app/main.py --reload  # backend :2455
cd frontend && bun run dev               # frontend :5173
```
## Contributors ✨
This project follows the all-contributors specification. Contributions of any kind welcome!
## Download files
### codex_lb-1.3.1.tar.gz

File metadata:

- Size: 2.9 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `b1de61ae63259f1bdcc51628e797d2e7b207f6ad2ed71fea1c488afbdb4332fc` |
| MD5 | `e04929076797bde0f114694e724b6352` |
| BLAKE2b-256 | `d93fee6b0ddf0feefcd14ecfdfc64dfba71437fb65cc8ef91f01b346ee5197c2` |
#### Provenance

The following attestation bundles were made for codex_lb-1.3.1.tar.gz:

- Publisher: release.yml on Soju06/codex-lb
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: codex_lb-1.3.1.tar.gz
- Subject digest: `b1de61ae63259f1bdcc51628e797d2e7b207f6ad2ed71fea1c488afbdb4332fc`
- Sigstore transparency entry: 1074851962
- Permalink: Soju06/codex-lb@a13cdaa009c73b9e41dadce9adf1bf8f314f1762
- Branch / Tag: refs/tags/v1.3.1
- Owner: https://github.com/Soju06
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a13cdaa009c73b9e41dadce9adf1bf8f314f1762
- Trigger Event: release
### codex_lb-1.3.1-py3-none-any.whl

File metadata:

- Size: 654.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `cfbdb486ff5325b6f08ae978581843ee426f1475df515cff9cc717f0564e2afc` |
| MD5 | `de4f3ebbbc129ee21825d97eaffdb444` |
| BLAKE2b-256 | `8f3e83c5bb58abd79991d394d0c28582b5e736984e93944d805eef5d745869b3` |
#### Provenance

The following attestation bundles were made for codex_lb-1.3.1-py3-none-any.whl:

- Publisher: release.yml on Soju06/codex-lb
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: codex_lb-1.3.1-py3-none-any.whl
- Subject digest: `cfbdb486ff5325b6f08ae978581843ee426f1475df515cff9cc717f0564e2afc`
- Sigstore transparency entry: 1074852013
- Permalink: Soju06/codex-lb@a13cdaa009c73b9e41dadce9adf1bf8f314f1762
- Branch / Tag: refs/tags/v1.3.1
- Owner: https://github.com/Soju06
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@a13cdaa009c73b9e41dadce9adf1bf8f314f1762
- Trigger Event: release