
codex-lb

Load balancer for ChatGPT accounts. Pool multiple accounts, track usage, manage API keys, view everything in a dashboard.

Screenshots: dashboard, accounts, settings, and login pages (light and dark themes).

Features

  • Account Pooling: load balance across multiple ChatGPT accounts
  • Usage Tracking: per-account tokens, cost, and 28-day trends
  • API Keys: per-key rate limits by token, cost, window, and model
  • Dashboard Auth: password + optional TOTP
  • OpenAI-compatible: works with Codex CLI, OpenCode, and any OpenAI client
  • Auto Model Sync: available models fetched from upstream

Quick Start

# Docker (recommended)
docker volume create codex-lb-data
docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

# or uvx
uvx codex-lb

Open localhost:2455 → Add account → Done.

Remote Setup

When accessing the dashboard remotely for the first time, a bootstrap token is required to set the initial password.

Auto-generated (default): On first startup (no password configured), the server generates a one-time token and prints it to logs:

docker logs codex-lb
# ============================================
#   Dashboard bootstrap token (first-run):
#   <token>
# ============================================

Open the dashboard → enter the token + new password → done. The token is shared across replicas and remains valid until a password is set. In multi-replica setups, replicas must share the same encryption key (the Helm chart default) for restart recovery to work.

Manual token: To use a fixed token instead, set the env var before starting:

docker run -d --name codex-lb \
  -e CODEX_LB_DASHBOARD_BOOTSTRAP_TOKEN=your-secret-token \
  -p 2455:2455 -p 1455:1455 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

Local access (localhost) bypasses bootstrap entirely — no token needed.

Client Setup

Point any OpenAI-compatible client at codex-lb. If API key auth is enabled, pass a key from the dashboard as a Bearer token.

  Client              Endpoint                                  Config
  Codex CLI           http://127.0.0.1:2455/backend-api/codex   ~/.codex/config.toml
  OpenCode            http://127.0.0.1:2455/v1                  ~/.config/opencode/opencode.json
  OpenClaw            http://127.0.0.1:2455/v1                  ~/.openclaw/openclaw.json
  OpenAI Python SDK   http://127.0.0.1:2455/v1                  code below

Codex CLI / IDE Extension

~/.codex/config.toml:

model = "gpt-5.3-codex"
model_reasoning_effort = "xhigh"
model_provider = "codex-lb"

[model_providers.codex-lb]
name = "OpenAI"  # required — enables remote /responses/compact
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
supports_websockets = true
requires_openai_auth = true # required for codex app

Optional: enable native upstream WebSockets for Codex streaming while keeping codex-lb pooling:

export CODEX_LB_UPSTREAM_STREAM_TRANSPORT=websocket

The default, auto, uses native WebSockets for requests with native Codex headers or for models that prefer them. You can also switch this in the dashboard under Settings → Routing → Upstream stream transport.

Note: Codex itself does not currently expose a stable, documented wire_api = "websocket" provider mode. If you want to experiment on the Codex side, the current CLI exposes under-development feature flags:

[features]
responses_websockets = true
# or
responses_websockets_v2 = true

These flags are experimental and do not replace wire_api = "responses".

If upstream websocket handshakes must use environment proxies in your deployment, set CODEX_LB_UPSTREAM_WEBSOCKET_TRUST_ENV=true. By default websocket handshakes connect directly to match Codex CLI's native transport.
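
For example, a sketch of a proxied deployment; the HTTPS_PROXY address is illustrative, and the rest mirrors the Quick Start run:

docker run -d --name codex-lb \
  -e CODEX_LB_UPSTREAM_WEBSOCKET_TRUST_ENV=true \
  -e HTTPS_PROXY=http://proxy.internal:3128 \
  -p 2455:2455 -p 1455:1455 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest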

With API key auth:

[model_providers.codex-lb]
name = "OpenAI"
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
env_key = "CODEX_LB_API_KEY"
supports_websockets = true
requires_openai_auth = true # required for codex app

export CODEX_LB_API_KEY="sk-clb-..."   # key from dashboard
codex

Verify WebSocket transport

Use a one-off debug run:

RUST_LOG=debug codex exec "Reply with OK only."

Healthy websocket signals:

  • CLI logs contain connecting to websocket and successfully connected to websocket
  • codex-lb logs show WebSocket /backend-api/codex/responses
  • codex-lb logs do not show fallback POST /backend-api/codex/responses for the same run

If you run codex-lb behind a reverse proxy, make sure it forwards WebSocket upgrades.
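
A quick way to check upgrade forwarding is a raw handshake through the proxy; a sketch with an illustrative hostname (the path matches the log line above):

# Expect HTTP/1.1 101 Switching Protocols; Ctrl-C once the headers arrive.
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(openssl rand -base64 16)" \
  https://codex-lb.example.com/backend-api/codex/responses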

Migrating from direct OpenAI: codex resume filters sessions by model_provider, so old sessions won't appear until you re-tag them:

# JSONL session files (all versions); BSD/macOS sed shown, on GNU/Linux drop the '' after -i
find ~/.codex/sessions -name '*.jsonl' \
  -exec sed -i '' 's/"model_provider":"openai"/"model_provider":"codex-lb"/g' {} +

# SQLite state DB (>= v0.105.0, creates ~/.codex/state_*.sqlite)
sqlite3 ~/.codex/state_5.sqlite \
  "UPDATE threads SET model_provider = 'codex-lb' WHERE model_provider = 'openai';"
OpenCode

Important: Use the built-in openai provider with baseURL override — not a custom provider with @ai-sdk/openai-compatible. Custom providers use the Chat Completions API which drops reasoning/thinking content. The built-in openai provider uses the Responses API, which properly preserves encrypted_content and multi-turn reasoning state.

Before starting, make sure all existing OpenAI credentials are cleared from ~/.local/share/opencode/auth.json. You can clean the config with this one-liner:

jq 'del(.openai)' ~/.local/share/opencode/auth.json > auth.json.tmp && mv auth.json.tmp ~/.local/share/opencode/auth.json

~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "options": {
        "baseURL": "http://127.0.0.1:2455/v1",
        "apiKey": "{env:CODEX_LB_API_KEY}"
      },
      "models": {
        "gpt-5.4": {
          "name": "GPT-5.4",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 1050000, "output": 128000 }
        },
        "gpt-5.3-codex": {
          "name": "GPT-5.3 Codex",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 272000, "output": 65536 }
        },
        "gpt-5.1-codex-mini": {
          "name": "GPT-5.1 Codex Mini",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 272000, "output": 65536 }
        },
        "gpt-5.3-codex-spark": {
          "name": "GPT-5.3 Codex Spark",
          "reasoning": true,
          "options": { "reasoningEffort": "xhigh", "reasoningSummary": "detailed" },
          "limit": { "context": 128000, "output": 65536 }
        }
      }
    }
  },
  "model": "openai/gpt-5.3-codex"
}

This overrides the built-in openai provider's endpoint to point at codex-lb while keeping the Responses API code path that handles reasoning properly.

export CODEX_LB_API_KEY="sk-clb-..."   # key from dashboard
opencode
OpenClaw

~/.openclaw/openclaw.json:

{
  "agents": {
    "defaults": {
      "model": { "primary": "codex-lb/gpt-5.4" },
      "models": {
        "codex-lb/gpt-5.4": { "params": { "cacheRetention": "short" } }
        "codex-lb/gpt-5.4-mini": { "params": { "cacheRetention": "short" } }
        "codex-lb/gpt-5.3-codex": { "params": { "cacheRetention": "short" } }
      }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "codex-lb": {
        "baseUrl": "http://127.0.0.1:2455/v1",
        "apiKey": "${CODEX_LB_API_KEY}",   // or "dummy" if API key auth is disabled
        "api": "openai-responses",
        "models": [
          {
            "id": "gpt-5.4",
            "name": "gpt-5.4 (codex-lb)",
            "contextWindow": 1050000,
            "contextTokens": 272000,
            "maxTokens": 4096,
            "input": ["text"],
            "reasoning": false
          },
          {
            "id": "gpt-5.4-mini",
            "name": "gpt-5.4-mini (codex-lb)",
            "contextWindow": 400000,
            "contextTokens": 272000,
            "maxTokens": 4096,
            "input": ["text"],
            "reasoning": false
          },
          {
            "id": "gpt-5.3-codex",
            "name": "gpt-5.3-codex (codex-lb)",
            "contextWindow": 400000,
            "contextTokens": 272000,
            "maxTokens": 4096,
            "input": ["text"],
            "reasoning": false
          }
        ]
      }
    }
  }
}

Set the env var or replace ${CODEX_LB_API_KEY} with a key from the dashboard. If API key auth is disabled, local requests can omit the key, but non-local requests are still rejected until proxy authentication is configured.
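
As with the other clients, export the key before launching (assuming an openclaw entrypoint; adjust to however you start OpenClaw):

export CODEX_LB_API_KEY="sk-clb-..."   # key from dashboard
openclaw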

OpenAI Python SDK

from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:2455/v1",
    api_key="sk-clb-...",  # from dashboard, or any non-empty string if auth is disabled
)

response = client.chat.completions.create(
    model="gpt-5.3-codex",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

API Key Authentication

API key auth is disabled by default. In that mode, only local requests to the protected proxy routes can proceed without a key; non-local requests are rejected until proxy authentication is configured. Enable it in Settings → API Key Auth on the dashboard when clients connect remotely or through Docker, VM, or container networking that appears non-local to the service.

When enabled, clients must pass a valid API key as a Bearer token:

Authorization: Bearer sk-clb-...

The protected proxy routes covered by this setting are:

  • /v1/* (except /v1/usage, which always requires a valid key)
  • /backend-api/codex/*
  • /backend-api/transcribe

Creating keys: Dashboard → API Keys → Create. The full key is shown only once at creation. Keys support optional expiration, model restrictions, and rate limits (tokens / cost per day / week / month).
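
To verify a freshly created key, a minimal sketch against the model listing route (assuming the standard OpenAI-compatible /v1/models endpoint is exposed):

curl -H "Authorization: Bearer sk-clb-..." http://127.0.0.1:2455/v1/models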

Configuration

Configure codex-lb through environment variables with the CODEX_LB_ prefix or a .env.local file (see .env.example). SQLite is the default database backend; PostgreSQL is optional via CODEX_LB_DATABASE_URL (for example postgresql+asyncpg://...).
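
For example, a minimal .env.local sketch; the PostgreSQL DSN is illustrative, and both variables are documented elsewhere in this README:

# .env.local
CODEX_LB_DASHBOARD_BOOTSTRAP_TOKEN=your-secret-token
CODEX_LB_DATABASE_URL=postgresql+asyncpg://codex:changeme@db:5432/codex_lb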

Dashboard authentication modes

codex-lb supports three dashboard auth modes via environment variables:

  • CODEX_LB_DASHBOARD_AUTH_MODE=standard — built-in dashboard password with optional TOTP from the Settings page.
  • CODEX_LB_DASHBOARD_AUTH_MODE=trusted_header — trust a reverse-proxy auth header such as Authelia's Remote-User, but only from CODEX_LB_FIREWALL_TRUSTED_PROXY_CIDRS. Built-in password/TOTP remain available as an optional fallback, and password/TOTP management still requires a fallback password session.
  • CODEX_LB_DASHBOARD_AUTH_MODE=disabled — fully bypass dashboard auth. Use only behind network restrictions or external auth. Built-in password/TOTP management is disabled in this mode.

trusted_header mode also requires:

CODEX_LB_FIREWALL_TRUST_PROXY_HEADERS=true
CODEX_LB_FIREWALL_TRUSTED_PROXY_CIDRS=172.18.0.0/16
CODEX_LB_DASHBOARD_AUTH_PROXY_HEADER=Remote-User

If the trusted header is missing and no fallback password is configured, the dashboard fails closed and shows a reverse-proxy-required message instead of loading the UI.

Docker examples

Authelia / trusted header

docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -e CODEX_LB_DASHBOARD_AUTH_MODE=trusted_header \
  -e CODEX_LB_DASHBOARD_AUTH_PROXY_HEADER=Remote-User \
  -e CODEX_LB_FIREWALL_TRUST_PROXY_HEADERS=true \
  -e CODEX_LB_FIREWALL_TRUSTED_PROXY_CIDRS=172.18.0.0/16 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

Hard override / no app-level dashboard auth

docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -e CODEX_LB_DASHBOARD_AUTH_MODE=disabled \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

For Helm, pass the same values through extraEnv.
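
For example, a sketch of the trusted-header settings via extraEnv (assuming the chart accepts name/value pairs there, as most charts do):

helm upgrade --install codex-lb oci://ghcr.io/soju06/charts/codex-lb \
  --set "extraEnv[0].name=CODEX_LB_DASHBOARD_AUTH_MODE" \
  --set "extraEnv[0].value=trusted_header" \
  --set "extraEnv[1].name=CODEX_LB_DASHBOARD_AUTH_PROXY_HEADER" \
  --set "extraEnv[1].value=Remote-User"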

Data

  Environment   Path
  Local / uvx   ~/.codex-lb/
  Docker        /var/lib/codex-lb/

Backup this directory to preserve your data.
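
For the Docker volume, a throwaway container can archive the data directory; a sketch:

docker run --rm \
  -v codex-lb-data:/var/lib/codex-lb:ro \
  -v "$PWD":/backup \
  alpine tar czf /backup/codex-lb-backup.tar.gz -C /var/lib/codex-lb .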

Kubernetes

helm install codex-lb oci://ghcr.io/soju06/charts/codex-lb \
  --set postgresql.auth.password=changeme \
  --set config.databaseMigrateOnStartup=true \
  --set migration.schemaGate.enabled=false
kubectl port-forward svc/codex-lb 2455:2455

Open localhost:2455 → Add account → Done.

The Helm chart auto-configures HTTP /responses owner handoff for multi-replica installs using a headless-service DNS name per pod. The default cluster domain is cluster.local; set the chart's clusterDomain value if your cluster uses a different suffix. Override config.sessionBridgeAdvertiseBaseUrl only if pods must be reached through a different internal address.
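
For example, with an illustrative domain:

helm upgrade --install codex-lb oci://ghcr.io/soju06/charts/codex-lb \
  --set clusterDomain=custom.domain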

For external database, production config, ingress, observability, and more see the Helm chart README.

Development

# Docker
docker compose watch

# Local
uv sync && cd frontend && bun install && cd ..
uv run fastapi run app/main.py --reload        # backend :2455
cd frontend && bun run dev                     # frontend :5173

Contributors ✨

Thanks go to these wonderful people (emoji key):

  • Soju06: 💻 ⚠️ 🚧 🚇
  • Jonas Kamsker: 💻 🐛 🚧
  • Quack: 💻 🐛 🚧 🎨
  • Jill Kok, San Mou: 💻 ⚠️ 🚧 🐛
  • PARK CHANYOUNG: 📖 💻 ⚠️
  • Choi138: 💻 🐛 ⚠️
  • LYA⚚CAP⚚OCEAN: 💻 ⚠️
  • Eugene Korekin: 💻 🐛 ⚠️
  • jordan: 💻 🐛 ⚠️
  • DOCaCola: 🐛 ⚠️ 📖
  • JoeBlack2k: 💻 🐛 ⚠️
  • Peter A.: 📖 💻 🐛
  • Hannah Markfort: 💻 ⚠️
  • mws-weekend-projects: 💻 ⚠️
  • Quang Do: 💻 ⚠️
  • Anand Aiyer: 🐛 💻 ⚠️
  • defin85: 💻 🐛 ⚠️
  • Jacky Fong: 💻 🐛 💬 🚧 ⚠️
  • flokosti96: 💻 ⚠️
  • Woonggi Min: 💻 ⚠️
  • Yigit Konur: 🐛 💻
  • Ruben: 💻 ⚠️ 🐛
  • Steve Santacroce: 💻 ⚠️ 🐛
  • Hugh Do: 💻 ⚠️
  • Hubert Salwin: 💻 ⚠️
  • Teemu Koskinen: 📖
  • Yu Peng Zheng: 📖 💻
  • embogomolov: 💻 ⚠️
  • Renat Sharipov: 💻 ⚠️
  • Liu Rui: 📖
  • OverHash: 💻 ⚠️
  • Kazet: 💻 ⚠️

This project follows the all-contributors specification. Contributions of any kind welcome!
