OpenAI-compatible HTTP server backed by locally authenticated ChatGPT Codex.

codex-openai-server

In OpenAI's announcement for GPT-5.5, they said "We'll bring GPT‑5.5 and GPT‑5.5 Pro to the API very soon."

Well, soon is now.

codex_openai_server is an OpenAI-compatible FastAPI server that connects to ChatGPT Codex using the local Codex CLI auth file.

codex login [--device-auth]
docker compose up

$env:COPILOT_PROVIDER_TYPE="openai"
$env:COPILOT_PROVIDER_BASE_URL="http://127.0.0.1:8000/v1"
$env:COPILOT_PROVIDER_API_KEY="your-super-secret-key-from-.env"
$env:COPILOT_MODEL="gpt-5.5"
$env:COPILOT_PROVIDER_WIRE_API="responses"

copilot --disable-builtin-mcps -p "Who are you?"
● Checking my documentation
  └ # GitHub Copilot CLI Documentation

I’m **GitHub Copilot CLI**, a terminal-native AI coding assistant. I can help build, edit, debug, refactor, and understand code from your command line, with GitHub and MCP-powered integrations.

I’m powered by **gpt-5.5** in this session.


Changes   +0 -0
Duration  16s
Tokens    ↑ 67.0k • ↓ 157 • 0 (cached) • 37 (reasoning)

codex_openai_server is intended for local use on a trusted machine or private network segment. It is not hardened as a public internet-facing multi-tenant service.

It is intended to look like a regular OpenAI endpoint to clients. The server exposes:

  • /v1/models
  • /v1/responses
  • /v1/chat/completions
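
A quick way to confirm the server is reachable is to list models with the bearer key from your .env (the key value here is a placeholder):

curl -H "Authorization: Bearer your-local-server-key" http://127.0.0.1:8000/v1/models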

Features:

  • async transport to the upstream Codex backend using httpx
  • streaming SSE support for Responses API and Chat Completions
  • tool call translation between Chat Completions and Codex Responses payloads (see the sketch after this list)
  • local API key protection for your compatibility server
  • optional Docker and Docker Compose local deployment
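
To illustrate the tool call translation, here is a minimal sketch (not the server's actual code) of mapping a Chat Completions tool_calls entry onto a Responses-style function_call item, based on the public OpenAI payload shapes:

def chat_tool_call_to_responses_item(tool_call: dict) -> dict:
    # Chat Completions shape:
    # {"id": ..., "type": "function",
    #  "function": {"name": ..., "arguments": "<json string>"}}
    return {
        "type": "function_call",
        "call_id": tool_call["id"],
        "name": tool_call["function"]["name"],
        "arguments": tool_call["function"]["arguments"],
    }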

Requirements

  • Python 3.10+ (Python 3.14 is preferred for local development)
  • a local Codex CLI login with auth.json

This project assumes you already trust the clients that can reach it. There is no built-in rate limiting or request-size enforcement, so do not expose it directly to the public internet without adding your own edge controls.

The server reads Codex credentials from CODEX_HOME/auth.json. By default that resolves to your local .codex directory.
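
A minimal sketch of that lookup, assuming the standard CODEX_HOME fallback behavior:

import os
from pathlib import Path

# Fall back to ~/.codex when CODEX_HOME is not set.
codex_home = Path(os.environ.get("CODEX_HOME", Path.home() / ".codex"))
auth_path = codex_home / "auth.json"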

Stability and security posture

The package metadata currently classifies this project as alpha, and that is the right expectation for upgrades and automation around it. Keep version pins explicit if you depend on exact request or deployment behavior.

This server is meant for local use on a trusted machine or private network segment. Treat auth.json, your local compatibility API key, and any logged payloads as sensitive material. Routine bugs and feature requests can go through the issue tracker; suspected vulnerabilities should follow SECURITY.md instead of a public issue.

Installation

For a published install:

python -m pip install codex-openai-server

For local development, create your virtual environment with Python 3.14 if you have it available. The package and CI still target Python 3.10+ compatibility.
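
For example, assuming a python3.14 interpreter is on your PATH:

python3.14 -m venv .venv
source .venv/bin/activate

On Windows PowerShell, activate with .venv\Scripts\Activate.ps1 instead.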

python -m pip install -e ".[dev]"

Local configuration

Create a local .env file from .env.example.

Required values:

  • OPENAI_COMPAT_API_KEY: bearer token clients must send to your local compatibility server

Optional values:

  • OPENAI_COMPAT_HOST
  • OPENAI_COMPAT_PORT
  • OPENAI_COMPAT_PUBLISHED_HOST
  • OPENAI_COMPAT_PUBLISHED_PORT
  • LOCAL_CODEX_HOME
  • OPENAI_COMPAT_LOG_LEVEL
  • OPENAI_COMPAT_LOG_FORMAT
  • OPENAI_COMPAT_DEBUG_LOGGING
  • OPENAI_COMPAT_LOG_PAYLOADS
  • OPENAI_COMPAT_LOG_UPSTREAM_BODY_LIMIT
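
A minimal .env might look like this (the key value is a placeholder; see .env.example for the authoritative list and defaults):

OPENAI_COMPAT_API_KEY=your-local-server-key
OPENAI_COMPAT_HOST=127.0.0.1
OPENAI_COMPAT_PORT=8000
LOCAL_CODEX_HOME=~/.codex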

Run locally

python -m codex_openai_server

Or use the installed console script:

codex_openai_server

Use with OpenAI clients

Point your client at http://127.0.0.1:8000/v1 and use the local API key you set in .env.

import openai

client = openai.OpenAI(
    api_key="your-local-server-key",
    base_url="http://127.0.0.1:8000/v1",
)

response = client.responses.create(
    model="gpt-5.5",
    input="Reply with exactly: hello",
)

print(response.output_text)
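
The /v1/chat/completions route works with the same client; a minimal example reusing the client from the snippet above:

completion = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "Reply with exactly: hello"}],
)

print(completion.choices[0].message.content)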

Use with Copilot CLI

For GPT-5 series models, configure the Copilot CLI to use the Responses wire API. The examples below use COPILOT_MODEL to select the model.

$env:COPILOT_PROVIDER_TYPE = "openai"
$env:COPILOT_PROVIDER_BASE_URL = "http://127.0.0.1:8000/v1"
$env:COPILOT_PROVIDER_API_KEY = "your-local-server-key"
$env:COPILOT_MODEL = "gpt-5.5"
$env:COPILOT_PROVIDER_WIRE_API = "responses"

copilot -p "who are you?" --disable-builtin-mcps --allow-all-tools --stream on

Without COPILOT_PROVIDER_WIRE_API=responses, the Copilot CLI may default to the wrong wire format for GPT-5 models.
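
On macOS or Linux shells, the equivalent configuration is:

export COPILOT_PROVIDER_TYPE="openai"
export COPILOT_PROVIDER_BASE_URL="http://127.0.0.1:8000/v1"
export COPILOT_PROVIDER_API_KEY="your-local-server-key"
export COPILOT_MODEL="gpt-5.5"
export COPILOT_PROVIDER_WIRE_API="responses"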

Logging

The server supports env-controlled proxy logging for debugging upstream compatibility issues.

OPENAI_COMPAT_LOG_LEVEL=INFO
OPENAI_COMPAT_LOG_FORMAT=text
OPENAI_COMPAT_DEBUG_LOGGING=false
OPENAI_COMPAT_LOG_PAYLOADS=false
OPENAI_COMPAT_LOG_UPSTREAM_BODY_LIMIT=4000

Set OPENAI_COMPAT_DEBUG_LOGGING=true to log request summaries for /v1/responses and /v1/chat/completions, plus upstream request and error diagnostics. Set OPENAI_COMPAT_LOG_FORMAT=json to emit structured JSON log lines for the codex_openai_server.* loggers. Set OPENAI_COMPAT_LOG_PAYLOADS=true only when you explicitly want full request bodies in logs.

The proxy automatically retries a /responses call once with a forced auth refresh after an upstream 401, 403, or 404 response. If that retry succeeds, the upstream logger emits "Upstream Codex request succeeded after auth refresh retry" with structured fields including recovered_from_status_code and retried_with_fresh_auth=true. If upstream still returns 404 after the retry, the proxy surfaces it as a 502, because that failure is treated as an upstream/auth state problem rather than a client payload problem.
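
The retry flow is roughly the following (an illustrative sketch, not the project's actual code; get_auth_headers is a hypothetical callable):

import httpx

RETRYABLE = {401, 403, 404}

async def post_with_auth_retry(client: httpx.AsyncClient, url: str, payload: dict, get_auth_headers) -> httpx.Response:
    # First attempt with the current credentials.
    response = await client.post(url, json=payload, headers=get_auth_headers())
    if response.status_code in RETRYABLE:
        # One retry with a forced auth refresh, mirroring the behavior above.
        response = await client.post(url, json=payload, headers=get_auth_headers(refresh=True))
    return response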

Docker Compose

The Compose setup mounts your local Codex auth directory read-only into the container. By default it pulls the published GHCR image for codex_openai_server pinned to the current release tag. The default auth mount path can use ~/.codex; docker compose config resolves the tilde to the actual user home directory, including on Windows.

Set LOCAL_CODEX_HOME in .env to your real Codex directory, for example:

LOCAL_CODEX_HOME=~/.codex

Then run:

docker compose up
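
The relevant service wiring looks roughly like this; the service name, container path, and image location below are assumptions, and the repo's docker-compose.yaml is authoritative:

services:
  codex_openai_server:
    image: ghcr.io/joshuasundance-swca/codex_openai_server:${CODEX_OPENAI_SERVER_IMAGE_VERSION:-v0.1.0}
    pull_policy: missing
    env_file: .env
    ports:
      - "8000:8000"
    volumes:
      - ${LOCAL_CODEX_HOME:-~/.codex}:/root/.codex:ro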

If you want to override the published image version explicitly:

CODEX_OPENAI_SERVER_IMAGE_VERSION=v0.1.0

The default published-image tag is managed in the repo and updated by bumpver, so the default pull_policy is missing rather than always. That avoids re-pulling on every run while still giving you a version-pinned default.

If you want to build and use the image locally, use the tracked override file:

docker compose -f docker-compose.yaml -f docker-compose.local.yaml up --build

That override switches the image tag to codex_openai_server:local, sets pull_policy: never, bind-mounts the repository into /workspace, and runs uvicorn --reload with PYTHONPATH=/workspace so Python code changes are picked up without rebuilding the image.

The first run still needs --build so the local image exists. After that, you can usually use:

docker compose -f docker-compose.yaml -f docker-compose.local.yaml up

On Docker Desktop for Windows, the local override forces polling-based reloads so file changes inside the bind mount are detected reliably.

The repository Dockerfile defaults to Python 3.14 for local image builds. If you want to verify the minimum supported runtime explicitly, override the build arg, for example:

docker build --build-arg PYTHON_VERSION=3.10 -t codex_openai_server:py310 .

Release management

This project uses bumpver for version and tag management.

Preview the next patch release:

bumpver update --patch --dry --no-fetch

Create the version commit and vX.Y.Z tag locally with the direct bumpver flow:

bumpver update --patch --no-push

That version bump also updates the default published Docker tag used in Compose.

The GitHub Actions release workflows are set up to publish Python artifacts to PyPI and Docker images to GHCR from version tags. The Docker publish workflow also smoke-tests the built image before it pushes release tags.

For publish-facing changes, keep CHANGELOG.md, README.md, and package metadata in sync so PyPI and GitHub release surfaces tell the same story.

Development

Run checks locally:

pre-commit run --all-files
python -m pytest

Contributor workflow notes are in CONTRIBUTING.md.

Acknowledgements

Thanks to Simon Willison for the blog post A pelican for GPT-5.5 via the semi-official Codex backdoor API and for publishing llm-openai-via-codex, which helped inspire this OpenAI-compatible Codex proxy.

Download files

Download the file for your platform.

Source Distribution

codex_openai_server-0.1.0.tar.gz (31.5 kB)

Uploaded Source

Built Distribution


codex_openai_server-0.1.0-py3-none-any.whl (22.0 kB)

Uploaded Python 3

File details

Details for the file codex_openai_server-0.1.0.tar.gz.

File metadata

  • Download URL: codex_openai_server-0.1.0.tar.gz
  • Size: 31.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for codex_openai_server-0.1.0.tar.gz:

  • SHA256: 414d4fd65eaf786597a94597e2592b7e4afd84db8ac0b0a4e7f61a1472a56a63
  • MD5: 345dc5fb589c07192d22f9f90f50e0ce
  • BLAKE2b-256: 502aa9e2d92af96a3502c2c068ad20ac84f1ceb1665df72e558e6a188e844dce


Provenance

The following attestation bundles were made for codex_openai_server-0.1.0.tar.gz:

Publisher: publish_on_pypi.yml on joshuasundance-swca/codex_openai_server


File details

Details for the file codex_openai_server-0.1.0-py3-none-any.whl.

File hashes

Hashes for codex_openai_server-0.1.0-py3-none-any.whl:

  • SHA256: 4d28c30cd53a60dafdde0a350ba1150a72b1dae5a172dca80766ff986f86f482
  • MD5: c733fe3877589d2b61f3d88e43ed3e74
  • BLAKE2b-256: 72488352114687d3145b4b10312c2af78c6f2d53810e08bc2d9ba405aef094d0


Provenance

The following attestation bundles were made for codex_openai_server-0.1.0-py3-none-any.whl:

Publisher: publish_on_pypi.yml on joshuasundance-swca/codex_openai_server

