
OpenAI API Server via Codex

Use Codex from OpenAI-compatible clients, agents, and scripts.

Run a local /v1 server backed by your own Codex login:

$ uvx openai-api-server-via-codex

Point openai-python or other OpenAI-compatible tools at http://127.0.0.1:18080/v1. Supports Responses and Chat Completions, including streaming.

Why use this

  • reuse existing OpenAI SDK integrations with Codex
  • run local prototypes and agents without client-side rewrites
  • use your ChatGPT plan's Codex access in personal or trusted dev workflows

This project is a compatibility layer, not a replacement for the official OpenAI Platform API. It does not bypass Codex or ChatGPT plan limits, and it is not intended for reselling access, powering third-party services, or exposing a public API backed by your ChatGPT account.

[!IMPORTANT] This is not the official OpenAI Platform API. Use it only with accounts and subscriptions you are allowed to use, and follow OpenAI's terms and usage policies. Do not share your Codex credentials or use this server to provide access to other people.

Usage

Start with uvx

If Codex is already logged in on the machine, start the server with one command:

$ uvx openai-api-server-via-codex
Codex auth preflight OK: /home/you/.codex/auth.json (account_id_present=True)
INFO:     Uvicorn running on http://127.0.0.1:18080 (Press CTRL+C to quit)

The default server URL is http://127.0.0.1:18080. OpenAI-compatible API endpoints are served under /v1, for example http://127.0.0.1:18080/v1/responses.

[!TIP] uvx is uv's tool-run command. If you do not have uv installed yet, follow the official uv documentation: https://docs.astral.sh/uv/.

To force uvx to use the latest published package instead of a cached copy:

$ uvx --refresh-package openai-api-server-via-codex openai-api-server-via-codex

[!NOTE] This is a compatibility server for local or trusted environments. By default it does not authenticate incoming requests. Set --api-key when binding to anything other than localhost.

Call the Responses API

Point openai-python at the local server:

from openai import OpenAI

client = OpenAI(api_key="dummy", base_url="http://127.0.0.1:18080/v1")

response = client.responses.create(
    model="gpt-5.5",
    input="Reply in one sentence.",
    reasoning={"effort": "low"},
)
print(response.output_text)

api_key="dummy" is only a placeholder required by the OpenAI SDK. The local server ignores incoming API keys unless you configure --api-key.

Use Chat Completions

chat = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "Hello"}],
    reasoning_effort="low",
)
print(chat.choices[0].message.content)

Stream a response

stream = client.responses.create(
    model="gpt-5.5",
    input="Stream a short reply.",
    stream=True,
    reasoning={"effort": "low"},
)

for event in stream:
    if event.type == "response.output_text.delta":
        print(event.delta, end="")

Run as a background daemon

$ uvx openai-api-server-via-codex start
Codex auth preflight OK: /home/you/.codex/auth.json (account_id_present=True)
Started openai-api-server-via-codex on 127.0.0.1:18080
PID: 12345
PID file: /home/you/.config/openai-api-server-via-codex/run/server-127.0.0.1-18080.pid
Log file: /home/you/.config/openai-api-server-via-codex/run/server-127.0.0.1-18080.log

$ uvx openai-api-server-via-codex status
$ uvx openai-api-server-via-codex stop

Expose the server to other machines only with access control:

$ uvx openai-api-server-via-codex start \
  --host 0.0.0.0 \
  --api-key local-secret

Then connect clients to http://<server-host>:18080/v1 and pass api_key="local-secret" to the OpenAI client.
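
For example, with openai-python (replace <server-host> with the machine running this server; a minimal sketch):

from openai import OpenAI

client = OpenAI(
    api_key="local-secret",
    base_url="http://<server-host>:18080/v1",  # replace <server-host>
)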

Installation options

Run without installing:

$ uvx openai-api-server-via-codex

Install the command onto your standard user tool path:

$ uv tool install openai-api-server-via-codex
$ openai-api-server-via-codex --help

Upgrade an installed tool:

$ uv tool upgrade openai-api-server-via-codex
$ openai-api-server-via-codex --version

For development from this checkout:

$ uv sync --dev
$ uv run openai-api-server-via-codex --help

Requirements

  • Python 3.11+
  • uv
  • A working Codex login, usually at ~/.codex/auth.json

Use an explicit Codex auth file when needed:

$ uvx openai-api-server-via-codex --auth-json ~/.codex/auth.json
$ OPENAI_VIA_CODEX_AUTH_JSON=~/.codex/auth.json uvx openai-api-server-via-codex

The serve and start commands validate the Codex auth file before starting. The server exits before binding the HTTP port if the file is missing, is not valid JSON, is not a ChatGPT Codex auth file, lacks tokens, is expired without a refresh token, or fails token refresh.

[!NOTE] The incoming OpenAI-compatible API key and the Codex auth file are separate. --api-key protects this local server. --auth-json selects the Codex credentials used by the server when it calls the Codex backend.

Disclaimer

Use this project at your own risk. It is not the official OpenAI Platform API and is not endorsed or supported by OpenAI. It forwards requests to the Codex HTTP backend used by the Codex CLI and ChatGPT subscription flow instead of api.openai.com.

For reference, Simon Willison describes this route as a semi-official OpenAI Codex backdoor API. That matches this project's practical model: it uses the ChatGPT/Codex backend available through your own logged-in Codex credentials, and that backend may change without notice.

Use this server only with accounts and subscriptions you are allowed to use. Do not use it to evade limits, share account access, resell access, or power third-party services. Do not expose it to untrusted networks without --api-key or another access control layer, and follow OpenAI's Terms of Use and Usage Policies.

API endpoints

Method  Path
------  ----
GET     /healthz
GET     /v1/models
POST    /v1/responses
GET     /v1/responses/{response_id}
DELETE  /v1/responses/{response_id}
POST    /v1/responses/{response_id}/cancel
POST    /v1/responses/input_tokens
POST    /v1/chat/completions
GET     /v1/chat/completions
GET     /v1/chat/completions/{completion_id}
POST    /v1/chat/completions/{completion_id}
DELETE  /v1/chat/completions/{completion_id}
GET     /v1/chat/completions/{completion_id}/messages
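
As a quick liveness check, /healthz can be polled with the Python standard library alone; a minimal sketch, assuming the default host and port (the response body format is not documented here):

import urllib.request

# /healthz requires no API key, even when --api-key is set.
with urllib.request.urlopen("http://127.0.0.1:18080/healthz") as resp:
    print(resp.status, resp.read().decode())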

Compatibility

The server supports both sync and async openai-python clients for the main OpenAI APIs:

  • client.responses.create(...)
  • client.chat.completions.create(...)
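
For example, the async client is used the same way as the sync one; a minimal sketch:

import asyncio

from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI(api_key="dummy", base_url="http://127.0.0.1:18080/v1")
    response = await client.responses.create(
        model="gpt-5.5",
        input="Reply in one sentence.",
    )
    print(response.output_text)

asyncio.run(main())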

Supported behavior includes:

  • stream=True for Responses and Chat Completions
  • previous_response_id for Responses, backed by local in-memory context (see the sketch after this list)
  • standard Chat Completions multi-turn through the messages list
  • function and tool calling, including streaming tool-call arguments
  • JSON mode and structured outputs
  • URL and data URL image parts
  • reasoning effort fields where the selected model accepts them
  • stored Chat Completions compatibility APIs backed by local in-memory storage
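
A minimal sketch of chaining turns with previous_response_id. Because the context lives in this server's in-memory store, it does not survive a restart:

first = client.responses.create(
    model="gpt-5.5",
    input="Remember the number 42.",
)

# Continue the conversation by referencing the previous response.
second = client.responses.create(
    model="gpt-5.5",
    input="What number did I ask you to remember?",
    previous_response_id=first.id,
)
print(second.output_text)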

For Codex compatibility, backend requests are normalized to streaming Responses calls with store=false, low text verbosity by default, automatic tool choice defaults, and reasoning.encrypted_content included for reasoning context. Public store=true behavior is implemented locally.
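
Because store=true is handled locally, a stored response can be fetched and deleted through this server's own endpoints; a minimal sketch:

stored = client.responses.create(
    model="gpt-5.5",
    input="Store this reply.",
    store=True,
)

# Served from the local in-memory store, not the Codex backend.
fetched = client.responses.retrieve(stored.id)
print(fetched.output_text)
client.responses.delete(stored.id)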

[!NOTE] Model listing is best-effort because the upstream Codex HTTP model catalog can differ from the models that a subscription can actually run. As of 2026-05-06, with a ChatGPT Pro subscription, gpt-5.3-codex-spark did not appear in GET /v1/models in our live test, but direct requests using model="gpt-5.3-codex-spark" succeeded. OpenAI also describes GPT-5.3-Codex-Spark as a research preview for ChatGPT Pro users.

Configuration

Generate a default config file:

$ uvx openai-api-server-via-codex config-generate
$ uvx openai-api-server-via-codex config-generate --stdout

The default config path is:

$XDG_CONFIG_HOME/openai-api-server-via-codex/config.toml

If XDG_CONFIG_HOME is unset, this becomes:

~/.config/openai-api-server-via-codex/config.toml

You can also set OPENAI_VIA_CODEX_CONFIG or pass --config to serve, start, stop, and status.

Resolution order is:

CLI flag -> environment variable -> config file -> default

Example config:

[server]
host = "127.0.0.1"
port = 18080
default_model = "gpt-5.5"
timeout = 300.0
verbose = false
max_stored_items = 1000
max_concurrent_requests = 10
# api_key = "change-me"

[codex]
auth_json = "~/.codex/auth.json"
backend_base_url = "https://chatgpt.com/backend-api/codex"
client_version = "1.0.0"

[daemon]
state_dir = "~/.config/openai-api-server-via-codex/run"
# pid_file = "/path/to/openai-api-server-via-codex.pid"
# log_file = "/path/to/openai-api-server-via-codex.log"
stop_timeout = 10.0

server.host

Default: 127.0.0.1

$ uvx openai-api-server-via-codex --host 0.0.0.0

[!IMPORTANT] If you bind to 0.0.0.0, set --api-key or put the server behind another trusted access-control layer. Otherwise anyone who can reach the port can use your Codex credentials through this server.

server.port

Default: 18080

$ uvx openai-api-server-via-codex --port 18080

server.api_key

Default: unset

When unset, incoming Authorization headers are accepted and ignored.

When set, /v1/... routes require:

Authorization: Bearer <api_key>

/healthz remains unauthenticated.

$ uvx openai-api-server-via-codex --api-key local-secret
$ OPENAI_VIA_CODEX_API_KEY=local-secret uvx openai-api-server-via-codex
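
A client must then send the same key; a minimal check against the models endpoint, assuming the default host and port:

from openai import OpenAI

client = OpenAI(api_key="local-secret", base_url="http://127.0.0.1:18080/v1")
print([m.id for m in client.models.list().data])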

start passes the API key to the background serve process through the child environment, not through the child command-line arguments.

server.max_stored_items

Default: 1000

This bounds the in-memory stores used for Responses context and stored Chat Completions compatibility. Older entries are evicted first.

Set 0 to disable these stores. That also disables local previous_response_id chaining and stored-object retrieval.

server.max_concurrent_requests

Default: 10

This bounds concurrent Codex backend calls. Streaming responses hold a slot until the stream ends.

Set 0 to remove the local concurrency cap.

server.timeout

Default: 300.0

Timeout in seconds for Codex backend calls.

server.verbose

Default: false

Verbose mode enables debug-level uvicorn logs and application diagnostics:

  • resolved settings
  • request start/end status and latency
  • endpoint-level summaries
  • model-list fallback reasons
  • Codex HTTP stream/auth activity

Raw auth tokens are not logged. Token-like values in upstream errors or query strings are redacted to a short prefix plus ******.

$ uvx openai-api-server-via-codex --verbose
$ uvx openai-api-server-via-codex status --verbose
$ uvx openai-api-server-via-codex stop --verbose

codex.auth_json

Default: ~/.codex/auth.json

Selects the Codex ChatGPT OAuth credentials that the server uses when it calls the Codex backend.

daemon.state_dir

Default:

~/.config/openai-api-server-via-codex/run

start, stop, and status resolve PID and log paths from this directory by default. The default PID/log stem is derived from host and port.

If stop or status is run without --host and the exact default PID file is missing, the command looks for a single PID file matching the selected port. If multiple matches exist, it refuses to guess and asks for --host or --pid-file.

Recipes

Require an API key

$ uvx openai-api-server-via-codex --api-key local-secret

Then pass the matching key from the client:

from openai import OpenAI

client = OpenAI(
    api_key="local-secret",
    base_url="http://127.0.0.1:18080/v1",
)

Start on all interfaces

$ uvx openai-api-server-via-codex start \
  --host 0.0.0.0 \
  --port 18080 \
  --api-key local-secret \
  --verbose

Use a custom config

$ uvx openai-api-server-via-codex config-generate --config ./config.toml
$ uvx openai-api-server-via-codex --config ./config.toml

Use Chat Completions streaming

stream = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "Stream a short reply."}],
    stream=True,
    reasoning_effort="low",
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

Send image input

response = client.responses.create(
    model="gpt-5.5",
    input=[
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "Describe this image."},
                {
                    "type": "input_image",
                    "image_url": "data:image/png;base64,...",
                },
            ],
        }
    ],
)
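
The base64 payload above is elided; a minimal sketch for building the data URL from a local PNG (image.png is a hypothetical path):

import base64

# Encode the file and embed it as a data URL.
with open("image.png", "rb") as f:  # hypothetical local file
    encoded = base64.b64encode(f.read()).decode("ascii")
data_url = f"data:image/png;base64,{encoded}"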

Use tool calling

response = client.chat.completions.create(
    model="gpt-5.5",
    messages=[{"role": "user", "content": "What is the weather in Tokyo?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
)
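
When the model decides to call the function, the answer arrives as a tool call rather than text. A minimal sketch of reading it and sending a stubbed result back for the final answer (the weather payload here is fabricated for illustration):

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)

    followup = client.chat.completions.create(
        model="gpt-5.5",
        messages=[
            {"role": "user", "content": "What is the weather in Tokyo?"},
            message,  # the assistant turn containing the tool call
            {
                "role": "tool",
                "tool_call_id": call.id,
                "content": '{"city": "Tokyo", "forecast": "sunny"}',
            },
        ],
    )
    print(followup.choices[0].message.content)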

Development

Run the full local validation suite:

$ uv run tox

Run focused tests while changing request/response compatibility:

$ uv run python -m pytest tests/test_openai_compat_server.py -q
$ uv run ruff check .
$ uv run ty check

Run live Codex integration tests only when real network/auth testing is intended:

$ RUN_CODEX_LIVE_TESTS=1 uv run python -m pytest tests/test_live_integration.py -q
$ RUN_CODEX_LIVE_TESTS=1 uv run python -m pytest tests/test_live_codex_http_compatibility.py -q -s

The live tests use the machine's existing Codex credentials and make real model requests.

Release

The package is released to PyPI through GitHub Actions Trusted Publishing. Use the release checklist in docs/release.md.

The recommended production path is PyPI Trusted Publishing from GitHub Actions with the pypi environment. Local release work should build, inspect, and smoke-test the artifacts before the tag is pushed.

License

Apache License 2.0. See LICENSE.

Acknowledgements

  • Simon Willison's article, A pelican for GPT-5.5 via the semi-official Codex backdoor API, and the implementation described there were the key references for this project. Without that article, this approach likely would not have been implemented here. Thank you to Simon for documenting the route clearly.
  • OpenClaw was a useful reference for understanding Codex backend integration patterns.
  • Pi Monorepo was a useful reference for Codex backend API behavior and compatibility details.

Author

Yuichi Tateno
