
Codex auth helpers for pydantic-ai OpenAI Responses models.


codex-auth-helper

codex-auth-helper turns an existing local Codex auth session into either:

  • a pydantic-ai Responses model
  • a LangChain ChatOpenAI model pinned to the OpenAI Responses API

It reads ~/.codex/auth.json, refreshes access tokens when needed, builds Codex-specific OpenAI clients for the Responses endpoint, and returns either a ready-to-use CodexResponsesModel or a LangChain chat model.

What It Does

  • Reads tokens from ~/.codex/auth.json
  • Derives ChatGPT-Account-Id from the auth file or token claims
  • Refreshes expired access tokens with https://auth.openai.com/oauth/token
  • Writes refreshed tokens back to the auth file
  • Builds an OpenAI-compatible client pointed at https://chatgpt.com/backend-api/codex
  • Returns a pydantic-ai responses model that already applies the Codex backend requirements
  • Returns a LangChain ChatOpenAI model configured for the Responses API

The helper enforces two backend-specific behaviors for you:

  • openai_store=False
  • streamed responses even when pydantic-ai calls the non-streamed request() path
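
Token refresh hinges on reading the expiry out of the access token, which is a JWT whose claims can be decoded without verification. The sketch below is illustrative only, not the helper's actual internals; claim names such as `exp` are standard JWT fields, but the helper's real parsing may differ:

```python
import base64
import json
import time


def decode_jwt_claims(token: str) -> dict:
    """Decode the (unverified) claims segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))


def is_expired(token: str, leeway: int = 60) -> bool:
    """True if the token's exp claim is within `leeway` seconds of now."""
    return decode_jwt_claims(token).get("exp", 0) <= time.time() + leeway


# Build a fake token just to demonstrate the decode path.
claims = {"exp": int(time.time()) + 3600, "sub": "user"}
fake = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).rstrip(b"=").decode()
    for part in ({"alg": "none"}, claims)
) + "."
print(is_expired(fake))  # False: a one-hour token is not expired
```

A real refresh flow would check this before each request and, when the check fails, exchange the refresh token at the OAuth endpoint and write the result back to the auth file, as described above.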

What It Does Not Do

  • It does not log you into Codex
  • It does not create ~/.codex/auth.json
  • It does not provide generic Chat Completions wiring
  • It does not replace pydantic-ai; it only provides a model/client factory

Install

uv add codex-auth-helper
pip install codex-auth-helper

For LangChain usage:

uv add "codex-auth-helper[langchain]"
pip install "codex-auth-helper[langchain]"

You also need an existing Codex auth session on the same machine:

~/.codex/auth.json

If you have not logged in yet:

codex login

Quick Start

from codex_auth_helper import create_codex_responses_model
from pydantic_ai import Agent

model = create_codex_responses_model(
    "gpt-5.4",
    instructions="You are a helpful coding assistant.",
)
agent = Agent(model)

result = agent.run_sync("What's up?")
print(result.output)

LangChain Quick Start

from codex_auth_helper import create_codex_chat_openai
from langchain.agents import create_agent

graph = create_agent(
    model=create_codex_chat_openai(
        "gpt-5.4",
        instructions="You are a helpful coding assistant.",
    ),
    tools=[],
    name="codex-graph",
)

The LangChain helper returns langchain_openai.ChatOpenAI configured to:

  • use the Codex Responses endpoint
  • reuse local Codex auth state
  • keep use_responses_api=True
  • default to output_version="responses/v1"
  • require instructions= and pass it through to the Responses request

instructions is mandatory for create_codex_chat_openai(...). The helper does not inject an implicit system prompt on the LangChain path; callers must state the behavior they want explicitly.

The same requirement applies to create_codex_responses_model(...) on the pydantic-ai path. Pass the Codex system behavior to the helper directly rather than relying on a separate agent-level instruction just to seed the model.

Custom Auth Path

If you want to read a different auth file, pass a custom config:

from pathlib import Path

from codex_auth_helper import CodexAuthConfig, create_codex_responses_model

config = CodexAuthConfig(auth_path=Path("/tmp/codex-auth.json"))
model = create_codex_responses_model(
    "gpt-5.4",
    config=config,
    instructions="You are a helpful coding assistant.",
)

Passing Extra OpenAI Responses Settings

Additional OpenAIResponsesModelSettings can be passed through. The helper keeps openai_store=False unless you explicitly override the model after construction.

from codex_auth_helper import create_codex_responses_model

model = create_codex_responses_model(
    "gpt-5.4",
    instructions="You are a helpful coding assistant.",
    settings={
        "openai_reasoning_summary": "concise",
    },
)

Lower-Level Client Factory

If you only want the authenticated OpenAI client, use create_codex_async_openai(...):

from codex_auth_helper import create_codex_async_openai

client = create_codex_async_openai()

This returns CodexAsyncOpenAI, a subclass of openai.AsyncOpenAI.

If you need the sync OpenAI client, use create_codex_openai(...).
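
In outline, the client wiring amounts to pointing an OpenAI client at the Codex base URL with the account header attached. A standard-library-only sketch of the pieces involved; the exact header set the helper sends is an assumption beyond ChatGPT-Account-Id, which is named above:

```python
CODEX_BASE_URL = "https://chatgpt.com/backend-api/codex"


def codex_headers(access_token: str, account_id: str) -> dict:
    """Headers a Codex-backed client would attach to each request (sketch)."""
    return {
        "Authorization": f"Bearer {access_token}",
        "ChatGPT-Account-Id": account_id,
    }


headers = codex_headers("tok", "acct-123")
print(headers["ChatGPT-Account-Id"])  # acct-123
```

The factory functions spare you from assembling this by hand and keep the token fresh across calls.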

Public API

from codex_auth_helper import (
    CodexAsyncOpenAI,
    CodexAuthConfig,
    CodexAuthState,
    CodexAuthStore,
    CodexOpenAI,
    CodexResponsesModel,
    CodexTokenManager,
    create_codex_async_openai,
    create_codex_chat_openai,
    create_codex_openai,
    create_codex_responses_model,
)

Errors

Typical failure modes:

  • "Codex auth file was not found ...": the machine is not logged into Codex yet.
  • "Codex auth file ... does not contain valid JSON": the auth file is corrupt or partially written.
  • "ModelHTTPError ... Store must be set to false": you are not using the helper-backed model instance.
  • "ModelHTTPError ... Stream must be set to true": you are not using CodexResponsesModel.
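
The first two failure modes map onto stdlib exceptions when loading the auth file. A minimal sketch of how a caller might tell them apart (the error messages and file contents here are illustrative):

```python
import json
import tempfile
from pathlib import Path


def load_auth_file(path: Path) -> dict:
    """Load a Codex-style auth file, translating stdlib errors into clearer ones."""
    try:
        return json.loads(path.read_text())
    except FileNotFoundError as exc:
        raise RuntimeError(f"Codex auth file was not found at {path}") from exc
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"Codex auth file {path} does not contain valid JSON") from exc


# Demonstrate the happy path with a throwaway file.
auth_path = Path(tempfile.mkdtemp()) / "auth.json"
auth_path.write_text('{"tokens": {"access_token": "tok"}}')
print(load_auth_file(auth_path)["tokens"]["access_token"])  # tok
```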

Package Notes

This package is intentionally small and focused:

  • auth file parsing
  • token refresh
  • Codex-specific OpenAI client wiring
  • pydantic-ai responses model factory
  • LangChain Responses-model factory
