Python SDK for the Codex CLI agent with async threads, streaming events, and structured outputs
Project description
Embed the Codex agent in Python workflows. This SDK supports both the codex exec
JSONL path and the persistent app-server JSON-RPC path, and exposes structured,
typed results for each.
- Runtime dependency-free: uses only the Python standard library.
- Codex CLI binaries are downloaded separately; use `scripts/setup_binary.py` from the repo or install the Codex CLI and set `codex_path_override`.
- Async-first API with sync helpers, streaming events, and structured output.
- Python 3.8/3.9 support is deprecated and will be removed in a future release; use Python 3.10+.
- Install the SDK: `uv add codex-sdk-python`
- Ensure a `codex` binary is available (required for local runs). From the repo source, run `python scripts/setup_binary.py` (this downloads vendor binaries from the matching npm release).
If you installed from PyPI, install the Codex CLI separately and either add it to your PATH
or pass CodexOptions.codex_path_override.
- Authenticate with Codex: run `codex login`, or export an API key: `export CODEX_API_KEY="<your-api-key>"`
- Run a first turn:
```python
import asyncio

from codex_sdk import Codex


async def main() -> None:
    codex = Codex()
    thread = codex.start_thread()
    turn = await thread.run("Diagnose the test failure and propose a fix")
    print(turn.final_response)
    print(turn.items)


if __name__ == "__main__":
    asyncio.run(main())
```
For single-turn sessions with approval handling, use the turn session wrapper:
```python
import asyncio

from codex_sdk import AppServerClient, AppServerOptions, ApprovalDecisions


async def main() -> None:
    async with AppServerClient(AppServerOptions()) as app:
        thread = await app.thread_start(model="gpt-5.4", cwd=".")
        thread_id = thread["thread"]["id"]
        session = await app.turn_session(
            thread_id,
            "Run tests and summarize failures.",
            approvals=ApprovalDecisions(command_execution="accept"),
        )
        async for notification in session.notifications():
            print(notification.method)
        final_turn = await session.wait()
        print(final_turn)


if __name__ == "__main__":
    asyncio.run(main())
```
Examples
Try the examples under `examples/`:

```bash
python examples/basic_usage.py
python examples/streaming_example.py
python examples/thread_resume.py
python examples/app_server_basic.py
python examples/app_server_fork.py
python examples/app_server_requirements.py
python examples/app_server_skill_input.py
python examples/app_server_approvals.py
python examples/app_server_turn_session.py
python examples/config_overrides.py
python examples/hooks_streaming.py
python examples/notify_hook.py
```
| Feature | Details |
|---|---|
| Persistent threads | Each `Thread` keeps context; resume by thread id or last session. |
| Streaming events | `run_streamed()` yields structured events as they happen. |
| Event hooks | `ThreadHooks` lets you react to streamed events inline. |
| Structured output | `run_json()` validates JSON output against a schema. |
| Pydantic validation | `run_pydantic()` derives schema and validates with Pydantic v2. |
| Sandbox and approvals | Thread options map to Codex CLI sandbox and approval policies. |
| PydanticAI integration | Codex can act as a PydanticAI model or as a delegated tool. |
| Cancellation | Cancel running turns via `AbortController` and `AbortSignal`. |
| Logfire tracing | Optional spans if Logfire is installed and initialized. |
Installation extras
```bash
uv add "codex-sdk-python[pydantic]"     # Pydantic v2 schema helpers
uv add "codex-sdk-python[pydantic-ai]"  # PydanticAI integrations
uv add "codex-sdk-python[logfire]"      # Optional tracing
```
Environment variables
```bash
CODEX_API_KEY=<api-key>
OPENAI_BASE_URL=https://api.openai.com/v1
CODEX_HOME=~/.codex
```
Notes:
- `CODEX_API_KEY` is forwarded to the `codex` process; `CodexOptions.api_key` overrides the environment.
- `OPENAI_BASE_URL` is set when `CodexOptions.base_url` is provided.
- `CODEX_HOME` controls where sessions are stored and where `resume_last_thread()` looks.
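The precedence described in these notes can be sketched as a small helper. This is a hypothetical illustration of the behavior, not the SDK's actual code; `build_child_env` is an invented name.

```python
import os


def build_child_env(api_key=None, base_url=None, env=None):
    """Sketch of the precedence notes above: an explicit `env` replaces the
    inherited environment, `api_key` then overrides any CODEX_API_KEY, and
    `base_url` sets OPENAI_BASE_URL for the child process."""
    child = dict(env) if env is not None else dict(os.environ)
    if api_key is not None:
        child["CODEX_API_KEY"] = api_key  # CodexOptions.api_key wins
    if base_url is not None:
        child["OPENAI_BASE_URL"] = base_url
    return child


# An explicit env replaces inheritance; required values are still injected.
print(build_child_env(api_key="k", env={"CUSTOM_ENV": "custom"}))
```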
CodexOptions (client)
```python
from codex_sdk import Codex, CodexOptions

codex = Codex(
    CodexOptions(
        codex_path_override="/path/to/codex",
        base_url="https://api.openai.com/v1",
        api_key="<key>",
        env={"CUSTOM_ENV": "custom"},
        config_overrides={
            "analytics.enabled": True,
            "notify": ["python3", "/path/to/notify.py"],
        },
    )
)
```
- `codex_path_override`: use a custom CLI binary path.
- `base_url`: sets `OPENAI_BASE_URL` for the child process.
- `api_key`: sets `CODEX_API_KEY` for the child process.
- `env`: when set, replaces inherited environment variables; the SDK still injects required values.
ThreadOptions (per thread)
```python
from codex_sdk import ThreadOptions

ThreadOptions(
    model="gpt-5.4",
    sandbox_mode="workspace-write",
    working_directory="/path/to/project",
    skip_git_repo_check=True,
    model_reasoning_effort="medium",
    model_instructions_file="/path/to/instructions.md",
    model_personality="friendly",
    max_threads=4,
    network_access_enabled=True,
    web_search_mode="cached",
    shell_snapshot_enabled=True,
    background_terminals_enabled=True,
    apply_patch_freeform_enabled=False,
    exec_policy_enabled=True,
    remote_models_enabled=False,
    collaboration_modes_enabled=True,
    connectors_enabled=True,
    responses_websockets_enabled=True,
    request_compression_enabled=True,
    approval_policy="granular",
    approvals_reviewer="guardian_subagent",
    additional_directories=["../shared"],
    config_overrides={"analytics.enabled": True},
)
```
Important mappings to the Codex CLI:
- `sandbox_mode` maps to `--sandbox` (`read-only`, `workspace-write`, `danger-full-access`).
- `working_directory` maps to `--cd`.
- `additional_directories` maps to repeated `--add-dir`.
- `skip_git_repo_check` maps to `--skip-git-repo-check`.
- `model_reasoning_effort` maps to `--config model_reasoning_effort=...`. Typed SDK values are `none`, `minimal`, `low`, `medium`, `high`, `xhigh`. In Codex itself, the presets exposed for `--config model_reasoning_effort=...` vary by the selected model/provider. For example, current frontier coding models typically expose `low`, `medium`, `high`, `xhigh`, while `gpt-5.1-codex-mini` exposes `medium` and `high`.
- `model_instructions_file` maps to `--config model_instructions_file=...`.
- `model_personality` maps to `--config model_personality=...`.
- `max_threads` maps to `--config agents.max_threads=...`.
- `network_access_enabled` maps to `--config sandbox_workspace_write.network_access=...`.
- `web_search_mode` maps to `--config web_search="disabled|cached|live"`.
- `web_search_enabled`/`web_search_cached_enabled` map to `--config web_search=...` for legacy compatibility.
- `shell_snapshot_enabled` maps to `--config features.shell_snapshot=...`.
- `background_terminals_enabled` maps to `--config features.unified_exec=...`.
- `apply_patch_freeform_enabled` maps to `--config features.apply_patch_freeform=...`.
- `exec_policy_enabled` maps to `--config features.exec_policy=...`.
- `remote_models_enabled` maps to `--config features.remote_models=...`.
- `collaboration_modes_enabled` maps to `--config features.collaboration_modes=...`.
- `connectors_enabled` maps to `--config features.connectors=...`.
- `responses_websockets_enabled` maps to `--config features.responses_websockets=...`.
- `request_compression_enabled` maps to `--config features.enable_request_compression=...`.
- `feature_overrides` maps to `--config features.<key>=...` (explicit options take precedence).
- `approval_policy` maps to `--config approval_policy=...` (`never`, `on-request`, `on-failure`, `untrusted`, `granular`).
- `approvals_reviewer` maps to `--config approvals_reviewer=...` for app-server-backed approval routing (`user`, `guardian_subagent`).
- `config_overrides` maps to repeated `--config key=value` entries.
Note: skills_enabled is deprecated in Codex 0.80+ (skills are always enabled).
Note: Codex defaults agents.max_threads to 6; max_threads must be >= 1 if set.
Note: Codex 0.88.0+ ignores experimental_instructions_file; use
model_instructions_file instead.
Feature overrides example:
```python
ThreadOptions(
    feature_overrides={
        "web_search_cached": True,
        "powershell_utf8": True,
    }
)
```
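As a rough mental model of the option-to-flag mapping above, translating a few fields into CLI arguments looks something like this sketch (illustrative only; `thread_flags` is an invented name and the SDK's real implementation differs):

```python
def thread_flags(options):
    """Translate a few ThreadOptions-style fields into codex CLI arguments,
    following the documented mappings (illustrative sketch)."""
    args = []
    if "sandbox_mode" in options:
        args += ["--sandbox", options["sandbox_mode"]]
    if "working_directory" in options:
        args += ["--cd", options["working_directory"]]
    for directory in options.get("additional_directories", []):
        args += ["--add-dir", directory]  # repeated flag, one per directory
    for key, value in options.get("feature_overrides", {}).items():
        # feature_overrides maps to --config features.<key>=<value>
        args += ["--config", f"features.{key}={str(value).lower()}"]
    return args


print(thread_flags({
    "sandbox_mode": "workspace-write",
    "additional_directories": ["../shared"],
    "feature_overrides": {"web_search_cached": True},
}))
```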
App server (JSON-RPC)
For richer integrations (thread fork, requirements, explicit skill input), use the app-server protocol. The client handles the initialize/initialized handshake and gives you access to JSON-RPC notifications.
```python
import asyncio

from codex_sdk import AppServerClient, AppServerOptions


async def main() -> None:
    async with AppServerClient(AppServerOptions()) as app:
        thread = await app.thread_start(model="gpt-5.4", cwd=".")
        thread_id = thread["thread"]["id"]
        await app.turn_start(
            thread_id,
            [
                {"type": "text", "text": "Use $my-skill and summarize."},
                {"type": "skill", "name": "my-skill", "path": "/path/to/SKILL.md"},
            ],
        )
        async for notification in app.notifications():
            print(notification.method, notification.params)


if __name__ == "__main__":
    asyncio.run(main())
```
Text inputs may include `textElements` with `byteRange` to preserve UI annotations in history.
The SDK also accepts `text_elements`/`byte_range` and normalizes them to camelCase.
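The snake_case-to-camelCase normalization can be pictured with a plain-dict sketch. The field names mirror this README, but the helper itself is hypothetical, not SDK internals:

```python
def normalize_text_input(item):
    """Rewrite text_elements/byte_range keys to textElements/byteRange,
    mirroring the normalization described above (illustrative sketch)."""
    out = dict(item)
    if "text_elements" in out:
        elements = []
        for element in out.pop("text_elements"):
            element = dict(element)
            if "byte_range" in element:
                element["byteRange"] = element.pop("byte_range")
            elements.append(element)
        out["textElements"] = elements
    return out


print(normalize_text_input(
    {"type": "text", "text": "hi", "text_elements": [{"byte_range": [0, 2]}]}
))
```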
Codex 0.86.0+ supports optional SKILL.toml metadata alongside SKILL.md. When present,
skills_list responses include an interface object (display name, icons, brand color,
default prompt) for richer UI integrations.
App-server convenience methods
The SDK also exposes helpers for most app-server endpoints:
- Threads: `thread_start`, `thread_resume`, `thread_fork`, `thread_list`, `thread_loaded_list`, `thread_read`, `thread_archive`, `thread_unsubscribe`, `thread_unarchive`, `thread_name_set`, `thread_compact_start`, `thread_rollback`, `thread_metadata_update`
- Config: `config_read`, `config_value_write`, `config_batch_write`, `config_requirements_read`
- Skills: `skills_list`, `skills_remote_list`, `skills_remote_export`, `skills_remote_read` (alias), `skills_remote_write` (alias), `skills_config_write`
- Turns/review: `turn_start`, `turn_steer`, `turn_interrupt`, `review_start`, `turn_session`
- Models: `model_list`, `experimental_feature_list`
- Collaboration modes: `collaboration_mode_list` (experimental)
- Plugins: `plugin_list`, `plugin_read`, `plugin_install`, `plugin_uninstall`
- Filesystem (experimental): `fs_copy`, `fs_create_directory`, `fs_get_metadata`, `fs_read_directory`, `fs_read_file`, `fs_remove`, `fs_write_file`
- One-off commands: `command_exec`, `command_exec_write`, `command_exec_resize`, `command_exec_terminate`
- MCP auth/status: `mcp_server_oauth_login`, `mcp_server_refresh`, `mcp_server_status_list`
- External agent config: `external_agent_config_detect`, `external_agent_config_import`
- Windows sandbox: `windows_sandbox_setup_start`
- Account: `account_login_start`, `account_login_cancel`, `account_logout`, `account_rate_limits_read`, `account_read`
- Feedback: `feedback_upload`
These map 1:1 to the Codex app-server protocol; see codex/codex-rs/app-server/README.md
for payload shapes and event semantics.
Note: some endpoints and fields are gated behind an experimental capability; set
AppServerOptions(experimental_api_enabled=True) to opt in.
ApprovalDecisions also supports permissions_request for auto-responding to
item/permissions/requestApproval server requests during turn_session().
thread_list supports archived, sort_key, and source_kinds filters, and config_read accepts an optional cwd
to compute the effective layered config for a specific working directory.
Codex 0.115.0 also adds experimental granular approval routing (approval_policy="granular")
and guardian reviewer selection via approvals_reviewer; the app-server helpers pass those
through with the existing snake_case to camelCase normalization.
Observability (OTEL) and notify
Codex emits OTEL traces/logs/metrics when configured in ~/.codex/config.toml.
For headless runs (codex exec), set analytics.enabled=true and provide OTEL exporters
in the config file. You can also pass overrides with config_overrides.
```python
CodexOptions(
    config_overrides={
        "analytics.enabled": True,
        "notify": ["python3", "/path/to/notify.py"],
    }
)
```
See examples/notify_hook.py for a ready-to-use notify script.
TurnOptions (per turn)
```python
from codex_sdk import TurnOptions

TurnOptions(
    output_schema={"type": "object", "properties": {"ok": {"type": "boolean"}}},
    signal=controller.signal,
)
```
- `output_schema` must be a JSON object (mapping). The SDK writes it to a temp file and passes `--output-schema`.
- `signal` is an `AbortSignal` for canceling an in-flight turn.
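The temp-file handoff described above can be sketched with the standard library. This is an illustration of the documented behavior, not the SDK's internal code; `write_schema_file` is an invented name:

```python
import json
import tempfile


def write_schema_file(output_schema):
    """Serialize a schema mapping to a temp file whose path would be passed
    via --output-schema (sketch of the behavior described above)."""
    if not isinstance(output_schema, dict):
        raise TypeError("output_schema must be a JSON object (mapping)")
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".json", delete=False
    ) as handle:
        json.dump(output_schema, handle)
        return handle.name


path = write_schema_file({"type": "object"})
print(["--output-schema", path])
```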
Bundled CLI binary and platform support
The SDK resolves a platform-specific Codex CLI binary under src/codex_sdk/vendor/<target>/codex/.
It selects the target triple based on OS and CPU and ensures the binary is executable on POSIX.
Supported target triples:
- Linux: `x86_64-unknown-linux-musl`, `aarch64-unknown-linux-musl`
- macOS: `x86_64-apple-darwin`, `aarch64-apple-darwin`
- Windows: `x86_64-pc-windows-msvc`, `aarch64-pc-windows-msvc`
If you are working from source and the vendor directory is missing, run python scripts/setup_binary.py
or follow SETUP.md to download the official npm package and copy the vendor/ directory.
The SDK delegates authentication to the Codex CLI:
- Run `codex login` to create local credentials (stored under `~/.codex/` by the CLI).
- Or set `CODEX_API_KEY` (or pass `CodexOptions.api_key`) for headless use.
- `CodexOptions.base_url` sets `OPENAI_BASE_URL` to target an OpenAI-compatible endpoint.
Basic run
```python
from codex_sdk import Codex

codex = Codex()
thread = codex.start_thread()
turn = await thread.run("Summarize the repository")
print(turn.final_response)
```
Sync helpers (non-async)
```python
from pydantic import BaseModel


class RepoStatus(BaseModel):
    summary: str


turn = thread.run_sync("Summarize the repository")
parsed = thread.run_json_sync("Summarize", output_schema={"type": "object"})
validated = thread.run_pydantic_sync("Summarize", output_model=RepoStatus)
```
Note: sync helpers raise CodexError if called from an active event loop.
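The reason for that restriction can be shown with a stdlib-only sketch: a sync helper ultimately needs `asyncio.run()`, which fails inside a running loop, so the SDK detects the loop and raises. This is an illustration (`guard_sync_entry` is an invented name, and `RuntimeError` stands in for `CodexError`):

```python
import asyncio


def guard_sync_entry():
    """Detect an active event loop, mirroring the check behind the note
    above (illustrative; the SDK raises CodexError instead)."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return "ok to run synchronously"
    raise RuntimeError("sync helper called from an active event loop")


async def inside_loop():
    try:
        guard_sync_entry()
    except RuntimeError as exc:
        return str(exc)


print(guard_sync_entry())          # no loop running at module level
print(asyncio.run(inside_loop()))  # the same call fails inside a loop
```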
Streaming events
```python
result = await thread.run_streamed("Diagnose the test failure")
async for event in result.events:
    if event.type == "item.completed":
        print(event.item.type)
    elif event.type == "turn.completed":
        print(event.usage)
```
To iterate directly without the wrapper:
```python
async for event in thread.run_streamed_events("Diagnose the test failure"):
    print(event.type)
```
Hooks for streamed events
Use ThreadHooks to react to events without manually wiring an event loop.
```python
from codex_sdk import ThreadHooks

hooks = ThreadHooks(
    on_event=lambda event: print("event", event.type),
    on_item_type={
        "command_execution": lambda item: print("command", item.command),
    },
)

turn = await thread.run_with_hooks("Run the tests and summarize failures.", hooks=hooks)
print(turn.final_response)
```
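Conceptually, hook dispatch routes every event to `on_event` and completed items to a per-type callback. A plain-dict sketch of that routing (hypothetical helper, not SDK internals):

```python
def dispatch(event, on_event=None, on_item_type=None):
    """Route an event to the registered callbacks, mirroring how
    ThreadHooks-style dispatch could work (illustrative sketch)."""
    if on_event:
        on_event(event)
    if event["type"] == "item.completed" and on_item_type:
        handler = on_item_type.get(event["item"]["type"])
        if handler:
            handler(event["item"])


seen = []
dispatch(
    {"type": "item.completed", "item": {"type": "command_execution", "command": "pytest"}},
    on_event=lambda e: seen.append(e["type"]),
    on_item_type={"command_execution": lambda item: seen.append(item["command"])},
)
print(seen)  # ['item.completed', 'pytest']
```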
Event types (ThreadEvent)
- `thread.started`
- `turn.started`
- `turn.completed` (includes token usage)
- `turn.failed`
- `item.started`
- `item.updated`
- `item.completed`
- `error`
Item types (ThreadItem)
`agent_message`, `reasoning`, `command_execution`, `file_change`, `mcp_tool_call`, `collab_tool_call`, `web_search`, `todo_list`, `error`
Structured output (JSON schema)
```python
schema = {
    "type": "object",
    "properties": {
        "summary": {"type": "string"},
        "status": {"type": "string", "enum": ["ok", "action_required"]},
    },
    "required": ["summary", "status"],
    "additionalProperties": False,
}

result = await thread.run_json("Summarize repository status", output_schema=schema)
print(result.output)
```
Pydantic output validation
```python
from pydantic import BaseModel


class RepoStatus(BaseModel):
    summary: str
    status: str


result = await thread.run_pydantic("Summarize repository status", output_model=RepoStatus)
print(result.output)
```
Images + text
```python
turn = await thread.run(
    [
        {"type": "text", "text": "Describe these screenshots"},
        {"type": "local_image", "path": "./ui.png"},
        {"type": "text", "text": "Focus on failures"},
        {"type": "local_image", "path": "./diagram.jpg"},
    ]
)
```
Abort a running turn
```python
import asyncio

from codex_sdk import AbortController, TurnOptions

controller = AbortController()
options = TurnOptions(signal=controller.signal)
task = asyncio.create_task(thread.run("Long task", options))
controller.abort("user requested cancel")
await task
```
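The controller/signal pattern used above is cooperative: the running task checks the signal and stops early. A toy, self-contained version of that pattern (these `Mini*` classes are illustrative stand-ins, not the SDK's `AbortController`/`AbortSignal`):

```python
import asyncio


class MiniAbortSignal:
    """Toy stand-in for an abort signal: a flag plus an abort reason."""

    def __init__(self):
        self._flag = False
        self.reason = None

    @property
    def aborted(self):
        return self._flag


class MiniAbortController:
    """Toy controller that flips its signal when abort() is called."""

    def __init__(self):
        self.signal = MiniAbortSignal()

    def abort(self, reason=None):
        self.signal.reason = reason
        self.signal._flag = True


async def long_task(signal):
    # A cooperative task: it checks the signal each step instead of
    # running to completion unconditionally.
    for _ in range(100):
        if signal.aborted:
            return f"aborted: {signal.reason}"
        await asyncio.sleep(0)
    return "finished"


async def main():
    controller = MiniAbortController()
    task = asyncio.create_task(long_task(controller.signal))
    controller.abort("user requested cancel")
    return await task


print(asyncio.run(main()))
```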
Thread resume helpers
```python
from codex_sdk import Codex

codex = Codex()
thread = codex.resume_thread("<thread-id>")

# Or resume the most recent session (uses CODEX_HOME or ~/.codex)
last_thread = codex.resume_last_thread()
```
Turn helpers
Each Turn provides convenience filters: agent_messages(), reasoning(), commands(),
file_changes(), mcp_tool_calls(), collab_tool_calls(), web_searches(),
todo_lists(), and errors().
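These helpers are plausibly just type filters over `turn.items`. A sketch of that shape using plain dicts (illustrative; `items_of_type` is an invented name and the real helpers return typed items):

```python
def items_of_type(items, item_type):
    """Filter a turn's items by item type, mirroring the helper filters
    listed above (illustrative sketch over plain dicts)."""
    return [item for item in items if item["type"] == item_type]


items = [
    {"type": "agent_message", "text": "done"},
    {"type": "command_execution", "command": "pytest"},
    {"type": "agent_message", "text": "summary"},
]
print(len(items_of_type(items, "agent_message")))  # 2
```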
Core classes:
- `Codex`: `start_thread()`, `resume_thread()`, `resume_last_thread()`.
- `Thread`: `run()`, `run_streamed()`, `run_streamed_events()`, `run_json()`, `run_pydantic()`, plus `run_sync()`, `run_json_sync()`, `run_pydantic_sync()`.
- `Turn`: `items`, `final_response`, `usage`, and helper filters.
- `AppServerClient`, `AppServerTurnSession`, `ApprovalDecisions` for app-server integrations.
- `ThreadHooks` for event callbacks.
- `CodexOptions`, `ThreadOptions`, `TurnOptions`.
- `AbortController`, `AbortSignal`.
Exceptions:
`CodexError`, `CodexCLIError`, `CodexParseError`, `CodexAbortError`, `TurnFailedError`.
Typed events and items:
- `ThreadEvent`: union of `thread.*`, `turn.*`, `item.*`, and `error` events.
- `ThreadItem`: union of `agent_message`, `reasoning`, `command_execution`, `file_change`, `mcp_tool_call`, `collab_tool_call`, `web_search`, `todo_list`, `error`.
- `CollabToolCallItem`: typed item for `collab_tool_call` thread history entries.
- `CollabToolCallStatus`: typed status values for collaboration tool calls.
- `CollabTool`: collaboration tool metadata attached to `CollabToolCallItem`.
- `CollabAgentStatus`: agent lifecycle status attached to collaboration state updates.
- `CollabAgentState`: agent metadata/state payload emitted for collaboration items.
Example scripts under examples/:
- `basic_usage.py`: minimal `Codex` + `Thread` usage.
- `streaming_example.py`: live event streaming.
- `structured_output.py`: JSON schema output parsing.
- `thread_resume.py`: resume with `CODEX_THREAD_ID`.
- `permission_levels_example.py`: sandbox modes and working directory.
- `model_configuration_example.py`: model selection and endpoint config.
- `app_server_turn_session.py`: approval-handled turns over app-server.
- `hooks_streaming.py`: event hooks for streaming runs.
- `notify_hook.py`: notify script for CLI callbacks.
- `pydantic_ai_model_provider.py`: Codex as a PydanticAI model provider.
- `pydantic_ai_model_provider_streaming.py`: live PydanticAI text streaming over `CodexModel`.
- `pydantic_ai_handoff.py`: Codex as a PydanticAI tool.
The SDK forwards sandbox and approval controls directly to codex exec.
- `read-only`: can read files and run safe commands, no writes.
- `workspace-write`: can write inside the working directory and added directories.
- `danger-full-access`: unrestricted (use with caution).
Additional controls:
- `working_directory`: restricts where the CLI starts and what it can access.
- `additional_directories`: allowlist extra folders when using `workspace-write`.
- `approval_policy`: `never`, `on-request`, `on-failure`, `untrusted`, `granular`.
- `approvals_reviewer`: `user`, `guardian_subagent` for app-server approval routing.
- `network_access_enabled`: toggles network access in the workspace-write sandbox.
- `web_search_mode`: toggles web search (`disabled`, `cached`, `live`).
This SDK offers two ways to integrate with PydanticAI:
1) Codex as a PydanticAI model provider
Use CodexModel to delegate tool-call planning and text generation to Codex, while PydanticAI executes tools and validates outputs.
```python
from pydantic_ai import Agent, Tool

from codex_sdk.integrations.pydantic_ai_model import CodexModel
from codex_sdk.options import ThreadOptions


def add(a: int, b: int) -> int:
    return a + b


model = CodexModel(
    thread_options=ThreadOptions(
        model="gpt-5.4",
        sandbox_mode="read-only",
        skip_git_repo_check=True,
    )
)
agent = Agent(model, tools=[Tool(add)])
result = agent.run_sync("What's 19 + 23? Use the add tool.")
print(result.output)
```
For live text streaming in a terminal or web UI:
```python
from pydantic_ai import Agent

from codex_sdk.integrations.pydantic_ai_model import CodexModel

agent = Agent(CodexModel(), output_type=str)
async with agent.run_stream("Explain why the sky is blue.") as result:
    async for delta in result.stream_text(delta=True, debounce_by=None):
        print(delta, end="", flush=True)
```
How it works:
CodexModelbuilds a JSON schema envelope withtool_callsandfinal.- Codex emits tool calls as JSON strings; PydanticAI runs them.
- If
allow_text_outputis true, Codex can place final text infinal. - This SDK targets the current PydanticAI release line (
>=1.68.0,<2). Agent.run_stream(),Agent.run_stream_events(), andAgent.run_stream_sync()work withCodexModel.- Text deltas are forwarded live from agent-message updates when Codex emits them, including
envelope-backed
finaltext. Tool calls are forwarded as soon as Codex produces a valid envelope update, andstreamed.get()is reconciled to the canonical final turn result.
Safety defaults (you can override with your own ThreadOptions):
- `sandbox_mode="read-only"`
- `skip_git_repo_check=True`
- `approval_policy="never"`
- `web_search_mode="disabled"`
- `network_access_enabled=False`
2) Codex as a PydanticAI tool (handoff)
Register Codex as a tool and let a PydanticAI agent decide when to delegate tasks.
```python
from pydantic_ai import Agent

from codex_sdk import ThreadOptions
from codex_sdk.integrations.pydantic_ai import codex_handoff_tool

tool = codex_handoff_tool(
    thread_options=ThreadOptions(
        sandbox_mode="workspace-write",
        skip_git_repo_check=True,
        working_directory=".",
    ),
    include_items=True,
    items_limit=20,
)

agent = Agent(
    "openai:gpt-5.4",
    tools=[tool],
    system_prompt=(
        "You can delegate implementation details to the codex_handoff tool. "
        "Use it for repository-aware edits, command execution, or patches."
    ),
)

result = await agent.run(
    "Use the codex_handoff tool to scan this repository and suggest one small DX improvement."
)
print(result.output)
```
Handoff options:
- `persist_thread`: keep a single Codex thread across tool calls (default true).
- `include_items`: include a summarized item list in tool output.
- `items_limit`: cap the number of items returned.
- `include_usage`: include token usage.
- `timeout_seconds`: wrap the run in `asyncio.wait_for`.
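The `timeout_seconds` behavior boils down to standard `asyncio.wait_for` semantics: a slow delegated run raises `TimeoutError` instead of blocking the agent. A self-contained sketch of that wrapping (illustrative helper, not the SDK's code):

```python
import asyncio


async def run_with_timeout(coro_factory, timeout_seconds):
    """Wrap a coroutine in asyncio.wait_for, mirroring the timeout_seconds
    option described above (illustrative sketch)."""
    return await asyncio.wait_for(coro_factory(), timeout=timeout_seconds)


async def slow():
    await asyncio.sleep(10)
    return "done"


async def main():
    try:
        await run_with_timeout(slow, timeout_seconds=0.01)
    except asyncio.TimeoutError:
        return "timed out"


print(asyncio.run(main()))  # timed out
```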
If logfire is installed and initialized, the SDK emits spans:
- `codex_sdk.exec`
- `codex_sdk.thread.turn`
- `codex_sdk.pydantic_ai.model_request`
- `codex_sdk.pydantic_ai.handoff`
If Logfire is missing or not initialized, the span context manager is a no-op.
Transport split
The SDK still ships two separate transports. The Thread API runs through
codex exec, while app-server-backed integrations use codex app-server
directly and do not fall back to codex exec --experimental-json.
```mermaid
flowchart LR
    subgraph App[Your Python App]
        U[User Code]
        T[Thread API]
        M[CodexModel / AppServerClient]
    end
    subgraph ThreadTransport[Thread transport]
        C[Codex / Thread]
        E[CodexExec]
        P[Event Parser]
        X["codex exec --experimental-json"]
    end
    subgraph AppServerTransport[App-server transport]
        A[App-server client]
        S["codex app-server"]
    end
    FS[(Filesystem)]
    NET[(Network)]
    U --> T --> C --> E --> X
    U --> M --> A --> S
    X -->|JSONL events| P --> T
    S -->|JSON-RPC notifications| A --> M
    X --> FS
    X --> NET
    S --> FS
    S --> NET
```
Streaming event lifecycle
```mermaid
sequenceDiagram
    participant Dev as Developer
    participant Thread as Thread.run_streamed()
    participant Exec as CodexExec
    participant CLI as codex exec
    Dev->>Thread: run_streamed(prompt)
    Thread->>Exec: spawn CLI with flags
    Exec->>CLI: stdin prompt
    CLI-->>Exec: JSONL line
    Exec-->>Thread: raw line
    Thread-->>Dev: ThreadEvent
    CLI-->>Exec: JSONL line
    Exec-->>Thread: raw line
    Thread-->>Dev: ThreadEvent
    CLI-->>Exec: exit code
    Exec-->>Thread: completion
    Thread-->>Dev: turn.completed / turn.failed
```
PydanticAI model-provider loop
```mermaid
sequenceDiagram
    participant Agent as PydanticAI Agent
    participant Model as CodexModel
    participant App as AppServerClient
    participant CLI as codex app-server
    participant Tools as User Tools
    Agent->>Model: request(messages, tools)
    alt no cached thread id
        Model->>App: thread_start(...)
        App->>CLI: thread/start over JSON-RPC
        CLI-->>App: thread metadata
        App-->>Model: thread.id
    else cached thread id
        Model-->>Model: reuse cached thread id
    end
    Model->>App: turn_session(input, approvals)
    App->>CLI: turn/start over JSON-RPC
    CLI-->>App: item/updated + turn/completed
    App-->>Model: notifications + final turn
    alt tool calls emitted
        Model-->>Agent: ToolCallPart(s) / tool deltas
        Agent->>Tools: execute tool(s)
        Tools-->>Agent: results
    else final text allowed
        Model-->>Agent: TextPart(final) / text deltas
    end
```
PydanticAI handoff tool
```mermaid
flowchart LR
    Agent[PydanticAI Agent] --> Tool[codex_handoff_tool]
    Tool --> SDK[Codex SDK Thread]
    SDK --> CLI[Codex CLI]
    CLI --> SDK
    SDK --> Tool
    Tool --> Agent
```
This repo uses unit tests with mocked CLI processes to keep the test suite fast and deterministic.
Test focus areas:
- `tests/test_exec.py`: CLI invocation, environment handling, config flags, abort behavior.
- `tests/test_thread.py`: parsing, streaming, JSON schema, Pydantic validation, input normalization.
- `tests/test_codex.py`: resume helpers and option wiring.
- `tests/test_abort.py`: abort signal semantics.
- `tests/test_telemetry.py`: Logfire span behavior.
- `tests/test_pydantic_ai_*`: PydanticAI model provider and handoff integration.
Run tests
```bash
uv sync
uv run pytest
```
Note: PydanticAI tests are skipped unless pydantic-ai is installed.
Coverage
uv run pytest --cov=codex_sdk
Coverage is configured in pyproject.toml with fail_under = 95.
Upgrade checklist
For SDK release updates, follow UPGRADE_CHECKLIST.md.
Run CI checks before push
Install the repo-managed pre-push hook in your local clone:
python scripts/install_git_hooks.py
That hook runs the same local lint, type, vendor verification, and test stack before allowing a push:
python scripts/run_ci_checks.py
If you need to bypass it for a single push, set SKIP_PRE_PUSH_CI=1.
Format and lint
```bash
uv run black src tests
uv run isort src tests
uv run flake8 src tests
```
Type checking
uv run mypy src
This repository includes GitHub Actions workflows under .github/workflows/.
The CI pipeline runs linting, type checks, binary setup, and
pytest --cov=codex_sdk. For local parity, the checked-in pre-push hook runs
the same lint/type/test gates plus a non-mutating vendor verification step after
you install it with python scripts/install_git_hooks.py.
Release automation creates GitHub releases from CHANGELOG_SDK.md when you push a
vX.Y.Z tag or manually dispatch the workflow, then the publish workflow uploads
the package to PyPI on release publish.
- Sessions are stored under `~/.codex/sessions` (or `CODEX_HOME`).
- Use `resume_thread(thread_id)` to continue a known session.
- Use `resume_last_thread()` to pick the most recent session automatically.
- Clean up stale sessions by removing old `rollout-*.jsonl` files if needed.
- Codex CLI exited non-zero: catch `CodexCLIError` and inspect `.stderr`.
- Unknown event type: `CodexParseError` means the CLI emitted an unexpected JSONL entry.
- Turn failed: `TurnFailedError` indicates a `turn.failed` event.
- Run canceled: `CodexAbortError` indicates a triggered `AbortSignal`.
- No thread id: ensure a `thread.started` event is emitted before resuming.
- Prefer `read-only` or `workspace-write` sandboxes in production.
- Set `working_directory` to a repo root and keep `skip_git_repo_check=False` where possible.
- Configure `approval_policy` for any tool execution requiring user consent.
- Disable `web_search_mode` and `network_access_enabled` unless explicitly needed.
Apache-2.0
Project details
Download files
File details
Details for the file codex_sdk_python-0.115.0.tar.gz.
File metadata
- Download URL: codex_sdk_python-0.115.0.tar.gz
- Upload date:
- Size: 67.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.10.11 (Trusted Publishing from CI, Ubuntu 24.04)
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `fc04c8d6d17a5b9c99b7898ba504a635a3bd827c454ce4516f28c3270d0ccd8b` |
| MD5 | `d5182ddf93e6ee5c5547f4cf72806640` |
| BLAKE2b-256 | `e56ca71de7b61f9da34331f0ceebf92da570541e610ee3cdf5af2e0f2caf66c3` |
File details
Details for the file codex_sdk_python-0.115.0-py3-none-any.whl.
File metadata
- Download URL: codex_sdk_python-0.115.0-py3-none-any.whl
- Upload date:
- Size: 64.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.10.11 (Trusted Publishing from CI, Ubuntu 24.04)
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `39c7b5d3650a39786f1c67c32b19349b24d55ff1de57259a014b8cf173b49b46` |
| MD5 | `b276a49be3a20df067c6e6e036965cd0` |
| BLAKE2b-256 | `a43e4214f7ddf9c86ae2be7d5eae3a1c3d4c2189e570646b1e7aa7905434154f` |