coding-cli-runtime
Reusable CLI runtime primitives for provider-backed automation workflows
A Python library for orchestrating LLM coding agent CLIs — Claude Code, Codex, Gemini CLI, and GitHub Copilot.
These CLIs each have different invocation patterns, output formats, error shapes, and timeout behaviors. This library normalizes all of that behind a common `CliRunRequest` → `CliRunResult` contract, so your automation code doesn't need provider-specific subprocess handling.
What it does (and why not just subprocess.run):
- Unified request/result types across all four CLIs
- Timeout enforcement with graceful process termination
- Provider-aware failure classification (retryable vs fatal)
- Built-in model catalog with defaults, reasoning levels, and capabilities
- Interactive session management for long-running generation tasks
- Zero runtime dependencies
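To make "timeout enforcement with graceful process termination" concrete, here is a stdlib-only sketch of that technique (an illustration, not the library's implementation): escalate from SIGTERM to SIGKILL only if the process ignores the polite signal.

```python
import asyncio

async def run_with_timeout(cmd: list[str], timeout: float) -> int:
    """Run a command; on timeout, terminate gracefully, then kill."""
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.DEVNULL,
        stderr=asyncio.subprocess.DEVNULL,
    )
    try:
        return await asyncio.wait_for(proc.wait(), timeout=timeout)
    except asyncio.TimeoutError:
        proc.terminate()  # polite SIGTERM first
        try:
            await asyncio.wait_for(proc.wait(), timeout=5)
        except asyncio.TimeoutError:
            proc.kill()   # hard SIGKILL if it ignores SIGTERM
            await proc.wait()
        return proc.returncode  # negative on POSIX when killed by a signal
```

In the library, a timed-out run surfaces through the `timed_out` error code; this sketch shows only the termination ladder.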
Installation
```shell
pip install coding-cli-runtime
```
Requires Python 3.10+.
Examples
Execute a provider CLI
```python
import asyncio
from pathlib import Path

from coding_cli_runtime import CliRunRequest, run_cli_command

request = CliRunRequest(
    cmd_parts=("codex", "--model", "o4-mini", "--quiet", "exec", "fix the tests"),
    cwd=Path("/tmp/my-project"),
    timeout_seconds=120,
)
result = asyncio.run(run_cli_command(request))
print(result.returncode)        # 0
print(result.error_code)        # "none"
print(result.duration_seconds)  # 14.2
print(result.stdout_text[:200])
```
Swap `codex` for `claude`, `gemini`, or `copilot` — the request/result shape stays the same. A synchronous variant, `run_cli_command_sync`, is also available.
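A sync variant of an async runner is typically just a blocking wrapper that drives the coroutine to completion. A generic sketch of that pattern (none of this is the library's source):

```python
import asyncio

async def run_cli(cmd: tuple[str, ...]) -> int:
    # Async path: spawn the CLI and wait for its exit code.
    proc = await asyncio.create_subprocess_exec(*cmd)
    return await proc.wait()

def run_cli_sync(cmd: tuple[str, ...]) -> int:
    # Blocking convenience wrapper for callers without an event loop.
    return asyncio.run(run_cli(cmd))
```

This pattern is one way the request/result shape can stay identical across both entry points.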
Pick a model from the provider catalog
```python
from coding_cli_runtime import get_provider_spec

codex = get_provider_spec("codex")
print(codex.default_model)  # "gpt-5.3-codex"
print(codex.model_source)   # "codex_cli_cache", "override", or "code"
for model in codex.models:
    print(f"  {model.name}: {model.description}")
```
The catalog covers all four providers — each with model names, reasoning levels, default settings, and visibility flags.
Model lists are resolved with a three-tier fallback:

- User override — drop a JSON file at `~/.config/coding-cli-runtime/providers/<provider>.json` to use your own model list immediately, without waiting for a package update.
- Live CLI cache — for Codex, the library reads `~/.codex/models_cache.json` (auto-refreshed by the Codex CLI) when present. Other providers fall through because their CLIs don't expose a machine-readable model list.
- Hardcoded fallback — the model list shipped with the package.
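The three tiers can be sketched as a first-match resolver. Everything below is illustrative: the cache file's JSON key and the hardcoded list are assumptions, not the library's internals.

```python
import json
import os
from pathlib import Path

# Stand-in for the model list shipped with the package (tier 3).
HARDCODED_MODELS = {"codex": ["gpt-5.3-codex"]}

def resolve_models(provider: str) -> tuple[str, list]:
    """Return (model_source, models) from the first tier that applies."""
    config_dir = Path(os.environ.get(
        "CODING_CLI_RUNTIME_CONFIG_DIR",
        str(Path.home() / ".config" / "coding-cli-runtime"),
    ))
    override = config_dir / "providers" / f"{provider}.json"
    if override.is_file():  # tier 1: user override
        return "override", json.loads(override.read_text())["models"]
    cache = Path.home() / ".codex" / "models_cache.json"
    if provider == "codex" and cache.is_file():  # tier 2: live Codex CLI cache
        return "codex_cli_cache", json.loads(cache.read_text())["models"]  # key name assumed
    return "code", HARDCODED_MODELS[provider]  # tier 3: packaged fallback
```

The source strings mirror the `model_source` values shown in the catalog example above.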
Override file format:
```json
{
  "default_model": "claude-sonnet-4-7",
  "models": [
    "claude-sonnet-4-7",
    {
      "name": "claude-opus-5",
      "description": "Latest opus model",
      "controls": [
        { "name": "effort", "kind": "choice", "choices": ["low", "high"], "default": "low" }
      ]
    }
  ]
}
```
Set `CODING_CLI_RUNTIME_CONFIG_DIR` to change the config directory (default: `~/.config/coding-cli-runtime`).
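Because the "models" array mixes bare strings and full objects, any consumer of this file has to normalize both forms. A small sketch of that normalization (the field defaults are assumptions, not the library's schema):

```python
import json

def normalize_model(entry) -> dict:
    """Accept either a bare model name or a full model object."""
    if isinstance(entry, str):
        return {"name": entry, "description": "", "controls": []}
    # Fill in defaults, letting explicit fields from the entry win.
    return {"description": "", "controls": [], **entry}

override = json.loads("""
{
  "default_model": "claude-sonnet-4-7",
  "models": [
    "claude-sonnet-4-7",
    {"name": "claude-opus-5", "description": "Latest opus model"}
  ]
}
""")
models = [normalize_model(m) for m in override["models"]]
```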
Decide whether to retry a failed run
```python
from coding_cli_runtime import classify_provider_failure

classification = classify_provider_failure(
    provider="gemini",
    stderr_text="429 Resource exhausted: rate limit exceeded",
)
if classification.retryable:
    print(f"Retryable ({classification.category}) — will retry")
else:
    print(f"Fatal ({classification.category}) — giving up")
```
Works for all four providers. Recognizes auth failures, rate limits, network transients, and other provider-specific error patterns.
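This kind of classification boils down to matching stderr against known error patterns. A sketch with illustrative patterns and categories (not the library's actual rule set):

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Classification:
    category: str
    retryable: bool

# Ordered: first matching pattern wins. Patterns are illustrative only.
_RULES = [
    (re.compile(r"\b429\b|rate.?limit", re.I), Classification("rate_limit", True)),
    (re.compile(r"unauthorized|invalid api key|not logged in", re.I), Classification("auth", False)),
    (re.compile(r"timed? out|connection (reset|refused)", re.I), Classification("network", True)),
]

def classify(stderr_text: str) -> Classification:
    for pattern, result in _RULES:
        if pattern.search(stderr_text):
            return result
    return Classification("unknown", False)  # safest default: don't retry
```

Ordering matters in rule sets like this: a message can mention both a timeout and an auth failure, and the first match decides the outcome.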
Key types
| Type | Purpose |
|---|---|
| `CliRunRequest` | Command spec: cmd, cwd, env, timeout, stream paths |
| `CliRunResult` | Result: returncode, stdout/stderr, duration, error code |
| `ErrorCode` | `none` · `spawn_failed` · `timed_out` · `non_zero_exit` |
| `ProviderSpec` | Provider catalog entry with models, controls, defaults |
| `FailureClassification` | Classified error with retryable flag and category |
`run_interactive_session()` manages long-running CLI processes with timeout enforcement, process-group cleanup, transcript mirroring, and automatic retries. Only `cmd_parts`, `cwd`, `stdin_text`, and `logger` are required — observability labels like `job_name` and `phase_tag` default to sensible values, so external callers don't need to invent them.
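Process-group cleanup matters because a CLI can spawn helper processes that would survive a plain `terminate()` of the parent. A POSIX-only sketch of the technique (not the library's code):

```python
import os
import signal
import subprocess

def run_in_group(cmd: list[str], timeout: float) -> int:
    # start_new_session=True puts the child in its own process group,
    # so a timeout can signal the CLI *and* any subprocesses it spawned.
    proc = subprocess.Popen(cmd, start_new_session=True)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)  # whole group, not just the leader
        return proc.wait()
```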
Prerequisites
This package does not bundle any CLI binaries or credentials. You must install and authenticate the relevant provider CLI yourself before using the execution helpers.
Status
Pre-1.0. API may change between minor versions.
License
MIT
File details
Details for the file coding_cli_runtime-0.1.0.tar.gz.

File metadata
- Download URL: coding_cli_runtime-0.1.0.tar.gz
- Size: 35.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2a61c529029c68182d4072494bba20d571e06a5c11e71f197f51772bf917c0bb |
| MD5 | c9980f4bd66e11d1c0ef02fc74d322f3 |
| BLAKE2b-256 | 85b23c5abc8ab9f58cafbc313fd043b58f6df5617ba131ffdc7ec91063a209f4 |
Provenance
The following attestation bundles were made for coding_cli_runtime-0.1.0.tar.gz:

Publisher: publish-coding-cli-runtime.yml on pj-ms/llm-eval
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: coding_cli_runtime-0.1.0.tar.gz
- Subject digest: 2a61c529029c68182d4072494bba20d571e06a5c11e71f197f51772bf917c0bb
- Sigstore transparency entry: 1249799291
- Permalink: pj-ms/llm-eval@f28b59c4281b88a00353f799e9ec762a82222243
- Branch / Tag: refs/heads/main
- Owner: https://github.com/pj-ms
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-coding-cli-runtime.yml@f28b59c4281b88a00353f799e9ec762a82222243
- Trigger Event: workflow_dispatch
File details
Details for the file coding_cli_runtime-0.1.0-py3-none-any.whl.

File metadata
- Download URL: coding_cli_runtime-0.1.0-py3-none-any.whl
- Size: 31.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4dabe3eca2d0fcb8f9dab343fcdb63b89c14cd3c25a69b86a744ea65f31937f5 |
| MD5 | 58ba875263fba6685c71b7721c1f61be |
| BLAKE2b-256 | 6de36bb091deb468ddcdce49f19f9ca1b3ab92d0a94328c2da8d9768e189abae |
Provenance
The following attestation bundles were made for coding_cli_runtime-0.1.0-py3-none-any.whl:

Publisher: publish-coding-cli-runtime.yml on pj-ms/llm-eval
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: coding_cli_runtime-0.1.0-py3-none-any.whl
- Subject digest: 4dabe3eca2d0fcb8f9dab343fcdb63b89c14cd3c25a69b86a744ea65f31937f5
- Sigstore transparency entry: 1249799407
- Permalink: pj-ms/llm-eval@f28b59c4281b88a00353f799e9ec762a82222243
- Branch / Tag: refs/heads/main
- Owner: https://github.com/pj-ms
- Access: private
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-coding-cli-runtime.yml@f28b59c4281b88a00353f799e9ec762a82222243
- Trigger Event: workflow_dispatch