
Local backend bridge for Claude Code and Codex.


CCL — Claude Codex Local

License: MIT · Python 3.10+ · CI · Code style: ruff

Hit your limit? Need privacy? Just swap the model.

One alias. Claude Code or Codex on a local model. Skills, agents, MCP servers — all intact.

Quota hit mid-session? cc keeps you going on a local model, no context lost. Code that can't leave your machine? Everything runs offline after model download. Don't want to rewire your workflow? Your ~/.claude, skills, agents, and MCP servers carry over untouched.

Get Started → · Landing page →


Features

| Feature | What you get |
| --- | --- |
| Ollama first-class | ollama launch — no duplicated config, no custom Modelfiles |
| Config untouched | All skills, statusline, agents, plugins, and MCP servers carry over |
| Smart model selection | llmfit analyses your hardware and picks the best quantization that fits (optional — wizard prompts to install only when needed) |
| Resume on failure | Wizard persists progress — --resume picks up from the last completed step |
| Idempotent aliases | Re-running the wizard replaces the existing alias block, never appends |
| Cloud fallback | Run claude / codex directly (no prefix) to switch back instantly |
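The idempotent-alias behavior above can be sketched as a strip-then-append rewrite of the rc file. This is a hedged sketch using marker lines of our own invention; CCL's real markers, helper paths, and implementation may differ.

```shell
# Sketch of idempotent alias installation: strip any previous fenced block,
# then append a fresh one, so reruns replace rather than accumulate.
install_aliases() {
  local rc="$1"
  local begin="# >>> claude-codex-local >>>"
  local end="# <<< claude-codex-local <<<"
  local tmp
  tmp="$(mktemp)"
  # Copy everything except the old fenced block (if any) to a temp file.
  awk -v b="$begin" -v e="$end" \
    '$0 == b { skip = 1 } !skip { print } $0 == e { skip = 0 }' "$rc" > "$tmp"
  # Rewrite the rc file with a single fresh block at the end.
  {
    cat "$tmp"
    echo "$begin"
    echo 'alias cc="$HOME/.claude-codex-local/bin/cc"'
    echo 'alias cx="$HOME/.claude-codex-local/bin/cx"'
    echo "$end"
  } > "$rc"
  rm -f "$tmp"
}
```

Running this twice against the same file leaves exactly one block, which is the property the wizard guarantees.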

Quick Start

Install from PyPI (recommended)

pip install claude-codex-local

Or with uv:

uv tool install claude-codex-local

Then run the setup wizard:

ccl

One-command install (no clone required)

bash <(curl -sSL https://raw.githubusercontent.com/luongnv89/claude-codex-local/main/install.sh)

Or with wget:

bash <(wget -qO- https://raw.githubusercontent.com/luongnv89/claude-codex-local/main/install.sh)

Use bash <(...), not curl … | bash. The wizard is interactive and needs a real TTY — piping steals stdin.
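A minimal sketch of the TTY guard an installer like this might use; this is hypothetical, not install.sh's actual code:

```shell
# Hypothetical TTY guard, illustrating why curl ... | bash breaks the wizard:
# piping replaces stdin, so interactive prompts have nothing to read from.
require_tty() {
  if [ -t 0 ]; then
    return 0
  fi
  echo "error: stdin is not a TTY; use bash <(curl ...) instead of curl | bash" >&2
  return 1
}
```

With bash <(...), the script arrives as a file descriptor argument while stdin stays attached to your terminal, so the guard passes.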

Override defaults with env vars:

CCL_REF=v0.8.0 CCL_INSTALL_DIR=~/tools/claude-codex-local \
  bash <(curl -sSL https://raw.githubusercontent.com/luongnv89/claude-codex-local/main/install.sh)

Install from a clone

git clone https://github.com/luongnv89/claude-codex-local.git
cd claude-codex-local
python3 -m venv .venv && source .venv/bin/activate
pip install -e .
ccl

After setup

Reload your shell so the alias is available:

source ~/.zshrc   # or source ~/.bashrc

Then run:

cc        # Claude Code → local model
cx        # Codex CLI → local model

Wizard Steps

graph TD
    A[1. Discover environment] --> B[2. Install missing components]
    B --> C[3. Pick harness + engine]
    C --> D[4. Pick model]
    D --> E[5. Smoke test engine]
    E --> F[6. Wire harness]
    F --> G[7. Install helper + aliases]
    G --> H[8. Verify launch end-to-end]
    H --> I[9. Generate guide.md]

See guide.example.md for the personalized daily-use guide the wizard generates.


Usage

ccl                                             # run the interactive first-run wizard
ccl setup --harness claude --engine ollama      # skip the prefs picker
ccl setup --non-interactive                     # CI-friendly install
ccl setup --resume                              # resume after a failure
ccl find-model                                  # standalone model recommendation
ccl doctor                                      # wizard state + presence check
ccl --version                                   # print version and exit
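The resume behavior behind ccl setup --resume boils down to: record the index of the last completed step, then skip past it on the next run. A sketch under that assumption only; the real wizard persists richer JSON state, as described under Architecture details.

```shell
# run_steps: execute steps in order, skipping ones already recorded as done.
# On failure, stop; the state file still points at the last success, so a
# rerun picks up exactly where the previous attempt left off.
run_steps() {
  local state="$1"; shift
  local done_steps=0
  if [ -s "$state" ]; then
    done_steps="$(cat "$state")"
  fi
  local i=0
  local step
  for step in "$@"; do
    i=$((i + 1))
    if [ "$i" -le "$done_steps" ]; then
      continue  # already completed in a previous run
    fi
    "$step" || return 1   # abort; state still records the last success
    echo "$i" > "$state"
  done
}
```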

Advanced / debug (no user binary — run as a Python module):

python -m claude_codex_local.core profile      # full hardware profile as JSON
python -m claude_codex_local.core recommend    # llmfit-only model recommendation
python -m claude_codex_local.core adapters     # list all engine adapters

Prerequisites

  • macOS or Linux with zsh or bash
  • Python 3.10+
  • At least one harness: Claude Code or Codex CLI
  • At least one engine: Ollama (recommended), LM Studio, vLLM, or llama.cpp
  • llmfit on PATH (optional — for automatic model selection)

Proven Paths

| Harness | Engine | Model | Status |
| --- | --- | --- | --- |
| Claude Code | Ollama | gemma4:26b | Verified end-to-end |
| Codex CLI | Ollama | gemma4:26b | Verified |
| Codex CLI | Ollama | qwen2.5-coder:0.5b | Verified |
| Claude Code | LM Studio | Qwen3 family | Blocked — 400 thinking.type; wizard warns and recommends alternatives |
| Any | llama.cpp | any | Inline-env code path exists, no live proof yet |
| Any | vLLM | any | New in 0.8.0 — adapter shipped with tests |

Rollback

# 1. Delete the fenced alias block from ~/.zshrc (everything between the marker lines)
# 2. Remove the local state directory:
rm -rf .claude-codex-local

That's it. Your ~/.claude and ~/.codex are unchanged.


Architecture details

Three layers

  1. Machine profile + model recommendation (claude_codex_local/core.py) — dumps a JSON snapshot of installed harnesses/engines/llmfit/disk, runs llmfit for ranked model recommendations, and provides a doctor command for pretty-printing wizard state.

  2. Interactive wizard (claude_codex_local/wizard.py) — 9 steps from discovery to ready-to-use daily alias. Persists progress in .claude-codex-local/wizard-state.json so --resume picks up after a failure.

  3. Helper scripts + shell aliases. The helper .claude-codex-local/bin/cc (or cx) is a short bash wrapper. For Ollama it runs ollama launch claude|codex --model <tag>. For LM Studio / llama.cpp it sets inline env vars and execs the real harness. A fenced block in ~/.zshrc / ~/.bashrc declares the aliases.
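For the Ollama path, the generated helper can be as small as a one-line exec wrapper. Below is a hypothetical generator for such a wrapper; the function name is ours, and the file CCL actually writes may differ in detail.

```shell
# emit_cc_wrapper: print a minimal bin/cc-style wrapper for a given model tag.
emit_cc_wrapper() {
  local model="$1"
  cat <<EOF
#!/usr/bin/env bash
# Launch Claude Code against the local Ollama daemon with a fixed model tag.
exec ollama launch claude --model "$model" "\$@"
EOF
}
```

Using exec keeps the wrapper out of the process tree: the claude process started by ollama launch replaces the shell, so signals and exit codes pass through untouched.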

Why ollama launch

ollama launch claude --model <tag> is an official Ollama subcommand that sets the right env vars internally and execs the user's real claude binary against the local daemon — using ~/.claude as-is.

This means:

  • No duplicated ~/.claude directory
  • No custom Modelfile or ollama create
  • No ANTHROPIC_CUSTOM_MODEL_OPTION to manage manually
  • cc just works

Claude Code → LM Studio / llama.cpp env vars

| Env var | LM Studio | llama.cpp |
| --- | --- | --- |
| ANTHROPIC_BASE_URL | http://localhost:1234 | http://localhost:8001 |
| ANTHROPIC_API_KEY | lmstudio | sk-local |
| ANTHROPIC_CUSTOM_MODEL_OPTION | <tag> | <tag> |
| ANTHROPIC_CUSTOM_MODEL_OPTION_NAME | Local (lmstudio) <tag> | Local (llamacpp) <tag> |
| CLAUDE_CODE_ATTRIBUTION_HEADER | "0" | "0" |
| CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC | "1" | "1" |
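The LM Studio column of the table can be exported before exec'ing the real claude binary. A sketch only; the function name is ours, and CCL sets these inline in its generated wrapper rather than via a helper like this.

```shell
# set_lmstudio_env: export the Claude Code / LM Studio variables from the
# table above for a given model tag (values copied from the LM Studio column).
set_lmstudio_env() {
  local tag="$1"
  export ANTHROPIC_BASE_URL="http://localhost:1234"
  export ANTHROPIC_API_KEY="lmstudio"
  export ANTHROPIC_CUSTOM_MODEL_OPTION="$tag"
  export ANTHROPIC_CUSTOM_MODEL_OPTION_NAME="Local (lmstudio) $tag"
  export CLAUDE_CODE_ATTRIBUTION_HEADER="0"
  export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC="1"
}
```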

Codex CLI → Ollama

ollama launch codex --model <tag> -- --oss --local-provider=ollama

The --oss --local-provider=ollama flags are required after -- because Codex otherwise tries to route through the ChatGPT account and rejects non-OpenAI model names.

Qwen3 + Claude Code

Claude Code sends a thinking payload that Qwen3 reasoning models interpret as an unterminated <think> block. The wizard detects Qwen3 model names at pick time and recommends Gemma 3 or Qwen 2.5 Coder instead.

Project structure
.
├── claude_codex_local/
│   ├── __init__.py             # Package metadata + __version__
│   ├── wizard.py               # Interactive setup wizard + `ccl` CLI
│   └── core.py                 # Machine profile, engine adapters, llmfit bindings
├── scripts/
│   └── e2e_smoke.sh            # End-to-end smoke test
├── docs/
│   ├── poc-wizard.md           # 9-step wizard architecture
│   ├── poc-architecture.md     # System design overview
│   ├── poc-bootstrap.md        # Bootstrap / install flow
│   └── poc-proof.md            # Design rationale
├── tests/                      # pytest test suite
├── install.sh                  # One-command remote installer
└── pyproject.toml              # Project metadata and tool config
Tech stack

| Layer | Tool |
| --- | --- |
| Language | Python 3.10+ |
| UI / prompts | questionary, rich |
| Linting | ruff |
| Type checking | mypy |
| Testing | pytest + pytest-cov |
| Security | bandit, detect-secrets |
| Pre-commit | pre-commit |

Local state

Everything written by the bridge goes under .claude-codex-local/. Override with CLAUDE_CODEX_LOCAL_STATE_DIR.

Contributing

Contributions are welcome. Read CONTRIBUTING.md before opening a PR.

For security issues, see SECURITY.md.


MIT — © 2026 Luong NGUYEN



Download files

Download the file for your platform.

Source Distribution

claude_codex_local-0.8.0.tar.gz (88.5 kB)

Uploaded Source

Built Distribution


claude_codex_local-0.8.0-py3-none-any.whl (49.6 kB)

Uploaded Python 3

File details

Details for the file claude_codex_local-0.8.0.tar.gz.

File metadata

  • Download URL: claude_codex_local-0.8.0.tar.gz
  • Upload date:
  • Size: 88.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.10

File hashes

Hashes for claude_codex_local-0.8.0.tar.gz
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | e93bd0b0ad5b07bdabea4f3132171e29427ee89fa1632e290e2aec29ab0a0bbb |
| MD5 | 3a6cf1d52ae2d991af6d6269cd7316ad |
| BLAKE2b-256 | 3a5b002a84ff89e4ad8f4ec415b5b404276c5163fdb05fcc1e52941c206c53b4 |


File details

Details for the file claude_codex_local-0.8.0-py3-none-any.whl.

File metadata

File hashes

Hashes for claude_codex_local-0.8.0-py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 4d62bcd67a65f573b433f016efa8cbdfcdce7765bef56f60297896b349964f20 |
| MD5 | 83184ff87883fa4cda635061cb9beb65 |
| BLAKE2b-256 | ac4c785d4c85e97e91d87840c5ab7fddb64bb1dba4ddcac73591a472dc81fe65 |

