One-command launcher for running OpenCode with a local llama.cpp model.

OpenCode llama.cpp Launcher

Launch OpenCode with a local model served by llama.cpp. The launcher starts llama-server, wires OpenCode to it, and cleans up when your session ends.

OpenCode llama.cpp Launcher demo

Requirements

  • OpenCode
  • llama.cpp's llama-server
  • A local GGUF model, such as Qwen, DeepSeek, or Gemma

The launcher finds llama-server on PATH, or you can set llama_server in your config.
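That resolution order can be pictured as a two-step check, sketched here in Python (illustrative only; resolve_llama_server and its signature are not part of the package):

```python
import shutil

def resolve_llama_server(configured=None):
    """Illustrative lookup: an explicit llama_server config value wins,
    otherwise fall back to searching PATH. Not the launcher's actual code."""
    if configured:
        return configured
    return shutil.which("llama-server")
```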

Install OpenCode using its GitHub installation instructions. Install llama.cpp using its installation guide.

Install

For most users, install with pipx:

pipx install opencode-llama-cpp-launcher

Or install with pip:

python -m pip install opencode-llama-cpp-launcher

Check that the required external binaries are available:

opencode-llama doctor

Configure

Create opencode-llama.yaml in the project where you want OpenCode to run, or create ~/.config/opencode-llama.yaml for a user-wide default:

model: /absolute/path/to/model.gguf
ctx_size: 8192

# Optional
port: 8080
llama_server: /optional/path/to/llama-server

Config lookup order:

  1. The path passed with --config
  2. opencode-llama.yaml or opencode-llama.yml in the project directory
  3. ~/.config/opencode-llama.yaml
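The lookup order above can be sketched as a small Python function (a sketch of the documented behavior, not the launcher's actual code; the name find_config and its parameters are hypothetical):

```python
from pathlib import Path

def find_config(explicit=None, project_dir=".", home=None):
    """Return the first config file found, per the documented lookup order."""
    if explicit:
        return Path(explicit)                      # 1. --config wins outright
    for name in ("opencode-llama.yaml", "opencode-llama.yml"):
        candidate = Path(project_dir) / name       # 2. project-local file
        if candidate.is_file():
            return candidate
    default = Path(home or Path.home()) / ".config" / "opencode-llama.yaml"
    return default if default.is_file() else None  # 3. user-wide default
```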

Usage

Run with an explicit config file:

opencode-llama --config opencode-llama.yaml

Or pass the model directly:

opencode-llama --model /absolute/path/to/model.gguf

Useful options:

opencode-llama --help
opencode-llama --dry-run
opencode-llama --config opencode-llama.yaml
opencode-llama --port 9001
opencode-llama --ctx-size 8192
opencode-llama --llama-server /absolute/path/to/llama-server

If llama-server fails before becoming healthy, the launcher includes a bounded tail of the server's startup output in the error message. Successful runs stay quiet.
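The pattern can be sketched like this (illustrative Python only, assuming llama-server's /health endpoint; these function names are not from the package):

```python
import collections
import time
import urllib.error
import urllib.request

def wait_until_healthy(url, timeout=30.0, interval=0.5):
    """Poll a health URL until it answers, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2):
                return True
        except (urllib.error.URLError, OSError):
            time.sleep(interval)
    return False

def bounded_tail(lines, limit=20):
    """Keep only the last `limit` lines of startup output for an error message."""
    return list(collections.deque(lines, maxlen=limit))
```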

Development

Install dependencies from this repository:

uv sync --dev

Run the test suite:

uv run pytest

Before publishing, check the working tree for stray local files:

git status --short --ignored

Do not commit local launcher configs, virtual environments, caches, build artifacts, or model paths.
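A .gitignore along these lines covers those categories (illustrative entries, not shipped with the repository):

```
# Local launcher configs and model files
opencode-llama.yaml
opencode-llama.yml
*.gguf

# Environments, caches, build artifacts
.venv/
__pycache__/
dist/
```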

License

MIT

Download files

Download the file for your platform.

Source Distribution

opencode_llama_cpp_launcher-0.1.5.tar.gz (328.5 kB)

Uploaded: Source

Built Distribution


opencode_llama_cpp_launcher-0.1.5-py3-none-any.whl (15.9 kB)

Uploaded: Python 3

File details

Details for the file opencode_llama_cpp_launcher-0.1.5.tar.gz.

File hashes

  • SHA256: c9a78e6a1a55ad5705bb8e3218bdd9b5911ba9237e18ec26c9f76cac672a61a5
  • MD5: 7cb661add5ee2e74e4c6a71a34341236
  • BLAKE2b-256: 43d7dc1e67565d214e7a6aa59b6b62f1a8027f168a1f47ad241dd01b44f3a20d

Provenance

The following attestation bundles were made for opencode_llama_cpp_launcher-0.1.5.tar.gz:

Publisher: release.yml on ribomo/opencode-llama-cpp-launcher

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file opencode_llama_cpp_launcher-0.1.5-py3-none-any.whl.

File hashes

  • SHA256: 9d7122d4dd3bd08cca56628383878f2dfdfba11ce93eb06040102c1e804c05f3
  • MD5: 3d2c39a8016766a278a6d6891f46206a
  • BLAKE2b-256: 4b5e109f7e16142489214a9d31e66330bcb690bd8e1983adf3c66830c6b7a478

Provenance

The following attestation bundles were made for opencode_llama_cpp_launcher-0.1.5-py3-none-any.whl:

Publisher: release.yml on ribomo/opencode-llama-cpp-launcher

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
