One-command launcher for running OpenCode with a local llama.cpp model.

Project description

OpenCode llama.cpp Launcher

Launch OpenCode with a local model served by llama.cpp. The launcher starts llama-server, wires OpenCode to it, and cleans up when your session ends.
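Conceptually, the lifecycle is: start the server process, poll until it answers, hand off to the session, and tear the server down afterwards. The sketch below illustrates that pattern in Python; it is a simplified illustration of the idea, not the launcher's actual code, and the health-check details are assumptions:

```python
import subprocess
import time
import urllib.request


def run_with_server(server_cmd, health_url, work, timeout=30.0):
    """Start a server process, wait until it responds, run work, then clean up."""
    proc = subprocess.Popen(
        server_cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT
    )
    try:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if proc.poll() is not None:
                raise RuntimeError("server exited before becoming healthy")
            try:
                urllib.request.urlopen(health_url, timeout=1).close()
                break  # server is up
            except OSError:
                time.sleep(0.2)  # not ready yet; poll again
        else:
            raise TimeoutError("server never became healthy")
        return work()
    finally:
        proc.terminate()  # clean up when the session ends
        proc.wait()
```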

OpenCode llama.cpp Launcher demo

Requirements

  • OpenCode
  • llama.cpp's llama-server
  • A local GGUF model, such as Qwen, DeepSeek, or Gemma

The launcher finds llama-server on PATH, or you can set llama_server in your config.
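That resolution order can be pictured as follows; this is assumed behavior for illustration, not the launcher's source:

```python
import shutil


def resolve_llama_server(config):
    """Prefer an explicit llama_server path from the config, else search PATH."""
    explicit = config.get("llama_server")
    if explicit:
        return explicit
    found = shutil.which("llama-server")
    if found is None:
        raise FileNotFoundError(
            "llama-server not found on PATH; set llama_server in your config"
        )
    return found
```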

Install OpenCode using its GitHub installation instructions. Install llama.cpp using its installation guide.

Install

For most users, install with pipx:

pipx install opencode-llama-cpp-launcher

Or install with pip:

python -m pip install opencode-llama-cpp-launcher

Check that the required external binaries are available:

opencode-llama doctor

Configure

Create opencode-llama.yaml in the project where you want OpenCode to run, or create ~/.config/opencode-llama.yaml for a user-wide default:

model: /absolute/path/to/model.gguf
ctx_size: 8192

# Optional
port: 8080
llama_server: /optional/path/to/llama-server

Config lookup order:

  1. The path passed with --config
  2. opencode-llama.yaml or opencode-llama.yml in the project directory
  3. ~/.config/opencode-llama.yaml
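The precedence above can be sketched as a first-match search; this is an illustration of the lookup rules, not the launcher's actual implementation:

```python
from pathlib import Path


def find_config(cli_path=None, project_dir="."):
    """Return the first config path that exists, following the lookup order."""
    candidates = []
    if cli_path:
        candidates.append(Path(cli_path))  # 1. the path passed with --config
    project = Path(project_dir)
    candidates.append(project / "opencode-llama.yaml")  # 2. project directory
    candidates.append(project / "opencode-llama.yml")
    candidates.append(Path.home() / ".config" / "opencode-llama.yaml")  # 3. user-wide
    for path in candidates:
        if path.is_file():
            return path
    return None
```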

Usage

Run with an explicit config file:

opencode-llama --config opencode-llama.yaml

Or pass the model directly:

opencode-llama --model /absolute/path/to/model.gguf

Useful options:

opencode-llama --help
opencode-llama --dry-run
opencode-llama --config opencode-llama.yaml
opencode-llama --port 9001
opencode-llama --ctx-size 8192
opencode-llama --llama-server /absolute/path/to/llama-server

If llama-server fails before becoming healthy, the launcher includes a bounded tail of the server's startup output in the error message. Successful runs stay quiet.
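A bounded tail like this is commonly kept with a fixed-size buffer that discards old lines as new ones arrive. A minimal sketch of the mechanism (an assumption about how it might work, not the launcher's code):

```python
from collections import deque


def bounded_tail(lines, max_lines=20):
    """Keep only the last max_lines of server output for the error message."""
    tail = deque(maxlen=max_lines)  # old lines fall off the front automatically
    for line in lines:
        tail.append(line)
    return list(tail)
```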

Development

From a checkout of this repository, install the development dependencies:

uv sync --dev

Run the test suite:

uv run pytest

Before publishing, check for local files:

git status --short --ignored

Do not commit local launcher configs, virtual environments, caches, build artifacts, or model paths.

License

MIT

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

opencode_llama_cpp_launcher-0.1.4.tar.gz (328.5 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

opencode_llama_cpp_launcher-0.1.4-py3-none-any.whl (15.9 kB)

Uploaded Python 3

File details

Details for the file opencode_llama_cpp_launcher-0.1.4.tar.gz.

File metadata

File hashes

Hashes for opencode_llama_cpp_launcher-0.1.4.tar.gz
Algorithm Hash digest
SHA256 309341b4d4a51233616593d4439d7eedb2fbca852d78ce4a914e9d3cb6e46927
MD5 79c7de968c5a40ff782c37c28e633ceb
BLAKE2b-256 98c8e90e2021075bddd05d2c76f3b668dd3994bafd6411ea6203a6816ca08579

Provenance

The following attestation bundles were made for opencode_llama_cpp_launcher-0.1.4.tar.gz:

Publisher: release.yml on ribomo/opencode-llama-cpp-launcher

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file opencode_llama_cpp_launcher-0.1.4-py3-none-any.whl.

File metadata

File hashes

Hashes for opencode_llama_cpp_launcher-0.1.4-py3-none-any.whl
Algorithm Hash digest
SHA256 fbfae2e905032be14c47f2685d7711eabc325fa021d92b70ffe8a4450842400d
MD5 445267ff08daf91c57fbeb5ee6c6c266
BLAKE2b-256 3fae62a7f9d721ab0f01500088fb8716291c4fa6bc7483cb6464e9635ccb1103

Provenance

The following attestation bundles were made for opencode_llama_cpp_launcher-0.1.4-py3-none-any.whl:

Publisher: release.yml on ribomo/opencode-llama-cpp-launcher

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
