
One-command launcher for running OpenCode with a local llama.cpp model.

Project description

OpenCode llama.cpp Launcher

A one-command solution for launching OpenCode with any local LLM that llama-server can serve, including models like Qwen, DeepSeek, and Gemma. This launcher starts llama-server, waits for it to become ready, wires the OpenAI-compatible provider config into OpenCode, and cleans up when the local agentic coding session ends.
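The wait-for-ready step can be sketched as a simple poll loop. Upstream llama.cpp's llama-server exposes a GET /health endpoint; the function name, timeout, and interval below are illustrative assumptions, not the launcher's actual API:

```python
import time
import urllib.error
import urllib.request


def wait_until_ready(base_url: str, timeout: float = 60.0, interval: float = 0.5) -> bool:
    """Poll the server's /health endpoint until it answers 200 or the timeout expires.

    The /health route matches upstream llama.cpp; the launcher's real probe
    and timeouts may differ.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/health", timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not accepting connections yet; keep polling
        time.sleep(interval)
    return False
```

A launcher would call this right after spawning the server process and abort the session if it returns False.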

Requirements

  • Python 3.12+
  • OpenCode
  • llama.cpp's llama-server
  • A local model supported by llama-server, for example Qwen, DeepSeek, or Gemma

The launcher finds llama-server on PATH, or you can set llama_server in your config.

Install

From this repository:

uv sync --dev

Check that the required external binaries are available:

uv run opencode-llama doctor

Configure

Create a project-local config in the project where you want OpenCode to run:

cp opencode-llama.example.yaml opencode-llama.yaml

Then edit opencode-llama.yaml:

model: /absolute/path/to/model.gguf
llama_server: /optional/path/to/llama-server
port: 8080
ctx_size: 8192

Config lookup order:

  1. The path passed with --config
  2. opencode-llama.yaml or opencode-llama.yml in the project directory
  3. ~/.config/opencode-llama.yaml

Usage

Run with an explicit config file:

uv run opencode-llama --config opencode-llama.yaml

Or pass the model directly:

uv run opencode-llama --model /absolute/path/to/model.gguf

Useful options:

uv run opencode-llama --help
uv run opencode-llama --dry-run
uv run opencode-llama --config opencode-llama.yaml
uv run opencode-llama --port 9001
uv run opencode-llama --ctx-size 8192
uv run opencode-llama --llama-server /absolute/path/to/llama-server

If llama-server fails before becoming healthy, the launcher includes a bounded tail of the server's startup output in the error message. Successful runs stay quiet.
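A bounded tail like this is typically kept with a fixed-size deque so a chatty server cannot bloat the error message. The sketch below assumes a 40-line cap and merges stderr into stdout; both choices are illustrative, not the launcher's actual capture logic:

```python
import subprocess
from collections import deque


def run_and_capture_tail(cmd: list[str], max_lines: int = 40) -> tuple[int, str]:
    """Run a command, keeping only the last max_lines of its combined output.

    deque(maxlen=...) silently discards the oldest line once the cap is hit,
    so memory use stays constant regardless of how much the process prints.
    """
    tail: deque[str] = deque(maxlen=max_lines)
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    assert proc.stdout is not None
    for line in proc.stdout:
        tail.append(line.rstrip("\n"))
    return proc.wait(), "\n".join(tail)
```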

Development

Run the test suite:

uv run pytest

Before publishing, check for local files:

git status --short --ignored

Do not commit local launcher configs, virtual environments, caches, build artifacts, or model paths.

License

MIT

Download files

Source Distribution: opencode_llama_cpp_launcher-0.1.0.tar.gz (21.7 kB)
Built Distribution: opencode_llama_cpp_launcher-0.1.0-py3-none-any.whl (15.8 kB, Python 3)

File details

Hashes for opencode_llama_cpp_launcher-0.1.0.tar.gz:

SHA256: 86e087234cb587ad809606a58c852412a1b66119c0028161214231a47a016e6a
MD5: d1689e733c2b6afc850dccc7d9e0d6e9
BLAKE2b-256: 8197ddf5b6804c239d0496912195c6991c789ef988270f550c9c58fd40f913b9

Provenance: attestation published by release.yml on ribomo/opencode-llama-cpp-launcher

Hashes for opencode_llama_cpp_launcher-0.1.0-py3-none-any.whl:

SHA256: cd4a35e2c2bd67afb08c35d6b6a520735f49af4025eafb1cb25399cf3e0208ba
MD5: 5171dffb98091828ef00914e69b8f2b4
BLAKE2b-256: d995e1f33e038328601c20cbeb4dbe5183d7f878a093c309d10b3a7ae96736e7

Provenance: attestation published by release.yml on ribomo/opencode-llama-cpp-launcher
