One-command launcher for running OpenCode with a local llama.cpp model.
Project description
OpenCode llama.cpp Launcher
A one-command solution for launching OpenCode with any
local LLM that llama-server can serve, including models like Qwen, DeepSeek,
and Gemma. This launcher starts llama-server, waits for it to become ready,
wires the OpenAI-compatible provider config into OpenCode, and cleans up when
the local agentic coding session ends.
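The wait-for-ready step can be sketched as a small polling loop. The function name `wait_until_ready` and its parameters are illustrative, not the launcher's actual API; the `probe` callable stands in for, say, an HTTP GET against llama-server's `/health` endpoint:

```python
import time


def wait_until_ready(probe, timeout=30.0, interval=0.5):
    """Poll `probe` until it returns True or the deadline passes.

    `probe` is any zero-argument callable returning a bool, e.g. an
    HTTP check against llama-server's /health endpoint. Returns True
    once the probe succeeds, False if the timeout elapses first.
    (Hypothetical helper; the real launcher may differ.)
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False
```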
Requirements
- Python 3.12+
- OpenCode
- llama.cpp's llama-server
- A local model supported by llama-server, for example Qwen, DeepSeek, or Gemma

The launcher finds llama-server on PATH, or you can set llama_server in
your config.
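The binary resolution described above amounts to "configured path wins, else search PATH". A minimal sketch using the standard library's `shutil.which` (the function name and parameters here are illustrative, not the launcher's real internals):

```python
import shutil


def resolve_llama_server(config_value=None, name="llama-server"):
    """Return the llama-server binary to use.

    An explicit path from the config takes precedence; otherwise fall
    back to a PATH lookup. Returns None when neither yields a binary.
    (Hypothetical helper; the real launcher may differ.)
    """
    if config_value:
        return config_value
    return shutil.which(name)
```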
Install
From this repository:
uv sync --dev
Check that the required external binaries are available:
uv run opencode-llama doctor
Configure
Create a project-local config in the project where you want OpenCode to run:
cp opencode-llama.example.yaml opencode-llama.yaml
Then edit opencode-llama.yaml:
model: /absolute/path/to/model.gguf
llama_server: /optional/path/to/llama-server
port: 8080
ctx_size: 8192
Config lookup order:
1. The path passed with --config
2. opencode-llama.yaml or opencode-llama.yml in the project directory
3. ~/.config/opencode-llama.yaml
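The lookup order can be sketched as a first-match walk over the candidate paths. `find_config` and its parameters are illustrative names, not the launcher's actual internals:

```python
from pathlib import Path


def find_config(cli_path=None, project_dir=".", home=None):
    """Return the first config file found in the documented order:
    explicit --config path, then opencode-llama.yaml / .yml in the
    project directory, then ~/.config/opencode-llama.yaml.
    Returns None if nothing exists. (Sketch; the real launcher may
    differ.)
    """
    home = Path(home) if home else Path.home()
    project = Path(project_dir)
    candidates = []
    if cli_path:
        candidates.append(Path(cli_path))
    candidates += [
        project / "opencode-llama.yaml",
        project / "opencode-llama.yml",
        home / ".config" / "opencode-llama.yaml",
    ]
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    return None
```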
Usage
Run with an explicit config file:
uv run opencode-llama --config opencode-llama.yaml
Or pass the model directly:
uv run opencode-llama --model /absolute/path/to/model.gguf
Useful options:
uv run opencode-llama --help
uv run opencode-llama --dry-run
uv run opencode-llama --config opencode-llama.yaml
uv run opencode-llama --port 9001
uv run opencode-llama --ctx-size 8192
uv run opencode-llama --llama-server /absolute/path/to/llama-server
If llama-server fails before becoming healthy, the launcher includes a bounded
tail of the server's startup output in the error message. Successful runs stay
quiet.
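One simple way to keep such a bounded tail is a fixed-size `collections.deque`, which silently discards the oldest lines as new ones arrive. `BoundedTail` is a hypothetical helper, not the launcher's actual class:

```python
from collections import deque


class BoundedTail:
    """Retain only the last `maxlen` lines of a process's output so a
    failure report stays small. (Illustrative sketch; the real
    launcher may differ.)"""

    def __init__(self, maxlen=40):
        # deque(maxlen=...) drops the oldest entry once full
        self._lines = deque(maxlen=maxlen)

    def feed(self, line):
        """Record one line of server output, without its newline."""
        self._lines.append(line.rstrip("\n"))

    def render(self):
        """Return the retained tail as a single string for an error message."""
        return "\n".join(self._lines)
```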
Development
Run the test suite:
uv run pytest
Before publishing, check for local files:
git status --short --ignored
Do not commit local launcher configs, virtual environments, caches, build artifacts, or model paths.
License
MIT
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file opencode_llama_cpp_launcher-0.1.0.tar.gz.
File metadata
- Download URL: opencode_llama_cpp_launcher-0.1.0.tar.gz
- Upload date:
- Size: 21.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 86e087234cb587ad809606a58c852412a1b66119c0028161214231a47a016e6a |
| MD5 | d1689e733c2b6afc850dccc7d9e0d6e9 |
| BLAKE2b-256 | 8197ddf5b6804c239d0496912195c6991c789ef988270f550c9c58fd40f913b9 |
Provenance
The following attestation bundles were made for opencode_llama_cpp_launcher-0.1.0.tar.gz:
Publisher: release.yml on ribomo/opencode-llama-cpp-launcher

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: opencode_llama_cpp_launcher-0.1.0.tar.gz
- Subject digest: 86e087234cb587ad809606a58c852412a1b66119c0028161214231a47a016e6a
- Sigstore transparency entry: 1482289249
- Sigstore integration time:
- Permalink: ribomo/opencode-llama-cpp-launcher@0c7b73a5cdd1ff6d87d7168bbf9890b2a45f2e00
- Branch / Tag: refs/tags/v0.1.1
- Owner: https://github.com/ribomo
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@0c7b73a5cdd1ff6d87d7168bbf9890b2a45f2e00
- Trigger Event: push
File details
Details for the file opencode_llama_cpp_launcher-0.1.0-py3-none-any.whl.
File metadata
- Download URL: opencode_llama_cpp_launcher-0.1.0-py3-none-any.whl
- Upload date:
- Size: 15.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | cd4a35e2c2bd67afb08c35d6b6a520735f49af4025eafb1cb25399cf3e0208ba |
| MD5 | 5171dffb98091828ef00914e69b8f2b4 |
| BLAKE2b-256 | d995e1f33e038328601c20cbeb4dbe5183d7f878a093c309d10b3a7ae96736e7 |
Provenance
The following attestation bundles were made for opencode_llama_cpp_launcher-0.1.0-py3-none-any.whl:
Publisher: release.yml on ribomo/opencode-llama-cpp-launcher

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: opencode_llama_cpp_launcher-0.1.0-py3-none-any.whl
- Subject digest: cd4a35e2c2bd67afb08c35d6b6a520735f49af4025eafb1cb25399cf3e0208ba
- Sigstore transparency entry: 1482289371
- Sigstore integration time:
- Permalink: ribomo/opencode-llama-cpp-launcher@0c7b73a5cdd1ff6d87d7168bbf9890b2a45f2e00
- Branch / Tag: refs/tags/v0.1.1
- Owner: https://github.com/ribomo
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@0c7b73a5cdd1ff6d87d7168bbf9890b2a45f2e00
- Trigger Event: push