
Installable CLI for running llama-server in the background and using Aider against it.

Project description

Patchforge

This package ships an installable CLI that wraps:

  • llama-server for a local OpenAI-compatible endpoint
  • aider-chat for file editing against that local model

The binary name is patchforge.

Install

Install the package into a tool environment:

uv tool install .

Or into a project virtualenv:

python3 -m venv .venv
./.venv/bin/pip install -e .

aider-chat is installed as a package dependency. Then bootstrap the native side:

patchforge install

patchforge install chooses the best local installation path it can find:

  • Reuse an existing llama-server if one is already on PATH
  • Otherwise, on macOS, prefer Homebrew and install llama.cpp
  • After that, prefetch the default GGUF models into the llama.cpp cache
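That selection order can be sketched roughly as follows. This is an illustrative sketch only; the function name and return values are assumptions, not patchforge's internals:

```python
import platform
import shutil

def choose_installer():
    """Mirror the selection order above: reuse an existing binary,
    fall back to Homebrew on macOS, otherwise give up."""
    if shutil.which("llama-server"):
        return "reuse-existing"  # llama-server already on PATH
    if platform.system() == "Darwin" and shutil.which("brew"):
        return "brew"            # install llama.cpp via Homebrew
    return "unsupported"         # no automatic install path found

print(choose_installer())
```

Model prefetch happens after either path succeeds, so the server can start without a download on first run.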

Usage

Start the local model server in the background:

patchforge start

Check whether it is up:

patchforge status

Run Aider against that endpoint:

patchforge aider --yes-always --message "Create hello.txt with a short greeting."

You can also let the CLI ensure the server is running first:

patchforge aider --ensure-server --yes-always --message "Create hello.txt with a short greeting."

Stop the managed background server:

patchforge stop

Inspect the local /v1/models endpoint:

patchforge models
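Under the hood this hits the standard OpenAI-compatible /v1/models route. A minimal sketch of parsing such a response; the sample payload below is illustrative, not captured from a real server:

```python
import json

# Illustrative response in the OpenAI "list models" shape;
# llama-server serves the same structure at /v1/models.
sample = '{"object": "list", "data": [{"id": "gemma-local", "object": "model"}]}'

# Collect the model IDs (aliases) the endpoint advertises.
model_ids = [m["id"] for m in json.loads(sample)["data"]]
print(model_ids)  # → ['gemma-local']
```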

Defaults

  • Host: 127.0.0.1
  • Port: 8091
  • Model alias: gemma-local
  • Default models: cached gemma-2-9b-it first, then cached gemma-4-E4B-it
  • Runtime state: .patchforge/ under the current project

Overrides

These environment variables are supported:

LLAMA_HOST=127.0.0.1
LLAMA_PORT=8095
LLAMA_MODEL_ALIAS=my-local-model
LLAMA_MODEL_PATH=/absolute/path/to/model.gguf
LLAMA_CTX_SIZE=8192
LLAMA_PARALLEL=1
OPENAI_API_KEY=sk-local

You can also pass the same values as CLI flags such as --port, --model-alias, and --model-path.
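One plausible way such overrides resolve — environment variable first, then the documented default — is sketched below. The helper is hypothetical, not patchforge's actual code:

```python
import os

# Documented defaults from the section above.
DEFAULTS = {
    "LLAMA_HOST": "127.0.0.1",
    "LLAMA_PORT": "8091",
    "LLAMA_MODEL_ALIAS": "gemma-local",
    "LLAMA_CTX_SIZE": "8192",
    "LLAMA_PARALLEL": "1",
}

def setting(name):
    """Environment variable wins; otherwise fall back to the default."""
    return os.environ.get(name, DEFAULTS[name])

# The base URL Aider would be pointed at.
base_url = f"http://{setting('LLAMA_HOST')}:{setting('LLAMA_PORT')}/v1"
print(base_url)
```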

If you want to force the Homebrew path explicitly:

patchforge install --installer brew --force-install

Download files

Download the file for your platform.

Source Distribution

patchforge-0.1.0.tar.gz (11.7 kB)


Built Distribution


patchforge-0.1.0-py3-none-any.whl (9.6 kB)


File details

Details for the file patchforge-0.1.0.tar.gz.

File metadata

  • Download URL: patchforge-0.1.0.tar.gz
  • Size: 11.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for patchforge-0.1.0.tar.gz

  • SHA256: 748efc281c11697ebab7b524e089419c21af9d6173f1329f51eb2c69667e9094
  • MD5: 1dbbb2691ece28ab18a60e3a5619d8a8
  • BLAKE2b-256: fd173512af0cc198e61498d6d3fb92206e7aadee0a0046eecf11219bd21b9a4c


File details

Details for the file patchforge-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: patchforge-0.1.0-py3-none-any.whl
  • Size: 9.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

Hashes for patchforge-0.1.0-py3-none-any.whl

  • SHA256: aab9f2f9d324a10756e90dbd913c16a66713e5fe6d611aed2e46241878c813c2
  • MD5: 93224059fcae215d79030d6eee8de66b
  • BLAKE2b-256: 0f5cf5394a055c53cc8ec72152ff65ca525862a9c28bd9ade9bdd78fa29a1af9

