Lightweight tool to manage contexts and update code with LLMs

About

PatchLLM is a command-line tool for flexibly building LLM context from your codebase using glob patterns, URLs, and keyword searches. It then applies file edits automatically, directly from the LLM's response.

Usage

PatchLLM is designed to be used directly from your terminal.

1. Initialize a Configuration

The easiest way to get started is to run the interactive initializer. This will create a configs.py file for you.

patchllm --init

This will guide you through creating your first context configuration, including setting a base path and file patterns. You can add multiple configurations to this file.

A generated configs.py might look like this:

# configs.py
configs = {
    "default": {
        "path": ".",
        "include_patterns": ["**/*.py"],
        "exclude_patterns": ["**/tests/*", "venv/*"],
        "urls": ["https://docs.python.org/3/library/argparse.html"]
    },
    "docs": {
        "path": "./docs",
        "include_patterns": ["**/*.md"],
    }
}
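The include_patterns and exclude_patterns entries are standard glob patterns. As an illustration only (this is not PatchLLM's actual matching code, and resolve_files is a hypothetical name), resolving them with pathlib might look like:

```python
# Illustrative sketch: resolve a config's glob patterns to a file list.
# PatchLLM's real matching logic may differ.
from pathlib import Path

def resolve_files(path, include_patterns, exclude_patterns=()):
    root = Path(path)
    included = {p for pat in include_patterns for p in root.glob(pat)}
    excluded = {p for pat in exclude_patterns for p in root.glob(pat)}
    return sorted(str(p) for p in included - excluded if p.is_file())
```

With the "default" configuration above, files under tests/ directories and venv/ would be filtered out of the context before it is sent to the model.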

2. Run a Task

Use the patchllm command with a configuration name and a task instruction.

# Apply a change using the 'default' configuration
patchllm --config default --task "Add type hints to the main function in main.py"

The tool will then:

  1. Build a context from the files and URLs matching your configuration.
  2. Send the context and your task to the configured LLM.
  3. Parse the response and automatically write the changes to the relevant files.
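Step 3 can be sketched roughly as follows. The exact response format PatchLLM parses is not documented here; this hypothetical sketch assumes each patch arrives as a fenced code block preceded by a "### <path>" line:

```python
# Rough sketch of applying file patches from an LLM reply — NOT PatchLLM's
# actual parser; the real tool's response format may differ.
import re
from pathlib import Path

FENCE = "`" * 3  # a markdown code fence, built up so this example stays nestable

PATCH_RE = re.compile(
    rf"^### (?P<path>\S+)\n{FENCE}\w*\n(?P<body>.*?)\n{FENCE}",
    re.MULTILINE | re.DOTALL,
)

def apply_patches(response, root="."):
    """Write every '### path' + fenced-block pair in `response` to disk."""
    written = []
    for m in PATCH_RE.finditer(response):
        target = Path(root) / m.group("path")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(m.group("body") + "\n")
        written.append(str(target))
    return written
```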

All Commands & Options

Configuration Management

  • --init: Create a new configuration interactively.
  • --list-configs: List all available configurations from your configs.py.
  • --show-config <name>: Display the settings for a specific configuration.
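Conceptually, --list-configs just loads the configs dictionary from your configs.py and prints its keys. A rough equivalent (list_configs is a hypothetical helper name, not PatchLLM's code):

```python
# Illustrative only: load a configs.py by path and list its configuration names.
import importlib.util

def list_configs(path="configs.py"):
    spec = importlib.util.spec_from_file_location("configs", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs configs.py, defining `configs`
    return sorted(module.configs)
```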

Core Task Execution

  • --config <name>: The name of the configuration to use for building context.
  • --task "<instruction>": The task instruction for the LLM.
  • --model <model_name>: Specify a different model (e.g., claude-3-opus). Defaults to gemini/gemini-1.5-flash.

Context Handling

  • --context-out [filename]: Save the generated context to a file (defaults to context.md) instead of sending it to the LLM.
  • --context-in <filename>: Use a previously saved context file directly, skipping context generation.
  • --update False: Skip sending the prompt to the LLM. Useful when you only want to generate and save the context with --context-out.

Alternative Inputs

  • --from-file <filename>: Apply file patches directly from a local file instead of from an LLM response.
  • --from-clipboard: Apply file patches directly from your clipboard content.
  • --voice True: Use voice recognition to provide the task instruction. Requires extra dependencies.

Setup

PatchLLM uses LiteLLM under the hood. Please refer to their documentation for setting up API keys (e.g., OPENAI_API_KEY, GEMINI_API_KEY) in a .env file and for a full list of available models.
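For example, a minimal .env for the default Gemini model plus an OpenAI fallback might contain (placeholder values; see the LiteLLM docs for each provider's exact variable names):

```shell
# .env — provider API keys read by LiteLLM (placeholder values)
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
```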

To use the voice feature (--voice True), you will need to install extra dependencies:

pip install "speechrecognition>=3.10" "pyttsx3>=2.90"
# Note: speechrecognition may require PyAudio, which might have system-level dependencies.

License

This project is licensed under the MIT License. See the LICENSE file for details.
