PatchPal — A Claude Code–Style Agent in Python
An agentic coding and automation assistant, supporting both local and cloud LLMs.
PatchPal is an AI coding agent that helps you build software, debug issues, and automate tasks. Like Claude Code, it supports agent skills, tool use, and executable Python generation, enabling interactive workflows for tasks such as data analysis, visualization, web scraping, API interactions, and research with synthesized findings.
A key goal of this project is to approximate Claude Code's core functionality while remaining lean, accessible, and configurable, enabling learning, experimentation, and broad applicability across use cases.
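To make the "tool use" idea above concrete, here is a minimal sketch of the agent-loop pattern such assistants follow: each turn, the model either returns plain text (the final answer) or requests a tool call, whose result is fed back into the conversation. All names here (`run_agent`, `TOOLS`, the fake model) are illustrative, not PatchPal's actual API.

```python
def read_file(path: str) -> str:
    """Example tool: return file contents (stubbed for this sketch)."""
    return f"<contents of {path}>"

TOOLS = {"read_file": read_file}

def run_agent(model, prompt: str, max_turns: int = 5) -> str:
    """Drive the model/tool loop until the model returns plain text."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = model(messages)           # one LLM turn
        if reply.get("tool") is None:     # plain text -> final answer
            return reply["content"]
        tool = TOOLS[reply["tool"]]       # dispatch the requested tool
        result = tool(**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "max turns exceeded"

# A scripted fake model: first asks for a tool, then answers.
def fake_model(messages):
    if messages[-1]["role"] == "user":
        return {"tool": "read_file", "args": {"path": "agent.py"}}
    return {"tool": None, "content": "summary: " + messages[-1]["content"]}
```

In a real run the `model` callable would wrap an LLM provider call (PatchPal supports both local and cloud backends); the loop structure is the same.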
$ ls ./patchpal
__init__.py agent.py cli.py context.py permissions.py skills.py system_prompt.md tool_schema.py tools
Quick Start
$ pip install patchpal # install
$ patchpal # start
Setup
- Install:
  pip install patchpal
- Get an API key or a local LLM engine:
  - [Cloud] For Anthropic models (default): sign up at https://console.anthropic.com/
  - [Cloud] For OpenAI models: get a key from https://platform.openai.com/
  - [Local] For vLLM: install from https://docs.vllm.ai/ (free, no API charges; recommended for local use)
  - [Local] For Ollama: install from https://ollama.com/ (⚠️ requires OLLAMA_CONTEXT_LENGTH=32768; see the Ollama section below)
  - For other providers: check the LiteLLM documentation
- Set your API key as an environment variable:
# For Anthropic (default)
export ANTHROPIC_API_KEY=your_api_key_here
# For OpenAI
export OPENAI_API_KEY=your_api_key_here
# For vLLM - API key required only if configured
export HOSTED_VLLM_API_BASE=http://localhost:8000 # depends on your vLLM setup
export HOSTED_VLLM_API_KEY=token-abc123 # optional depending on your vLLM setup
# For other providers, check LiteLLM docs
- Run PatchPal:
# Use default model (anthropic/claude-sonnet-4-5)
patchpal
# Use a specific model via command-line argument
patchpal --model openai/gpt-5.2-codex # or openai/gpt-5-mini, anthropic/claude-opus-4-5, etc.
# Use vLLM (local)
# Note: vLLM server must be started with --tool-call-parser and --enable-auto-tool-choice
# See "Using Local Models (vLLM & Ollama)" section below for details
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
# Use Ollama (local - requires OLLAMA_CONTEXT_LENGTH=32768)
export OLLAMA_CONTEXT_LENGTH=32768
patchpal --model ollama_chat/qwen3:32b
# Or set the model via environment variable
export PATCHPAL_MODEL=openai/gpt-5.2
patchpal
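The examples above suggest a precedence order for choosing the model: the --model flag, then the PATCHPAL_MODEL environment variable, then the built-in default. The exact precedence is an assumption here (it follows the usual CLI convention); the sketch below shows the idea, with `resolve_model` being an illustrative helper, not PatchPal's actual function.

```python
import os

DEFAULT_MODEL = "anthropic/claude-sonnet-4-5"  # default named in the README

def resolve_model(cli_model=None, env=None):
    """Pick the model: --model flag first, then PATCHPAL_MODEL, then the default."""
    env = os.environ if env is None else env
    if cli_model:                 # explicit flag wins
        return cli_model
    return env.get("PATCHPAL_MODEL", DEFAULT_MODEL)
```

Model strings use LiteLLM's provider/model format, so the same resolution works for cloud (openai/..., anthropic/...) and local (hosted_vllm/..., ollama_chat/...) backends.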
Features
- Terminal Interface similar to Claude Code
- Python API for flexibility and extensibility
- Built-In and Custom Tools
- Skills System
- Autopilot Mode using Ralph Wiggum loops
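The "Ralph Wiggum loop" behind Autopilot Mode is the pattern of re-sending the same task prompt to the agent on every iteration until the agent signals completion. The sketch below is a hedged illustration of that loop; `autopilot`, `agent_step`, and the DONE marker are hypothetical names, not PatchPal's actual interface.

```python
def autopilot(agent_step, prompt: str, max_iters: int = 10) -> list[str]:
    """Repeat the same prompt until the agent reports completion."""
    transcript = []
    for _ in range(max_iters):
        reply = agent_step(prompt)   # one full agent run on the unchanged prompt
        transcript.append(reply)
        if "DONE" in reply:          # agent signals the task is complete
            break
    return transcript

# Scripted agent standing in for a real LLM run: finishes on the third pass.
replies = iter(["fixed test_a", "fixed test_b", "all tests pass DONE"])
result = autopilot(lambda p: next(replies), "make the test suite pass")
```

The appeal of the pattern is its simplicity: state lives in the working directory rather than in the prompt, so each iteration sees the effects of the previous one.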
Documentation
Full documentation is available here.
File details
Details for the file patchpal-0.12.1.tar.gz.
File metadata
- Download URL: patchpal-0.12.1.tar.gz
- Upload date:
- Size: 134.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | fa7395e55ccf1cfc621a18fcfd63909fd960bfaa4f88f8bf36b1c862ed66bf9d |
| MD5 | c5a2a7921b17dd2829993d5a2fd8e9a8 |
| BLAKE2b-256 | c9204eab6ea18854895e9f7506cc1da8b72bc96b107355b938048ca9aa98e414 |
Provenance

The following attestation bundles were made for patchpal-0.12.1.tar.gz:

Publisher: release.yml on wiseprobe/patchpal

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: patchpal-0.12.1.tar.gz
- Subject digest: fa7395e55ccf1cfc621a18fcfd63909fd960bfaa4f88f8bf36b1c862ed66bf9d
- Sigstore transparency entry: 938234762
- Sigstore integration time:
- Permalink: wiseprobe/patchpal@d9156f6e583aa5a289b8e660c51836094d38d471
- Branch / Tag: refs/tags/0.12.1
- Owner: https://github.com/wiseprobe
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@d9156f6e583aa5a289b8e660c51836094d38d471
- Trigger Event: release
File details
Details for the file patchpal-0.12.1-py3-none-any.whl.
File metadata
- Download URL: patchpal-0.12.1-py3-none-any.whl
- Upload date:
- Size: 102.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 3661c27dba0c62948b3b0459fafb17fa574313aad481d2b5a8a6edef886ef30e |
| MD5 | f95f8b713c468395374768a02d692461 |
| BLAKE2b-256 | debdf8abbd8a939c92194c8c85a3bdcd78bbdcd082adbfbf2b5a4c2d12962080 |
Provenance

The following attestation bundles were made for patchpal-0.12.1-py3-none-any.whl:

Publisher: release.yml on wiseprobe/patchpal

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: patchpal-0.12.1-py3-none-any.whl
- Subject digest: 3661c27dba0c62948b3b0459fafb17fa574313aad481d2b5a8a6edef886ef30e
- Sigstore transparency entry: 938234770
- Sigstore integration time:
- Permalink: wiseprobe/patchpal@d9156f6e583aa5a289b8e660c51836094d38d471
- Branch / Tag: refs/tags/0.12.1
- Owner: https://github.com/wiseprobe
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@d9156f6e583aa5a289b8e660c51836094d38d471
- Trigger Event: release