PatchPatrol
AI-powered commit review system for pre-commit hooks
PatchPatrol is a flexible AI system that analyzes Git commits for code quality, coherence, and commit message clarity using local models (ONNX/llama.cpp) or cloud APIs (Gemini). Choose between fully offline local inference and powerful cloud-based analysis. It integrates seamlessly with pre-commit hooks to provide automated code review before your changes reach the repository.
Features
- Multiple AI Backends: Local (ONNX/llama.cpp) and cloud (Gemini API) options
- Privacy Options: Choose fully offline local models or powerful cloud analysis
- Automatic Model Management: Built-in model registry with automatic downloading
- Zero Setup: Works out-of-the-box in CI/CD environments
- Fast Analysis: Optimized for sub-5-second review cycles (local) or low-latency cloud responses
- Structured Output: Consistent JSON responses with scores and actionable feedback
- Configurable: Soft/hard modes, custom thresholds, and extensible prompts
- Pre-commit Integration: Drop-in compatibility with existing workflows
- Rich Output: Beautiful terminal output with colors and formatting
Quick Start
Installation
# Basic installation
pip install patchpatrol
# With ONNX support
pip install patchpatrol[onnx]
# With llama.cpp support
pip install patchpatrol[llama]
# With Gemini API support
pip install patchpatrol[gemini]
# With all backends
pip install patchpatrol[all]
Basic Usage
- List available models:

patchpatrol list-models

- Test the CLI (models auto-download):

# Review staged changes with auto-downloaded model
patchpatrol review-changes --model granite-3b-code
# Review commit message with minimal model
patchpatrol review-message --model minimal
# Use cloud-based Gemini API (set GEMINI_API_KEY env var)
export GEMINI_API_KEY="your-api-key"
patchpatrol review-changes --model cloud
# Backend is auto-detected, or specify explicitly
patchpatrol review-changes --backend onnx --model distilgpt2-onnx
patchpatrol review-changes --backend llama --model granite-3b-code
patchpatrol review-changes --backend gemini --model gemini-2.0-flash-exp

- Add to your pre-commit config:

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/patchpatrol/patchpatrol
    rev: v0.1.0
    hooks:
      - id: patchpatrol-review-changes
        args: [--model=ci, --soft]  # Uses fast CI-optimized model
      - id: patchpatrol-review-message
        args: [--model=cloud, --threshold=0.8]  # Uses Gemini API
Perfect for CI/CD
# GitHub Actions with local models
- name: AI Code Review (Local)
  run: |
    pip install patchpatrol[llama]
    patchpatrol review-changes --model ci --hard
    # Model downloads automatically on first run

# GitHub Actions with Gemini API
- name: AI Code Review (Gemini)
  env:
    GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
  run: |
    pip install patchpatrol[gemini]
    patchpatrol review-changes --model cloud --hard
    # No model download needed, uses API
Detailed Usage
Command Line Interface
Model Management Commands
# List all available models
patchpatrol list-models
# List only cached models
patchpatrol list-models --cached-only
# Download a specific model
patchpatrol download-model granite-3b-code
# Show cache information
patchpatrol cache-info
# Remove a cached model
patchpatrol remove-model granite-3b-code
# Clean cache (keep only specified models)
patchpatrol clean-cache --keep granite-3b-code --keep minimal
# Test Gemini API connectivity
patchpatrol test-gemini --api-key your-api-key
Review Commands
All review commands support both model names and file paths.
review-changes - Analyze Staged Changes
patchpatrol review-changes [OPTIONS]
Options:
--backend [onnx|llama|gemini] Backend (auto-detected if not specified)
--model NAME_OR_PATH Model name or path (required)
--device [cpu|cuda|cloud] Compute device (default: cpu, cloud for API models)
--threshold FLOAT Minimum acceptance score 0.0-1.0 (default: 0.7)
--temperature FLOAT Sampling temperature 0.0-1.0 (default: 0.2)
--max-new-tokens INTEGER Maximum tokens to generate (default: 512)
--top-p FLOAT Top-p sampling 0.0-1.0 (default: 0.9)
--soft/--hard Soft warnings vs hard blocking (default: soft)
--repo-path PATH Git repository path (default: current)
Examples:
# Using local model names (auto-download)
patchpatrol review-changes --model granite-3b-code
patchpatrol review-changes --model ci --hard
# Using cloud models (Gemini API)
export GEMINI_API_KEY="your-api-key"
patchpatrol review-changes --model cloud
patchpatrol review-changes --model gemini-2.0-flash-exp --backend gemini
# Using file paths
patchpatrol review-changes --model ./models/my-model.onnx # ONNX directory/file
patchpatrol review-changes --model ./models/my-model.gguf # GGUF file
# Backend auto-detection
patchpatrol review-changes --model distilgpt2-onnx # auto-detects onnx backend
patchpatrol review-changes --model granite-3b-code # auto-detects llama backend
patchpatrol review-changes --model cloud # auto-detects gemini backend
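The auto-detection shown above can be approximated with a few suffix and name rules. The sketch below is an illustration of those rules as implied by the examples, not PatchPatrol's actual implementation; the `detect_backend` function and `GEMINI_NAMES` set are hypothetical names.

```python
from pathlib import Path

# Aliases that map to the Gemini backend (from the alias table below);
# this set is an assumption for illustration.
GEMINI_NAMES = {"cloud", "premium"}

def detect_backend(model: str) -> str:
    """Hypothetical sketch of the backend auto-detection rules."""
    suffix = Path(model).suffix
    if suffix == ".onnx" or model.endswith("-onnx"):
        return "onnx"       # ONNX file or ONNX registry model
    if suffix == ".gguf":
        return "llama"      # GGUF files run on llama.cpp
    if model in GEMINI_NAMES or model.startswith("gemini-"):
        return "gemini"     # cloud models
    return "llama"          # registry GGUF models default to llama.cpp

print(detect_backend("distilgpt2-onnx"))       # onnx
print(detect_backend("./models/custom.gguf"))  # llama
print(detect_backend("cloud"))                 # gemini
```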
review-message - Analyze Commit Messages
patchpatrol review-message [OPTIONS] [COMMIT_MSG_FILE]
# Same options as review-changes
# COMMIT_MSG_FILE: Path to commit message file (auto-detected if not provided)
review-complete - Comprehensive Review
patchpatrol review-complete [OPTIONS] [COMMIT_MSG_FILE]
# Reviews both staged changes and commit message together
Pre-commit Integration
PatchPatrol provides several pre-configured hooks:
repos:
  - repo: https://github.com/patchpatrol/patchpatrol
    rev: v0.1.0
    hooks:
      # Standard hooks
      - id: patchpatrol-review-changes   # Review staged changes (hard mode)
      - id: patchpatrol-review-message   # Review commit message (hard mode)
      - id: patchpatrol-review-complete  # Complete review (hard mode)
      # Soft mode hooks (warnings only)
      - id: patchpatrol-changes-soft     # Review changes (soft mode)
      - id: patchpatrol-message-soft     # Review message (soft mode)
Custom Configuration Examples
Team Configuration
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/patchpatrol/patchpatrol
    rev: v0.1.0
    hooks:
      - id: patchpatrol-review-changes
        args:
          - --backend=onnx
          - --model=/shared/models/granite-8b-code/
          - --threshold=0.85
          - --device=cuda
          - --hard
      - id: patchpatrol-review-message
        args:
          - --backend=onnx
          - --model=/shared/models/commit-reviewer-onnx
          - --threshold=0.8
          - --soft
Developer-specific Configuration
# For developers with different hardware/preferences
repos:
  - repo: https://github.com/patchpatrol/patchpatrol
    rev: v0.1.0
    hooks:
      - id: patchpatrol-review-changes
        args:
          - --backend=onnx
          - --model=~/models/granite-3b-code/  # Smaller model for laptops
          - --threshold=0.7
          - --soft  # Warnings only for dev workflow
Models
Built-in Model Registry
PatchPatrol includes a curated registry of tested models that download automatically:
| Model Name | Backend | Size | Description | Best For |
|---|---|---|---|---|
| distilgpt2-onnx | onnx | ~350MB | DistilGPT2 ONNX - Minimal size | Resource-constrained environments |
| granite-3b-code | llama | ~1.8GB | IBM Granite 3B - Fast, lightweight | CI/CD, quick reviews |
| granite-8b-code | llama | ~4.5GB | IBM Granite 8B - Balanced quality | General use |
| codellama-7b | llama | ~4.1GB | Meta CodeLlama 7B - Excellent code review | High-quality analysis |
| codegemma-2b | llama | ~1.6GB | Google CodeGemma 2B - Ultra-fast | Quick local reviews |
| gemini-2.0-flash-exp | gemini | API | Google Gemini 2.0 Flash Experimental - Latest experimental model | Advanced code analysis |
| gemini-2.0-flash | gemini | API | Google Gemini 2.0 Flash - Stable fast model | Quick cloud reviews |
| gemini-2.5-pro | gemini | API | Google Gemini 2.5 Pro - Future model (restricted access) | Future advanced analysis |
Quick Access Aliases
| Alias | Model | Purpose |
|---|---|---|
| ci | granite-3b-code | Fast CI/CD reviews |
| dev | granite-3b-code | Development workflow |
| quality | codellama-7b | High-quality local analysis |
| minimal | codegemma-2b | Smallest/fastest local option |
| cloud | gemini-2.0-flash | Fast cloud-based reviews |
| premium | gemini-2.0-flash-exp | Premium cloud analysis |
Model Management
# List all available models
patchpatrol list-models
# Download a specific model
patchpatrol download-model granite-3b-code
# Check cache status
patchpatrol cache-info
# Clean up old models
patchpatrol clean-cache --keep ci --keep quality
Custom Models
You can still use custom models by providing file paths:
# ONNX models (directory containing model files)
patchpatrol review-changes --model ./my-models/custom-onnx/
# ONNX models (single file)
patchpatrol review-changes --model ./my-models/custom.onnx
# llama.cpp models (GGUF files)
patchpatrol review-changes --model ./my-models/custom.gguf
# Backend auto-detection works with file paths too
patchpatrol review-changes --model ./models/mymodel.onnx # detects onnx backend
patchpatrol review-changes --model ./models/mymodel.gguf # detects llama backend
Model Export (Advanced)
For custom ONNX models:
# Export a HuggingFace model to ONNX
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer
model = ORTModelForCausalLM.from_pretrained(
    "your-model-name",
    export=True,
)
tokenizer = AutoTokenizer.from_pretrained("your-model-name")
model.save_pretrained("./models/custom-onnx")
tokenizer.save_pretrained("./models/custom-onnx")
API Models (Gemini)
For cloud-based models, you need to set up API credentials:
# Set your Gemini API key
export GEMINI_API_KEY="your-api-key-here"
# Test connectivity
patchpatrol test-gemini
# Use in reviews
patchpatrol review-changes --model gemini-2.0-flash-exp
patchpatrol review-changes --model cloud # Uses gemini-2.0-flash
Get your API key from Google AI Studio.
Benefits of API models:
- No local storage required (0 MB disk usage)
- Latest model capabilities
- No GPU needed
- Instant startup (no model loading)
Considerations:
- Requires internet connection
- API costs (typically $0.001-0.01 per review)
- Data sent to Google (code/commits)
- Rate limiting may apply
Output Format
PatchPatrol generates structured JSON responses:
{
  "score": 0.85,
  "verdict": "approve",
  "comments": [
    "Well-structured code changes with clear intent",
    "Good test coverage for new functionality",
    "Consider adding inline documentation for complex logic"
  ]
}
The CLI presents this as rich, colored output:
✓ APPROVE | Score: 0.85
Comments:
1. Well-structured code changes with clear intent
2. Good test coverage for new functionality
3. Consider adding inline documentation for complex logic
✓ Staged changes approved!
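Because the output is plain JSON, a wrapper script can consume it directly, for example to implement its own gating on top of a review. The sketch below assumes only the schema shown above; `gate_commit` is a hypothetical helper, and verdict strings other than "approve" are assumptions.

```python
import json

def gate_commit(raw: str, threshold: float = 0.7, hard: bool = False) -> int:
    """Hypothetical helper: turn a PatchPatrol JSON review into an exit code."""
    review = json.loads(raw)
    approved = review["verdict"] == "approve" and review["score"] >= threshold
    for i, comment in enumerate(review["comments"], start=1):
        print(f"{i}. {comment}")
    if approved:
        return 0              # commit proceeds
    return 1 if hard else 0   # hard mode blocks; soft mode only warns

raw = '{"score": 0.85, "verdict": "approve", "comments": ["Good test coverage"]}'
print(gate_commit(raw, threshold=0.7, hard=True))  # prints the comment, then 0
```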
Configuration Options
Modes
- Soft Mode (--soft): Shows warnings but allows commits to proceed
- Hard Mode (--hard): Blocks commits that don't meet threshold
Thresholds
- 0.9-1.0: Exceptional quality required
- 0.8-0.9: High quality standard
- 0.7-0.8: Good quality (default)
- 0.6-0.7: Basic quality checks
- <0.6: Very permissive
Backend Selection
| Backend | Best For | Requirements |
|---|---|---|
| onnx | High accuracy, custom models | pip install patchpatrol[onnx] |
| llama | Code-optimized models, GGUF support | pip install patchpatrol[llama] |
| gemini | Cloud-based, no local storage | pip install patchpatrol[gemini] + API key |
Advanced Usage
Custom Prompt Templates
Advanced users can customize prompts by modifying environment variables:
export PATCHPATROL_SYSTEM_PROMPT="Your custom system prompt..."
export PATCHPATROL_USER_TEMPLATE_CHANGES="Your custom diff template..."
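A tool reading these variables would typically fall back to a built-in default when no override is set. The sketch below illustrates that pattern; the `load_system_prompt` helper and the default prompt text are hypothetical, only the `PATCHPATROL_SYSTEM_PROMPT` variable name comes from the documentation above.

```python
import os

# Hypothetical built-in default; only the env var name is from the docs.
DEFAULT_SYSTEM_PROMPT = "You are a strict code reviewer. Respond with JSON."

def load_system_prompt() -> str:
    """Return the override from the environment, or the built-in default."""
    return os.environ.get("PATCHPATROL_SYSTEM_PROMPT", DEFAULT_SYSTEM_PROMPT)

os.environ["PATCHPATROL_SYSTEM_PROMPT"] = "Focus on security issues only."
print(load_system_prompt())  # Focus on security issues only.
```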
Performance Tuning
# Fast inference
patchpatrol review-changes \
  --temperature 0.1 \
  --max-new-tokens 256 \
  --device cpu

# High quality
patchpatrol review-changes \
  --temperature 0.3 \
  --max-new-tokens 1024 \
  --device cuda

# Cloud-based with Gemini
GEMINI_API_KEY="your-key" patchpatrol review-changes \
  --backend gemini \
  --model gemini-2.0-flash-exp \
  --temperature 0.1
Repository-specific Configuration
Create .patchpatrol.toml:
[patchpatrol]
backend = "llama" # Can be "onnx", "llama", or "gemini"
model = "granite-3b-code" # Model name from registry
threshold = 0.8
device = "cuda" # Ignored for API models
soft_mode = false
[patchpatrol.prompts]
custom_instructions = "Focus on security and performance..."
[patchpatrol.env]
# Environment variables (optional)
gemini_api_key = "your-api-key" # Or use GEMINI_API_KEY env var
CI/CD Integration
PatchPatrol is perfect for CI/CD pipelines with zero-setup automatic model downloading:
GitHub Actions
name: AI Code Review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install PatchPatrol
        run: pip install patchpatrol[llama]
      - name: Review Changes
        run: patchpatrol review-changes --model ci --hard
GitLab CI
ai_review:
  stage: test
  image: python:3.11
  script:
    - pip install patchpatrol[llama]
    - patchpatrol review-changes --model ci --hard
  only:
    - merge_requests
Jenkins
pipeline {
    agent any
    stages {
        stage('AI Review') {
            steps {
                sh '''
                    pip install patchpatrol[llama]
                    patchpatrol review-changes --model ci --hard
                '''
            }
        }
    }
}
Docker
FROM python:3.11-slim
RUN pip install patchpatrol[llama]
# Models will be cached in /root/.cache/patchpatrol/models
VOLUME ["/root/.cache/patchpatrol"]
ENTRYPOINT ["patchpatrol"]
Performance in CI
Models are cached after first download:
| Model | Download Time | First Run | Subsequent Runs |
|---|---|---|---|
| ci (granite-3b-code) | ~2 min | ~15 sec | ~5 sec |
| minimal (codegemma-2b) | ~1.5 min | ~8 sec | ~3 sec |
| quality (codellama-7b) | ~3 min | ~20 sec | ~7 sec |
Development
Building from Source
git clone https://github.com/patchpatrol/patchpatrol.git
cd patchpatrol
pip install -e .[all]
Running Tests
pytest tests/
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
Requirements
- Python >= 3.10
- Git repository
- One of:
- ONNX Runtime + Transformers (for ONNX backend)
- llama-cpp-python (for llama.cpp backend)
- Google GenerativeAI (for Gemini API backend)
System Requirements
| Component | Minimum | Recommended |
|---|---|---|
| RAM | 4GB | 8GB+ |
| Storage | 2GB | 10GB+ |
| CPU | 2 cores | 4+ cores |
| GPU | None | CUDA-compatible (optional) |
Security & Privacy
Local Models (ONNX/llama.cpp)
- No Network Calls: All inference happens locally
- No Data Collection: Your code never leaves your machine
- Secure by Default: Models run in isolated processes
- Audit Trail: All decisions are logged locally
Cloud Models (Gemini API)
- API Communication: Code/commits sent to Google for analysis
- Privacy Policy: Subject to Google's privacy policies
- Data Handling: Follow Google AI Studio terms of service
- API Security: Uses HTTPS encryption for data transmission
- No Permanent Storage: Google doesn't store your code for training (per API terms)
Choosing Your Privacy Level
- Maximum Privacy: Use local models (--model granite-3b-code, --model ci)
- Balanced Approach: Use cloud for public repos, local for sensitive code
- Cloud Benefits: Latest AI capabilities, no local storage requirements
Troubleshooting
Common Issues
Model Loading Errors:
# Check model path
ls -la ./models/your-model/
# Verify dependencies
pip install patchpatrol[llama] --upgrade
Permission Errors:
# Ensure Git repository access
git status
# Check file permissions
chmod +x ~/.local/bin/patchpatrol
Performance Issues:
# Reduce context size
patchpatrol review-changes --max-new-tokens 256
# Use CPU-optimized models
patchpatrol review-changes --device cpu
Debug Mode
patchpatrol --verbose review-changes --model ./models/debug-model
License
MIT License - see LICENSE file for details.
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Full Docs
Made with care for developers who value code quality
File details
Details for the file patchpatrol-0.3.0.tar.gz.
File metadata
- Download URL: patchpatrol-0.3.0.tar.gz
- Upload date:
- Size: 1.1 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e3e3d8a1e2e947cb0d3b06e2e3eb51cf7baea6079fc8cdda55f4288f20703d41 |
| MD5 | 01a1df496825bef375070ee2ec58fae3 |
| BLAKE2b-256 | 552253461b15c8a6646efce9b21793e116d85eceadc3d4c04f45a4cb57b103ec |
Provenance
The following attestation bundles were made for patchpatrol-0.3.0.tar.gz:
Publisher: publish.yaml on 4383/patchpatrol
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: patchpatrol-0.3.0.tar.gz
- Subject digest: e3e3d8a1e2e947cb0d3b06e2e3eb51cf7baea6079fc8cdda55f4288f20703d41
- Sigstore transparency entry: 612211651
- Permalink: 4383/patchpatrol@e7f1526c1120ef18019970b0d1e9cadcae3101c4
- Branch / Tag: refs/tags/0.3.0
- Owner: https://github.com/4383
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yaml@e7f1526c1120ef18019970b0d1e9cadcae3101c4
- Trigger Event: push
File details
Details for the file patchpatrol-0.3.0-py3-none-any.whl.
File metadata
- Download URL: patchpatrol-0.3.0-py3-none-any.whl
- Upload date:
- Size: 40.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b94d78876754898729971ad3793b784cff6a01e751404856500c3bb58ff3e8fe |
| MD5 | e528f34945f4295b2b8e7ff7b05effd6 |
| BLAKE2b-256 | 452fe68bda630d9a6a376710f398c5b666c4801fe7e69deb0571c0e9559c5299 |
Provenance
The following attestation bundles were made for patchpatrol-0.3.0-py3-none-any.whl:
Publisher: publish.yaml on 4383/patchpatrol
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: patchpatrol-0.3.0-py3-none-any.whl
- Subject digest: b94d78876754898729971ad3793b784cff6a01e751404856500c3bb58ff3e8fe
- Sigstore transparency entry: 612211660
- Permalink: 4383/patchpatrol@e7f1526c1120ef18019970b0d1e9cadcae3101c4
- Branch / Tag: refs/tags/0.3.0
- Owner: https://github.com/4383
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yaml@e7f1526c1120ef18019970b0d1e9cadcae3101c4
- Trigger Event: push