PatchPatrol

AI-powered commit review system for pre-commit hooks

PatchPatrol is a flexible AI review system that analyzes Git commits for code quality, coherence, and commit message clarity, using either fully offline local models (ONNX) or cloud APIs (Gemini). It integrates with pre-commit hooks to provide automated code review before your changes reach the repository.

Features

  • Multiple AI Backends: Local (ONNX) and cloud (Gemini API) options
  • Privacy Options: Choose fully offline local models or powerful cloud analysis
  • Automatic Model Management: Built-in model registry with automatic downloading
  • Zero Setup: Works out-of-the-box in CI/CD environments
  • Fast Analysis: Optimized for sub-5-second review cycles (local) or instant cloud responses
  • Structured Output: Consistent JSON responses with scores and actionable feedback
  • Configurable: Soft/hard modes, custom thresholds, and extensible prompts
  • Pre-commit Integration: Drop-in compatibility with existing workflows
  • Rich Output: Beautiful terminal output with colors and formatting

Quick Start

Installation

# Basic installation
pip install patchpatrol

# With ONNX support (quotes prevent shell glob expansion, e.g. in zsh)
pip install "patchpatrol[onnx]"

# With Gemini API support
pip install "patchpatrol[gemini]"

# With all backends
pip install "patchpatrol[all]"

Basic Usage

  1. List available models:

    patchpatrol list-models
    
  2. Test the CLI (models auto-download):

    # Review staged changes with auto-downloaded model
    patchpatrol review-changes --model granite-3b-code
    
    # Review commit message with minimal model
    patchpatrol review-message --model minimal
    
    # Use cloud-based Gemini API (set GEMINI_API_KEY env var)
    export GEMINI_API_KEY="your-api-key"
    patchpatrol review-changes --model cloud
    
    # Backend is auto-detected, or specify explicitly
    patchpatrol review-changes --backend onnx --model granite-3b-code
    patchpatrol review-changes --backend gemini --model gemini-2.0-flash-exp
    
  3. Add to your pre-commit config:

    # .pre-commit-config.yaml
    repos:
      - repo: https://github.com/patchpatrol/patchpatrol
        rev: v0.1.0
        hooks:
          - id: patchpatrol-review-changes
            args: [--model=ci, --soft]  # Uses fast CI-optimized model
          - id: patchpatrol-review-message
            args: [--model=cloud, --threshold=0.8]  # Uses Gemini API
    

Perfect for CI/CD

# GitHub Actions with local models
- name: AI Code Review (Local)
  run: |
    pip install "patchpatrol[onnx]"
    patchpatrol review-changes --model ci --hard
    # Model downloads automatically on first run

# GitHub Actions with Gemini API
- name: AI Code Review (Gemini)
  env:
    GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
  run: |
    pip install "patchpatrol[gemini]"
    patchpatrol review-changes --model cloud --hard
    # No model download needed, uses API

Detailed Usage

Command Line Interface

Model Management Commands

# List all available models
patchpatrol list-models

# List only cached models
patchpatrol list-models --cached-only

# Download a specific model
patchpatrol download-model granite-3b-code

# Show cache information
patchpatrol cache-info

# Remove a cached model
patchpatrol remove-model granite-3b-code

# Clean cache (keep only specified models)
patchpatrol clean-cache --keep granite-3b-code --keep minimal

# Test Gemini API connectivity
patchpatrol test-gemini --api-key your-api-key

Review Commands

All review commands support both model names and file paths.

review-changes - Analyze Staged Changes
patchpatrol review-changes [OPTIONS]

Options:
  --backend [onnx|gemini]    Backend (auto-detected if not specified)
  --model NAME_OR_PATH       Model name or path (required)
  --device [cpu|cuda|cloud]  Compute device (default: cpu, cloud for API models)
  --threshold FLOAT          Minimum acceptance score 0.0-1.0 (default: 0.7)
  --temperature FLOAT        Sampling temperature 0.0-1.0 (default: 0.2)
  --max-new-tokens INTEGER   Maximum tokens to generate (default: 512)
  --top-p FLOAT              Top-p sampling 0.0-1.0 (default: 0.9)
  --soft/--hard              Soft warnings vs hard blocking (default: soft)
  --repo-path PATH           Git repository path (default: current)

Examples:

# Using local model names (auto-download)
patchpatrol review-changes --model granite-3b-code
patchpatrol review-changes --model ci --hard

# Using cloud models (Gemini API)
export GEMINI_API_KEY="your-api-key"
patchpatrol review-changes --model cloud
patchpatrol review-changes --model gemini-2.0-flash-exp --backend gemini

# Using file paths
patchpatrol review-changes --model ./models/my-model/

# Backend auto-detection
patchpatrol review-changes --model granite-3b-code # auto-detects onnx backend
patchpatrol review-changes --model cloud           # auto-detects gemini backend

review-message - Analyze Commit Messages
patchpatrol review-message [OPTIONS] [COMMIT_MSG_FILE]

# Same options as review-changes
# COMMIT_MSG_FILE: Path to commit message file (auto-detected if not provided)

review-complete - Comprehensive Review
patchpatrol review-complete [OPTIONS] [COMMIT_MSG_FILE]

# Reviews both staged changes and commit message together

Pre-commit Integration

PatchPatrol provides several pre-configured hooks:

repos:
  - repo: https://github.com/patchpatrol/patchpatrol
    rev: v0.1.0
    hooks:
      # Standard hooks
      - id: patchpatrol-review-changes      # Review staged changes (hard mode)
      - id: patchpatrol-review-message      # Review commit message (hard mode)
      - id: patchpatrol-review-complete     # Complete review (hard mode)

      # Soft mode hooks (warnings only)
      - id: patchpatrol-changes-soft        # Review changes (soft mode)
      - id: patchpatrol-message-soft        # Review message (soft mode)

Custom Configuration Examples

Team Configuration

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/patchpatrol/patchpatrol
    rev: v0.1.0
    hooks:
      - id: patchpatrol-review-changes
        args:
          - --backend=onnx
          - --model=/shared/models/granite-8b-code/
          - --threshold=0.85
          - --device=cuda
          - --hard
      - id: patchpatrol-review-message
        args:
          - --backend=onnx
          - --model=/shared/models/commit-reviewer-onnx
          - --threshold=0.8
          - --soft

Developer-specific Configuration

# For developers with different hardware/preferences
repos:
  - repo: https://github.com/patchpatrol/patchpatrol
    rev: v0.1.0
    hooks:
      - id: patchpatrol-review-changes
        args:
          - --backend=onnx
          - --model=~/models/granite-3b-code/  # Smaller model for laptops
          - --threshold=0.7
          - --soft                            # Warnings only for dev workflow

Models

Built-in Model Registry

PatchPatrol includes a curated registry of tested models that download automatically:

Model Name           | Backend | Size   | Description                                                       | Best For
granite-3b-code      | onnx    | ~1.8GB | IBM Granite 3B - Fast, lightweight                                | CI/CD, quick reviews
granite-8b-code      | onnx    | ~4.5GB | IBM Granite 8B - Balanced quality                                 | General use
distilgpt2-onnx      | onnx    | ~350MB | DistilGPT2 ONNX - Minimal size                                    | Resource-constrained environments
gemini-2.0-flash-exp | gemini  | API    | Google Gemini 2.0 Flash Experimental - Latest experimental model | Advanced code analysis
gemini-2.0-flash     | gemini  | API    | Google Gemini 2.0 Flash - Stable fast model                       | Quick cloud reviews
gemini-2.5-pro       | gemini  | API    | Google Gemini 2.5 Pro - Future model (restricted access)         | Future advanced analysis

Quick Access Aliases

Alias   | Model                | Purpose
ci      | granite-3b-code      | Fast CI/CD reviews
dev     | granite-3b-code      | Development workflow
quality | granite-8b-code      | High-quality analysis
minimal | distilgpt2-onnx      | Smallest/fastest option
cloud   | gemini-2.0-flash     | Fast cloud-based reviews
premium | gemini-2.0-flash-exp | Premium cloud analysis

Model Management

# List all available models
patchpatrol list-models

# Download a specific model
patchpatrol download-model granite-3b-code

# Check cache status
patchpatrol cache-info

# Clean up old models
patchpatrol clean-cache --keep ci --keep quality
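
Outside the CLI, the cache is just a directory on disk. A minimal sketch for inspecting it yourself, assuming the default location shown in the Docker section (~/.cache/patchpatrol/models); adjust the path if your setup differs:

from pathlib import Path

# Assumed default cache location (see the Docker section); not a documented API
cache_dir = Path.home() / ".cache" / "patchpatrol" / "models"

if cache_dir.is_dir():
    for model_dir in sorted(cache_dir.iterdir()):
        # Sum file sizes to estimate disk usage per cached model
        size_mb = sum(f.stat().st_size for f in model_dir.rglob("*") if f.is_file()) / 1e6
        print(f"{model_dir.name}: {size_mb:.0f} MB")
else:
    print("No PatchPatrol model cache found")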

Custom Models

You can also use your own models by providing file paths:

# ONNX models (directory containing model files)
patchpatrol review-changes --model ./my-models/custom-onnx/

# ONNX models (single file)
patchpatrol review-changes --model ./my-models/custom.onnx

# Backend auto-detection works with file paths too
patchpatrol review-changes --model ./models/mymodel/  # detects onnx backend
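
The detection rule is simple enough to reason about. The sketch below is illustrative only, not PatchPatrol's actual code: cloud aliases and gemini-* names route to the gemini backend, while everything else, including ONNX files and directories, routes to onnx.

from pathlib import Path

def guess_backend(model: str) -> str:
    """Illustrative heuristic only -- NOT PatchPatrol's real implementation."""
    if model in {"cloud", "premium"} or model.startswith("gemini"):
        return "gemini"  # cloud aliases and Gemini model names
    path = Path(model).expanduser()
    if path.suffix == ".onnx" or (path.is_dir() and any(path.glob("*.onnx"))):
        return "onnx"    # a single ONNX file or an export directory
    return "onnx"        # registry names such as granite-3b-code are local

print(guess_backend("cloud"))              # gemini
print(guess_backend("./models/mymodel/"))  # onnx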

Model Export (Advanced)

For custom ONNX models:

# Export a HuggingFace model to ONNX
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

# export=True converts the checkpoint to ONNX while loading it
model = ORTModelForCausalLM.from_pretrained(
    "your-model-name",
    export=True
)
tokenizer = AutoTokenizer.from_pretrained("your-model-name")

# Save the ONNX weights and tokenizer side by side so they can be
# loaded from a single directory
model.save_pretrained("./models/custom-onnx")
tokenizer.save_pretrained("./models/custom-onnx")
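
Once exported, point PatchPatrol at the output directory like any other file-path model: patchpatrol review-changes --model ./models/custom-onnx/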

API Models (Gemini)

For cloud-based models, you need to set up API credentials:

# Set your Gemini API key
export GEMINI_API_KEY="your-api-key-here"

# Test connectivity
patchpatrol test-gemini

# Use in reviews
patchpatrol review-changes --model gemini-2.0-flash-exp
patchpatrol review-changes --model cloud  # Uses gemini-2.0-flash

Get your API key from Google AI Studio (https://aistudio.google.com/).

Benefits of API models:

  • No local storage required (0 MB disk usage)
  • Latest model capabilities
  • No GPU needed
  • Instant startup (no model loading)

Considerations:

  • Requires internet connection
  • API costs (typically $0.001-0.01 per review)
  • Data sent to Google (code/commits)
  • Rate limiting may apply
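
In scripts and CI it is worth failing fast when the key is missing rather than letting the review error out mid-run. A minimal sketch using the documented GEMINI_API_KEY variable and CLI:

import os
import subprocess
import sys

if not os.environ.get("GEMINI_API_KEY"):
    sys.exit("GEMINI_API_KEY is not set; aborting cloud review")

# Runs the documented CLI; check=True propagates a failing review as an error
subprocess.run(["patchpatrol", "review-changes", "--model", "cloud"], check=True)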

Output Format

PatchPatrol generates structured JSON responses:

{
  "score": 0.85,
  "verdict": "approve",
  "comments": [
    "Well-structured code changes with clear intent",
    "Good test coverage for new functionality",
    "Consider adding inline documentation for complex logic"
  ]
}

The CLI presents this as rich, colored output:

✓ APPROVE | Score: 0.85

Comments:
  1. Well-structured code changes with clear intent
  2. Good test coverage for new functionality
  3. Consider adding inline documentation for complex logic

✓ Staged changes approved!
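
If you capture this JSON yourself (for example, from logs or a custom integration), gating on it is straightforward. A minimal sketch over the schema shown above; how you obtain the raw response depends on your setup:

import json

# Example payload in the schema shown above
raw = '{"score": 0.85, "verdict": "approve", "comments": ["Good test coverage"]}'

review = json.loads(raw)
threshold = 0.7  # same semantics as --threshold: minimum acceptance score

passed = review["verdict"] == "approve" and review["score"] >= threshold
print("PASS" if passed else "FAIL")
for i, comment in enumerate(review["comments"], 1):
    print(f"  {i}. {comment}")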

Configuration Options

Modes

  • Soft Mode (--soft): Shows warnings but allows commits to proceed
  • Hard Mode (--hard): Blocks commits that don't meet threshold

Thresholds

  • 0.9-1.0: Exceptional quality required
  • 0.8-0.9: High quality standard
  • 0.7-0.8: Good quality (default)
  • 0.6-0.7: Basic quality checks
  • <0.6: Very permissive
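
For example, with the default --threshold of 0.7 the sample review shown earlier (score 0.85) passes; raising the threshold to 0.9 would block that same commit in hard mode.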

Backend Selection

Backend | Best For                      | Requirements
onnx    | High accuracy, custom models  | pip install "patchpatrol[onnx]"
gemini  | Cloud-based, no local storage | pip install "patchpatrol[gemini]" + API key

Advanced Usage

Custom Prompt Templates

Advanced users can customize prompts by setting environment variables:

export PATCHPATROL_SYSTEM_PROMPT="Your custom system prompt..."
export PATCHPATROL_USER_TEMPLATE_CHANGES="Your custom diff template..."
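
To keep custom prompts alongside the repository rather than in shell profiles, a small wrapper works well. A minimal sketch using the documented variable names; the template contents here, including the {diff} placeholder, are hypothetical:

import os
import subprocess

env = os.environ.copy()
env["PATCHPATROL_SYSTEM_PROMPT"] = "You are a strict, security-focused reviewer."
env["PATCHPATROL_USER_TEMPLATE_CHANGES"] = "Review this diff:\n{diff}"  # placeholder name is hypothetical

# Invoke the documented CLI with the customized prompts in place
subprocess.run(["patchpatrol", "review-changes", "--model", "ci", "--soft"], env=env)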

Performance Tuning

# Fast inference
patchpatrol review-changes \
  --temperature 0.1 \
  --max-new-tokens 256 \
  --device cpu

# High quality
patchpatrol review-changes \
  --temperature 0.3 \
  --max-new-tokens 1024 \
  --device cuda

# Cloud-based with Gemini
GEMINI_API_KEY="your-key" patchpatrol review-changes \
  --backend gemini \
  --model gemini-2.0-flash-exp \
  --temperature 0.1

Repository-specific Configuration

Create .patchpatrol.toml:

[patchpatrol]
backend = "gemini"         # Can be "onnx" or "gemini"
model = "gemini-2.0-flash-exp"   # Model name from registry
threshold = 0.8
device = "cuda"            # Ignored for API models
soft_mode = false

[patchpatrol.prompts]
custom_instructions = "Focus on security and performance..."

[patchpatrol.env]
# Optional; prefer the GEMINI_API_KEY environment variable over
# committing an API key to the repository
gemini_api_key = "your-api-key"
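
The file is plain TOML, so your own tooling can read it too. A minimal sketch using the keys shown above (how PatchPatrol resolves precedence between this file and CLI flags is not documented here):

import tomllib  # Python 3.11+; on 3.10, pip install tomli and import tomli instead

with open(".patchpatrol.toml", "rb") as f:
    config = tomllib.load(f)["patchpatrol"]

# Map the file's keys onto their equivalent CLI flags
args = [
    "--backend", config["backend"],
    "--model", config["model"],
    "--threshold", str(config["threshold"]),
    "--soft" if config.get("soft_mode") else "--hard",
]
print(args)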

CI/CD Integration

PatchPatrol is perfect for CI/CD pipelines with zero-setup automatic model downloading:

GitHub Actions

name: AI Code Review
on: [pull_request]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.11'

      - name: Install PatchPatrol
        run: pip install "patchpatrol[onnx]"

      - name: Review Changes
        run: patchpatrol review-changes --model ci --hard

GitLab CI

ai_review:
  stage: test
  image: python:3.11
  script:
    - pip install "patchpatrol[onnx]"
    - patchpatrol review-changes --model ci --hard
  only:
    - merge_requests

Jenkins

pipeline {
    agent any
    stages {
        stage('AI Review') {
            steps {
                sh '''
                    pip install "patchpatrol[onnx]"
                    patchpatrol review-changes --model ci --hard
                '''
            }
        }
    }
}

Docker

FROM python:3.11-slim

# Git is required for commit analysis; the slim base image does not ship it
RUN apt-get update && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*

RUN pip install "patchpatrol[onnx]"

# Models will be cached in /root/.cache/patchpatrol/models
VOLUME ["/root/.cache/patchpatrol"]

ENTRYPOINT ["patchpatrol"]
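
Assuming the image above is built locally with the tag patchpatrol, a typical invocation mounts a named volume for the model cache plus the repository under review:

docker build -t patchpatrol .
docker run --rm \
  -v patchpatrol-cache:/root/.cache/patchpatrol \
  -v "$PWD":/repo -w /repo \
  patchpatrol review-changes --model ci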

Performance in CI

Models are cached after first download:

Model                     | Download Time | First Run | Subsequent Runs
ci (granite-3b-code)      | ~2 min        | ~15 sec   | ~5 sec
minimal (distilgpt2-onnx) | ~30 sec       | ~5 sec    | ~2 sec
quality (granite-8b-code) | ~3 min        | ~25 sec   | ~8 sec
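
Because all models live in one cache directory (see the Docker section), persisting that directory between CI runs with your CI system's caching mechanism makes every job a "subsequent run".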

Development

Building from Source

git clone https://github.com/patchpatrol/patchpatrol.git
cd patchpatrol
pip install -e ".[all]"

Running Tests

pytest tests/

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Submit a pull request

We welcome contributions! See CONTRIBUTING.md for guidelines.

Requirements

  • Python >= 3.10
  • Git repository
  • One of:
    • ONNX Runtime + Transformers (for ONNX backend)
    • Google GenerativeAI (for Gemini API backend)

System Requirements

Component | Minimum | Recommended
RAM       | 4GB     | 8GB+
Storage   | 2GB     | 10GB+
CPU       | 2 cores | 4+ cores
GPU       | None    | CUDA-compatible (optional)

Security & Privacy

Local Models (ONNX)

  • No Network Calls: All inference happens locally
  • No Data Collection: Your code never leaves your machine
  • Secure by Default: Models run in isolated processes
  • Audit Trail: All decisions are logged locally

Cloud Models (Gemini API)

  • API Communication: Code/commits sent to Google for analysis
  • Privacy Policy: Subject to Google's privacy policies
  • Data Handling: Follow Google AI Studio terms of service
  • API Security: Uses HTTPS encryption for data transmission
  • No Permanent Storage: Google doesn't store your code for training (per API terms)

Choosing Your Privacy Level

  • Maximum Privacy: Use local models (--model granite-3b-code, --model ci)
  • Balanced Approach: Use cloud for public repos, local for sensitive code
  • Cloud Benefits: Latest AI capabilities, no local storage requirements

Troubleshooting

Common Issues

Model Loading Errors:

# Check model path
ls -la ./models/your-model/

# Verify dependencies
pip install --upgrade "patchpatrol[onnx]"

Permission Errors:

# Ensure Git repository access
git status

# Check file permissions
chmod +x ~/.local/bin/patchpatrol

Performance Issues:

# Reduce the number of generated tokens
patchpatrol review-changes --max-new-tokens 256

# Fall back to CPU inference
patchpatrol review-changes --device cpu

Debug Mode

patchpatrol --verbose review-changes --model ./models/debug-model

License

MIT License - see LICENSE file for details.

Made with care for developers who value code quality
