AI-powered code review for GitHub & GitLab. Works with any LLM: GPT-4o, Claude, Llama, Gemini, Ollama. Free, open source.
AI Code Reviewer
AI-powered code review for GitHub pull requests.
One-line setup. Zero config. Works with any LLM.
Used by [X] developers · [Y] reviews completed · $0.002/review with Groq
Quick Start · /fix · Playground · Pre-Commit · Providers · /review · GitLab · VS Code · Self-Hosted
https://github.com/user-attachments/assets/demo-placeholder
Open a PR. Get AI review in 30 seconds. Inline comments with suggested fixes.
Why teams choose AI Code Reviewer
| Feature | AI Code Reviewer | CodeRabbit | GitHub Copilot | PR-Agent |
|---|---|---|---|---|
| Pricing | Free (bring your key) | $19/user/mo | $19/user/mo | Free (self-host) |
| LLM choice | Any (GPT, Claude, Llama, Gemini, Ollama) | Fixed | Fixed | GPT-4 only |
| Setup | 30 seconds | 5 minutes | Built-in | 15 minutes |
| On-demand `/review` | Yes | Yes | No | Yes |
| Custom instructions | Yes (`.pr-reviewer.yml`) | Yes | No | Yes |
| Suggested code blocks | Yes (one-click apply) | Yes | No | Yes |
| Cost estimation | Yes (pre-review) | No | N/A | No |
| 100% local option | Yes (Ollama) | No | No | No |
| Multi-language reviews | Yes (9 languages) | Yes | No | No |
| Auto-fix `/fix` | Yes (commits fixes to PR) | No | No | No |
| Web playground | Yes (try without install) | No | No | No |
| Pre-commit hook | Yes (block before PR) | No | No | No |
| VS Code extension | Yes | No | Built-in | No |
| GitLab CI support | Yes (native) | No | No | Yes |
| Self-hosted + RAG | Yes (agentic, AST-indexed) | No | No | Yes |
| Retry with backoff | Yes (all providers) | Unknown | N/A | No |
| Webhook idempotency | Yes (SHA dedup) | Unknown | N/A | No |
| Open source | MIT | No | No | Apache-2.0 |
Review summary
Inline comments with suggested fixes
Quick Start
1. Add the workflow (30 seconds)
Create `.github/workflows/ai-review.yml`:

```yaml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize, reopened]

permissions:
  contents: read
  pull-requests: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mara-werils/ai-code-reviewer@v1
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```
2. Add your API key
Go to Settings > Secrets > Actions and add your API key.
3. Open a PR
That's it. AI review lands in 30 seconds.
Want the cheapest option? Use Groq with Llama 3.3 — it's $0.002/review with a free API key.
Auto-Fix with /fix
Type /fix in any PR comment after a review — AI reads the review comments, generates fixes, and commits them directly to your branch.
```text
You: /fix

AI: 👀 Analyzing 3 issues from the review...

AI: ## AI Code Fix
Applied 3 fixes across 2 files.

### src/db.py
- Fixed SQL injection by using parameterized query
- Added input validation for user_id
Commit: abc1234

### src/api.py
- Added null check before accessing response.data
Commit: def5678
```
How it works:

- AI collects all `[CRITICAL]` and `[WARNING]` comments from the last review
- For each file: reads the current content, generates a fixed version via the LLM
- Commits each fix directly to the PR branch
- Posts a summary of what was fixed and what was skipped

Requires `contents: write` permission. Add `/fix` support with the on-demand workflow.
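The steps above can be sketched as a small loop. This is an illustrative sketch only, not the project's actual implementation: `read_file`, `llm_fix`, and `commit` are hypothetical callbacks, and the `[CRITICAL]`/`[WARNING]` tag format is taken from the review comments described above.

```python
import re

def collect_fixable(comments):
    """Keep only review comments tagged [CRITICAL] or [WARNING]."""
    return [c for c in comments if re.match(r"\[(CRITICAL|WARNING)\]", c["body"])]

def apply_fixes(comments, read_file, llm_fix, commit):
    """For each affected file, ask the LLM for a fixed version and commit it."""
    fixed, skipped = [], []
    # Group issues by file so each file is rewritten (and committed) once.
    by_file = {}
    for c in collect_fixable(comments):
        by_file.setdefault(c["path"], []).append(c["body"])
    for path, issues in by_file.items():
        new_content = llm_fix(read_file(path), issues)  # may return None if unsure
        if new_content is None:
            skipped.append(path)   # reported in the summary instead of committed
        else:
            commit(path, new_content)
            fixed.append(path)
    return fixed, skipped
```

The per-file grouping is what keeps the PR history readable: one commit per touched file rather than one per comment.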
Try It Online (No Install)
Paste any public GitHub PR URL and get an AI review instantly:
```bash
# Self-host the playground
git clone https://github.com/mara-werils/ai-code-reviewer.git
cd ai-code-reviewer
pip install . uvicorn starlette
export GROQ_API_KEY=gsk_...
export GITHUB_TOKEN=ghp_...
uvicorn playground.app:app --port 8000

# Or with Docker
docker build -t pr-reviewer-playground playground/
docker run -p 8000:8000 -e GROQ_API_KEY=gsk_... -e GITHUB_TOKEN=ghp_... pr-reviewer-playground
```
Open http://localhost:8000 — paste a PR URL, get a review. Share the link with your team.
Pre-Commit Hook
Review code before every commit. Catches issues before they reach the PR.
```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/mara-werils/ai-code-reviewer
    rev: v0.4.0
    hooks:
      - id: ai-code-review
```

```bash
# Install and run
pip install pre-commit
pre-commit install

# Now every git commit runs AI review on staged changes
git commit -m "feat: add new endpoint"
# AI Code Review: scanning 3 files (42 lines changed)...
#   CRITICAL src/api.py:15
#   SQL injection vulnerability: user input passed directly to query
# Commit blocked: 1 issue(s) at or above threshold.
```
Configuration
| Env Variable | Default | Description |
|---|---|---|
| `PROVIDER` | auto-detect | LLM provider |
| `SEVERITY_THRESHOLD` | `critical` | Block on: critical, warning, suggestion |
| `MAX_COMMENTS` | `10` | Max review comments |
| `REVIEW_STYLE` | `concise` | Review depth |

Set `SEVERITY_THRESHOLD=warning` to also block on warnings. Use `--no-verify` to skip in a hurry.
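The blocking decision reduces to a severity comparison. A minimal sketch, assuming the three severities are ordered critical > warning > suggestion (the hook's real internals may differ):

```python
# Assumed ordering: higher number = more severe.
SEVERITY_ORDER = {"critical": 2, "warning": 1, "suggestion": 0}

def should_block(issues, threshold="critical"):
    """Block the commit if any issue is at or above the threshold severity.

    Returns (blocked, blocking_issues) so the hook can print what tripped it.
    """
    bar = SEVERITY_ORDER[threshold]
    blocking = [i for i in issues if SEVERITY_ORDER[i["severity"]] >= bar]
    return len(blocking) > 0, blocking
```

With the default `critical` threshold, a warning-only review lets the commit through; lowering the threshold to `warning` blocks it.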
On-Demand Review
Type /review in any PR comment to trigger a review on demand.
```yaml
# Add to your workflow to enable /review command
on:
  pull_request:
    types: [opened, synchronize, reopened]
  issue_comment:
    types: [created]

jobs:
  auto-review:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mara-werils/ai-code-reviewer@v1
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

  on-demand:
    if: >
      github.event_name == 'issue_comment' &&
      github.event.issue.pull_request &&
      startsWith(github.event.comment.body, '/review')
    runs-on: ubuntu-latest
    steps:
      - name: React to comment
        uses: actions/github-script@v7
        with:
          script: |
            await github.rest.reactions.createForIssueComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              comment_id: context.payload.comment.id,
              content: 'eyes'
            });
      - name: Get PR ref
        id: pr
        uses: actions/github-script@v7
        with:
          script: |
            const pr = await github.rest.pulls.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number
            });
            core.setOutput('head_ref', pr.data.head.ref);
      - uses: actions/checkout@v4
        with:
          ref: ${{ steps.pr.outputs.head_ref }}
      - uses: mara-werils/ai-code-reviewer@v1
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```
See the full example: examples/on-demand-review.yml
Supported Providers
Use any LLM. Switch providers with one line.
| Provider | Model | Cost/review* | Setup |
|---|---|---|---|
| Groq | Llama 3.3 70B | ~$0.002 | `GROQ_API_KEY` (free tier) |
| Google | Gemini 2.0 Flash | ~$0.003 | `GOOGLE_API_KEY` |
| OpenAI | GPT-4o | ~$0.05 | `OPENAI_API_KEY` |
| Anthropic | Claude Sonnet | ~$0.08 | `ANTHROPIC_API_KEY` |
| Ollama | Any local model | $0.00 | Setup guide |
| Azure OpenAI | GPT-4o | ~$0.05 | `AZURE_OPENAI_API_KEY` + `API_BASE_URL` |
| Any OpenAI-compatible | Any | Varies | `OPENAI_API_KEY` + `API_BASE_URL` |

*Estimated for a ~200-line PR.
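The per-review figures above are straightforward token arithmetic. A back-of-the-envelope sketch of how such an estimate could be computed — the tokens-per-line ratio, prompt overhead, and output allowance are illustrative assumptions, not the tool's actual estimator:

```python
def estimate_cost(diff_lines, in_price_per_mtok, out_price_per_mtok,
                  tokens_per_line=12, output_tokens=800):
    """Rough pre-review cost estimate in dollars.

    Input side: diff tokens plus a fixed prompt/instructions overhead.
    Output side: a fixed allowance for the generated review.
    All constants here are assumptions for illustration.
    """
    input_tokens = diff_lines * tokens_per_line + 1500  # assumed prompt overhead
    cost = (input_tokens / 1_000_000) * in_price_per_mtok \
         + (output_tokens / 1_000_000) * out_price_per_mtok
    return round(cost, 4)
```

At hypothetical prices of $1 per million tokens in and out, a 200-line PR works out to fractions of a cent, which is why cheap providers like Groq land near $0.002 per review.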
Provider examples
Groq (Llama 3.3 — nearly free, recommended to start)
```yaml
- uses: mara-werils/ai-code-reviewer@v1
  with:
    provider: 'groq'
  env:
    GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
```

Get a free API key at console.groq.com.

OpenAI (GPT-4o)

```yaml
- uses: mara-werils/ai-code-reviewer@v1
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Anthropic (Claude)

```yaml
- uses: mara-werils/ai-code-reviewer@v1
  with:
    provider: 'anthropic'
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

Google (Gemini)

```yaml
- uses: mara-werils/ai-code-reviewer@v1
  with:
    provider: 'google'
  env:
    GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
```

Ollama (100% local, free)

```yaml
- uses: mara-werils/ai-code-reviewer@v1
  with:
    provider: 'ollama'
    model: 'llama3.1:8b'
    api_base_url: 'http://your-server:11434/v1'
```

Any OpenAI-compatible API (LiteLLM, vLLM, etc.)

```yaml
- uses: mara-werils/ai-code-reviewer@v1
  with:
    api_base_url: 'https://your-api.example.com/v1'
    model: 'your-model'
  env:
    OPENAI_API_KEY: ${{ secrets.YOUR_API_KEY }}
```
GitLab CI Integration
Works with GitLab merge requests via the CLI. Add to your `.gitlab-ci.yml`:

```yaml
ai-code-review:
  stage: test
  image: python:3.11-slim
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  script:
    - pip install --quiet pr-reviewer
    - pr-reviewer review
      --platform gitlab
      --repo "$CI_PROJECT_PATH"
      --mr "$CI_MERGE_REQUEST_IID"
      --post
  allow_failure: true
```
Required CI/CD variables: `GITLAB_TOKEN` (api scope) plus your LLM API key (`OPENAI_API_KEY`, `GROQ_API_KEY`, etc.).

Works with self-hosted GitLab too — set `GITLAB_URL` to your instance URL.
See the full template: examples/gitlab-ci.yml
Configuration
Action inputs
```yaml
- uses: mara-werils/ai-code-reviewer@v1
  with:
    # LLM provider (openai, anthropic, groq, google, ollama)
    provider: 'openai'

    # Specific model (auto-selected per provider if empty)
    model: ''

    # Review style: concise, thorough, minimal
    review_style: 'concise'

    # Max comments per review (1-50)
    max_comments: '15'

    # Review comment language (en, zh, ja, ko, es, de, fr, ru, pt)
    language: 'en'

    # Custom instructions for your team
    custom_instructions: |
      - We use the Repository pattern
      - Flag any direct SQL queries in controllers
      - We handle PII — security is critical

    # Auto-label PRs (bugfix, feature, refactor, etc.)
    label_pr: 'false'

    # Suggest missing tests
    suggest_tests: 'true'
```
Per-repo config (.pr-reviewer.yml)
Drop this in your repo root for persistent config:
```yaml
# .pr-reviewer.yml
review_style: thorough
max_comments: 20
custom_instructions: |
  - Our API uses FastAPI with Pydantic v2
  - All new endpoints must have OpenAPI docs
  - We're migrating from callbacks to async/await
ignore_paths:
  - "*.lock"
  - "generated/**"
  - "__snapshots__/**"
ignore_titles:
  - "WIP"
  - "DO NOT MERGE"
```
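Conceptually, `ignore_titles` and `ignore_paths` act as two filters: one skips the whole review, the other drops matching files from the diff. A sketch using glob matching — the function name and exact matching semantics are assumptions for illustration, not the tool's real logic:

```python
from fnmatch import fnmatch

def should_skip(pr_title, changed_files, ignore_titles, ignore_paths):
    """Apply the two ignore rules from .pr-reviewer.yml.

    Returns (skip_entirely, files_to_review). A title match skips the whole
    review; otherwise files matching any ignore glob are dropped from the diff.
    """
    if any(t.lower() in pr_title.lower() for t in ignore_titles):
        return True, []
    kept = [f for f in changed_files
            if not any(fnmatch(f, pat) for pat in ignore_paths)]
    return False, kept
```

Note that `fnmatch` anchors patterns at the start of the path, so a pattern like `generated/**` matches files under a top-level `generated/` directory.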
CLI
Review PRs locally or in any CI:
```bash
pip install pr-reviewer

# Review a GitHub PR
export GITHUB_TOKEN=ghp_...
export OPENAI_API_KEY=sk-...
pr-reviewer review --repo owner/name --pr 42

# Review with a specific provider
pr-reviewer review --repo owner/name --pr 42 --provider groq

# Review a local diff
git diff main..HEAD > changes.diff
pr-reviewer review --diff changes.diff

# Review and post back to GitHub
pr-reviewer review --repo owner/name --pr 42 --post
```
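Because it is a plain CLI, it is easy to drive from scripts, for example to batch-review several PRs. A small helper that builds the invocation from the flags documented above (the CLI's exit-code and output conventions aren't documented here, so inspect the returned `CompletedProcess` yourself):

```python
import subprocess

def build_review_cmd(repo, pr, provider=None, post=False):
    """Assemble a pr-reviewer invocation using only the flags shown above."""
    cmd = ["pr-reviewer", "review", "--repo", repo, "--pr", str(pr)]
    if provider:
        cmd += ["--provider", provider]
    if post:
        cmd.append("--post")
    return cmd

def run_review(repo, pr, **kwargs):
    # Requires pr-reviewer on PATH plus the usual env vars (GITHUB_TOKEN, etc.).
    return subprocess.run(build_review_cmd(repo, pr, **kwargs),
                          capture_output=True, text=True)
```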
VS Code Extension
Review code directly in your editor — no PR needed.
- `Cmd+Shift+R` → Review current file
- `Cmd+Shift+D` → Review uncommitted changes
- Right-click → Review selection
Install
Search "AI Code Reviewer" in the VS Code Extensions marketplace, or:
```bash
cd vscode-extension
npm install && npm run compile
# Then: Cmd+Shift+P → "Developer: Install Extension from Location..." → select vscode-extension/
```
Configure
Open Settings → search "AI Code Reviewer":
- Provider: openai, anthropic, groq, google, ollama
- API Key: your key (or use env vars)
- Review Style: concise, thorough, minimal
Issues appear in the Problems panel with severity levels (Error, Warning, Info, Hint).
Works with the same providers as the GitHub Action. $0.002/review with Groq, $0.00 with Ollama.
See the full docs: vscode-extension/README.md
Self-Hosted Mode
For teams needing full control, RAG-powered codebase understanding, and persistent analytics.
```bash
git clone https://github.com/mara-werils/ai-code-reviewer.git
cd ai-code-reviewer
cp .env.example .env   # Edit with your keys
docker-compose up -d
```
What you get
| Feature | Description |
|---|---|
| Agentic review | LangGraph agent with 7 tools to investigate the codebase |
| AST-based indexing | Understands functions, classes, imports (Python AST + Tree-sitter for JS/TS/Go) |
| Hybrid retrieval | Semantic vector search + identifier matching + file neighborhood + RRF fusion |
| Cost tracking | Per-repository budgets and analytics dashboard |
| Feedback loop | Learns from developer reactions to comments |
| SSE streaming | Real-time review progress |
| Webhook idempotency | SHA-based deduplication prevents duplicate reviews |
| Retry + backoff | Handles rate limits, timeouts, and transient failures automatically |
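The "RRF fusion" step in hybrid retrieval refers to Reciprocal Rank Fusion, which merges the ranked lists from the different retrievers (semantic search, identifier matching, file neighborhood) without needing their scores to be comparable. A standard RRF sketch — `k=60` is the conventional constant from the literature; the self-hosted pipeline's exact parameters aren't documented here:

```python
def rrf_fuse(ranked_lists, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).

    Documents ranked highly by several retrievers float to the top even
    when the retrievers' raw scores live on different scales.
    """
    scores = {}
    for ranking in ranked_lists:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

A chunk that appears in both the semantic and the identifier ranking beats a chunk that tops only one of them, which is exactly the behavior you want when fusing heterogeneous retrievers.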
Architecture
```mermaid
graph LR
    A[GitHub Webhook] --> B[FastAPI]
    B --> C[Redis Queue]
    C --> D[Review Worker]
    D --> E[Classify PR]
    E --> F[Retrieve Context]
    F --> G[LangGraph Agent]
    G --> H{Tools}
    H --> I[search_codebase]
    H --> J[find_usages]
    H --> K[check_tests]
    H --> L[read_file]
    G --> M[Post Review]
    N[Indexer] --> O[(PostgreSQL + pgvector)]
    F --> O
```
FAQ
Is it free?
The tool itself is 100% free and open source (MIT). You pay only for LLM API calls. With Groq's free tier or Ollama, the total cost is $0.
Is my code sent to third parties?
Your code is sent to whichever LLM provider you choose. If you need full privacy, use Ollama with a local model — nothing leaves your network.
Does it work with private repos?
Yes. The GitHub Action uses your repository's built-in GITHUB_TOKEN, which has access to private repos.
Can I use it with GitLab / Bitbucket?
GitLab is fully supported! Use the CLI in your .gitlab-ci.yml with --platform gitlab. See the GitLab CI Integration section. Bitbucket support is on the roadmap.
How do I avoid noisy reviews?
- Use
review_style: minimalfor less verbose reviews - Set
severity_threshold: warningto skip info-level comments - Add
custom_instructionsto teach it your team's conventions - Use
ignore_pathsto skip generated files
Can I review in Chinese / Japanese / Korean / Spanish?
Yes! Set `language: 'zh'` (or `ja`, `ko`, `es`, `de`, `fr`, `ru`, `pt`).
How is this different from PR-Agent?
- Multi-LLM: Works with any LLM, not just GPT-4. Switch with one line.
- Simpler setup: One workflow file, no config needed.
- Cheaper: Groq at $0.002/review vs GPT-4 at ~$0.10/review.
- Cost estimation: Know the cost before running a review.
- Better reliability: Retry with exponential backoff, webhook idempotency.
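The two reliability mechanisms mentioned above can be sketched together: a generic exponential-backoff wrapper for transient LLM/API failures, and a SHA-keyed dedup check so a redelivered webhook doesn't trigger a duplicate review. Names and details here are illustrative, not the project's actual code:

```python
import hashlib
import time

def with_backoff(fn, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call fn, retrying on exceptions with doubling delays (1s, 2s, 4s, ...)."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error
            sleep(base_delay * (2 ** attempt))

_seen = set()  # in production this would live in Redis/Postgres, not memory

def already_reviewed(repo, head_sha):
    """SHA-based idempotency: True if this repo@commit was already processed."""
    key = hashlib.sha256(f"{repo}@{head_sha}".encode()).hexdigest()
    if key in _seen:
        return True
    _seen.add(key)
    return False
```

Keying on the head commit SHA means a force-push (new SHA) still gets a fresh review, while GitHub's at-least-once webhook redelivery of the same push does not.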
Badge
Show that your project uses AI code reviews:
```markdown
[](https://github.com/mara-werils/ai-code-reviewer)
```
Roadmap
- Multi-LLM support (GPT, Claude, Llama, Gemini, Ollama)
- Inline comments with suggested fixes
- On-demand `/review` command
- Self-hosted mode with RAG
- CLI tool
- Cost estimation
- GitLab integration
- Bitbucket integration
- PR chat — ask questions about the PR
- Auto-fix `/fix` — AI commits fixes directly to your PR branch
- Learning from feedback
- VS Code extension
- JetBrains plugin
- Slack/Discord notifications
Contributing
Contributions are welcome! See CONTRIBUTING.md for guidelines.
```bash
git clone https://github.com/mara-werils/ai-code-reviewer.git
cd ai-code-reviewer
pip install -e ".[dev]"
pytest tests/ -v
ruff check .
```
License
MIT — use it however you want.
If this saves you time, give it a star. It helps others find the project.
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file pr_reviewer-0.4.0.tar.gz.
File metadata
- Download URL: pr_reviewer-0.4.0.tar.gz
- Upload date:
- Size: 25.0 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `4ae10dfbad5b1c426256976ba770893bca724d37389ac60b2b4306af8c9acc78` |
| MD5 | `792901d2d7297d46b72fb61e76c7844d` |
| BLAKE2b-256 | `75daf81851815fbc90f5f8d6a51c6409c0e7de6da1e4fb6939f1dd56e92f9dab` |
File details
Details for the file pr_reviewer-0.4.0-py3-none-any.whl.
File metadata
- Download URL: pr_reviewer-0.4.0-py3-none-any.whl
- Upload date:
- Size: 88.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `1c725be635cdf7249b01658a63cbe5ab9b017e2a9e4edceacb8fc8c4371e753d` |
| MD5 | `3d4e8d06aebae26ab8357761de12a6db` |
| BLAKE2b-256 | `400e8b019bec1d4c97be702901fdba61aa06f2460673e509d2d46df761ba0648` |