# RevHive

**AI-Powered Multi-Agent Code Review & Security Scanning System**

RevHive deploys 10 specialized AI agents — 9 reviewing in parallel, 1 synthesizing results — to catch security vulnerabilities, performance bottlenecks, logic bugs, and style issues before they reach production.
- **Structured Output** — Agents return structured JSON via Pydantic schemas, with a regex fallback for LLMs that don't support structured output
- **Semantic Deduplication** — Title matching plus keyword Jaccard similarity prevents duplicate findings across agents
- **LLM Conflict Resolution** — The Coordinator uses the LLM to resolve contradictory assessments between agents
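The keyword-Jaccard deduplication described above can be sketched as follows. This is an illustrative reconstruction, not RevHive's actual internals; the function names, the exact-title shortcut, and the 0.6 threshold are assumptions.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity of two keyword sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_duplicate(title_a: str, title_b: str, threshold: float = 0.6) -> bool:
    """Treat two findings as duplicates if their titles match exactly
    (case-insensitive) or their keyword sets overlap above a threshold."""
    if title_a.strip().lower() == title_b.strip().lower():
        return True
    kw_a = set(title_a.lower().split())
    kw_b = set(title_b.lower().split())
    return jaccard(kw_a, kw_b) >= threshold
```

With this sketch, `is_duplicate("SQL injection in login", "SQL Injection in Login")` collapses the two findings, while unrelated titles fall below the threshold and are kept separate.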
## Risk Score

Every review outputs a risk score (0-100) so you know at a glance whether it's safe to merge:
| Score | Level | Meaning |
|---|---|---|
| 0-20 | ✅ LOW | Safe to merge |
| 21-50 | ⚠️ MEDIUM | Review recommended before merge |
| 51-80 | 🔴 HIGH | Fix before merge |
| 81-100 | 🚨 CRITICAL | Do not merge |
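The table above maps directly to a simple banding function. The sketch below is illustrative only; the function name and the exclusive upper bounds mirror the documented bands, not RevHive's API.

```python
def risk_level(score: int) -> str:
    """Map a 0-100 risk score to the severity bands documented above."""
    if not 0 <= score <= 100:
        raise ValueError("risk score must be in 0-100")
    if score <= 20:
        return "LOW"
    if score <= 50:
        return "MEDIUM"
    if score <= 80:
        return "HIGH"
    return "CRITICAL"
```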
Example output:

```text
🚨 Risk Score: CRITICAL (92/100)
1 Critical · 1 High · 8 Medium · 12 Low
```
## Why RevHive?
| Pain Point | RevHive Solution |
|---|---|
| Manual code review takes 1-2 hours/day | 9 agents review in parallel in under 30 seconds |
| Human reviewers miss subtle bugs | Each agent is a domain expert (security, perf, logic...) |
| "LGTM" culture devalues review | Every PR gets a thorough, objective audit |
| No team-wide quality visibility | Trend analysis tracks code health over time |
## RevHive vs Others
| Feature | RevHive | CodeRabbit | Sourcery | SonarQube | Copilot Review |
|---|---|---|---|---|---|
| AI-driven review | ✅ | ✅ | ✅ | ❌ | ✅ |
| Multi-agent parallel | ✅ 10 | ❌ | ❌ | ❌ | ❌ |
| Chinese LLM support | ✅ 5 providers | ❌ | ❌ | ❌ | ❌ |
| Risk score (0-100) | ✅ | ✅ | ❌ | ✅ | ❌ |
| CLI local-first | ✅ | ❌ | ❌ | ❌ | ❌ |
| Demo mode (no API key) | ✅ | ❌ | ❌ | N/A | ❌ |
| PR inline comments | 🔜 | ✅ | ✅ | ✅ | ✅ |
| Quality gate (status check) | 🔜 | ❌ | ❌ | ✅ | ❌ |
| IDE integration | 🔜 | ❌ | ✅ | ✅ | ✅ |
| Open source | ✅ BSL | Partial | ❌ | ✅ | ❌ |
| Self-hosted | ✅ | ❌ | ❌ | ✅ | ❌ |
🔜 = Coming soon
## Architecture

```text
┌─────────────┐
│ Coordinator │ ← Synthesizes findings, resolves conflicts
└──────┬──────┘
       │ collects results from 9 parallel agents
       ▼
Style  Security  Perf  Logic  Repo  Refactor  Fix  Test  Doc
```
`ConversationReviewer` runs an on-demand, multi-turn deep review of individual findings — challenging assumptions, exploring alternative fixes, and testing edge cases through 5 rounds of dialogue. It is used for critical/high severity findings in batch mode.
## All 10 Agents
| Agent | Role |
|---|---|
| StyleAgent | Naming conventions, formatting, documentation |
| SecurityAgent | SQL injection, XSS, secrets, weak crypto, auth flaws |
| PerformanceAgent | N+1 queries, memory leaks, algorithmic complexity |
| LogicAgent | Edge cases, error handling, race conditions, type safety |
| RepoAgent | Architecture review, cross-file dependencies, tech debt |
| RefactorAgent | Design patterns, code transformation, incremental migration |
| FixAgent | Generates complete corrected code with root cause analysis |
| TestAgent | Unit tests, edge case tests, security regression tests |
| DocAgent | API docs, architecture docs, usage examples |
| Coordinator | Deduplicates (semantic), resolves conflicts via LLM, calculates risk score, generates report |
## Quick Start

### Option A: CLI (30 seconds)

```shell
pip install revhive-ai
revhive demo                        # no API key needed

export LLM_API_KEY=your-api-key
revhive review --file src/main.py   # real review
```
### Option B: Docker

```shell
docker build -t revhive .
docker run --rm -e LLM_API_KEY=your-api-key -v $(pwd):/code revhive review --file /code/src/main.py
```
### Option C: GitHub App (automatic PR reviews)

Install the GitHub App → every PR gets reviewed automatically, no CLI needed.
## Demo Mode

RevHive ships with a fully functional demo mode that runs the complete multi-agent pipeline with mock responses. No API key, no network, no cost — perfect for evaluation.

```shell
python examples/sample_review.py
```
This produces a realistic review report identical in structure to a live MiMo-backed run, including:
- 20+ simulated findings across all 9 review agents
- Severity-ordered report (CRITICAL / HIGH / MEDIUM / LOW)
- Markdown and JSON output formats
## Supported LLM Backends

| Provider | Model | Setup |
|---|---|---|
| MiMo (Xiaomi) | `mimo-v2.5-pro` | `LLM_BASE_URL=https://api.xiaomimimo.com/v1` |
| OpenAI | `gpt-4o` | `LLM_BASE_URL=https://api.openai.com/v1` |
| DeepSeek | `deepseek-chat` | `LLM_BASE_URL=https://api.deepseek.com/v1` |
| Qwen (Alibaba) | `qwen-plus` | `LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1` |
| GLM (Zhipu) | `glm-4` | `LLM_BASE_URL=https://open.bigmodel.cn/api/paas/v4` |
| Kimi | `kimi` | `LLM_BASE_URL=https://api.moonshot.cn/v1` |
| Anthropic | `claude-sonnet-4-20250514` | `pip install -e ".[anthropic]"`, set `ANTHROPIC_API_KEY` |
**Quick preset:** Set `LLM_MODEL` to a preset name (e.g., `openai`, `deepseek`, `qwen`) and RevHive auto-configures the base URL. An explicit `LLM_BASE_URL` takes priority.
MiMo is the default and recommended backend. RevHive is optimized for MiMo's token economics and model capabilities.
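The preset resolution described above amounts to a lookup with an explicit-override rule. The sketch below is a hypothetical reconstruction: the `PRESETS` table is assembled from the documented endpoints, and the function name and signature are illustrative, not RevHive's actual code.

```python
# Endpoints taken from the backends table above.
PRESETS = {
    "openai": "https://api.openai.com/v1",
    "deepseek": "https://api.deepseek.com/v1",
    "qwen": "https://dashscope.aliyuncs.com/compatible-mode/v1",
    "glm": "https://open.bigmodel.cn/api/paas/v4",
    "kimi": "https://api.moonshot.cn/v1",
}
DEFAULT_BASE_URL = "https://api.xiaomimimo.com/v1"  # MiMo default

def resolve_base_url(env: dict[str, str]) -> str:
    """Explicit LLM_BASE_URL wins; otherwise a preset name in LLM_MODEL
    selects a known endpoint; otherwise fall back to the MiMo default."""
    if env.get("LLM_BASE_URL"):
        return env["LLM_BASE_URL"]
    return PRESETS.get(env.get("LLM_MODEL", "").lower(), DEFAULT_BASE_URL)
```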
## Supported Languages
RevHive's LLM-powered agents can review code in any language. Currently optimized for:
| Language | Extensions | Security Patterns | Performance Patterns |
|---|---|---|---|
| Python | .py | ✅ Full | ✅ Full |
| JavaScript/TypeScript | .js .jsx .mjs .ts .tsx | ✅ Full | ✅ Full |
| Go | .go | ✅ Full | ✅ Full |
| Rust | .rs | ✅ Full | ✅ Full |
| Java | .java | ✅ Full | ✅ Full |
| C/C++ | .c .cpp .h .hpp | ✅ Core | ⚠️ Basic |
| Ruby | .rb | ✅ Core | ⚠️ Basic |
| PHP | .php | ✅ Full | ⚠️ Basic |
| Swift | .swift | ✅ Core | ⚠️ Basic |
| Kotlin | .kt | ✅ Core | ⚠️ Basic |
Other languages are supported via LLM understanding but may have fewer specialized patterns.
## Environment Variables

| Variable | Required | Default | Description |
|---|---|---|---|
| `LLM_API_KEY` | Yes | — | API key for the LLM provider |
| `LLM_BASE_URL` | No | `https://api.xiaomimimo.com/v1` | LLM API endpoint |
| `LLM_MODEL` | No | `mimo-v2.5-pro` | Model name |
## Configuration

Create `.revhive.yml` in your project root:

```yaml
model: mimo-v2.5-pro

agents:
  style:
    enabled: true
  security:
    enabled: true
    severity_threshold: medium  # only report medium and above
  performance:
    enabled: true
  logic:
    enabled: true
  repo:
    enabled: true
  refactor:
    enabled: true
  fix:
    enabled: true
  test:
    enabled: true
  doc:
    enabled: false  # disable documentation agent

ignore:  # glob patterns — ** matches any depth
  - "*.min.js"
  - "*.min.css"
  - "vendor/**"
  - "node_modules/**"
  - "migrations/**"
  - "__pycache__/**"
  - ".git/**"
  - ".venv/**"
```
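The ignore semantics above can be sketched with the standard library. This is a rough illustration under two assumptions stated in the comments — it is not RevHive's matcher, and the `dir/**` handling below only covers single-component prefixes like `vendor/**`.

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

def is_ignored(path: str, patterns: list[str]) -> bool:
    """Hypothetical ignore matcher: 'dir/**' excludes everything under
    dir (single-component prefix only), and bare patterns like
    '*.min.js' match the file name at any depth."""
    p = PurePosixPath(path)
    for pat in patterns:
        if pat.endswith("/**"):
            if pat[:-3] in p.parts:
                return True
        elif fnmatch(p.name, pat):
            return True
    return False
```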
## GitHub App Integration

Install the GitHub App for automatic PR reviews. Every PR gets a detailed review report — no CLI needed.

```yaml
# .github/workflows/code-review.yml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install revhive-ai
      - name: Run RevHive Review
        env:
          LLM_API_KEY: ${{ secrets.MIMO_API_KEY }}
          LLM_BASE_URL: https://api.xiaomimimo.com/v1
          LLM_MODEL: mimo-v2.5-pro
        run: |
          revhive review --diff HEAD~1 --format markdown --output review_report.md
      - name: Post Review Comment
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const report = fs.readFileSync('review_report.md', 'utf8');
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: report
            });
```
## Project Structure

```text
src/revhive/
  agents/     # 10 specialized review agents
  graph/      # LangGraph workflow orchestration
  utils/      # Utility modules
  team/       # Batch processing engine
  analysis/   # Historical trend analysis
  demo.py     # Demo mode (no API key required)
  main.py     # CLI entry point
tests/        # 37 tests covering agents, workflow, demo
examples/     # Ready-to-run examples
```
## Security

RevHive takes its own security seriously:

- **Dependency scanning** — `pip-audit` runs in CI on every push and PR to catch known CVEs in dependencies.
- **Static analysis** — `bandit` scans the source code for common security issues (hardcoded secrets, unsafe deserialization, injection risks).
- **Docker hardening** — the container runs as a non-root user (`appuser`). Sensitive files (`.env`, `*.pem`, `.git/`) are excluded via `.dockerignore`.

To run security checks locally:

```shell
pip install pip-audit bandit
pip-audit
bandit -r src/ -ll --skip B101
```
## Contributing

See CONTRIBUTING.md. All contributions welcome!

## License

BSL 1.1 — see LICENSE. Converts to Apache 2.0 on 2030-05-12.