
Production-grade AI code review agent with multi-language support


AI Code Review Agent

An intelligent code analysis system that automatically reviews source code using a combination of static analysis and AI-powered reasoning. The tool scans project files, detects bugs, security issues, and duplicate code, and generates improvement suggestions before code is merged. The system helps developers improve code quality, reduce manual review effort, and catch potential problems early in the development process.

This project demonstrates how AI models can be integrated into developer tooling to assist with automated code reviews and improve software quality.

Project Structure

AI-CODE-REVIEW-AGENT/
│
├── src/                           # Main application source code
│
│   ├── ai/                        # AI interaction layer
│   │   ├── llm_client.py          # Handles communication with the LLM API
│   │   ├── models.py              # AI model configuration
│   │   └── prompts.py             # Prompts used for AI code analysis
│
│   ├── analysis/                  # Static code analysis modules
│   │   ├── static_analysis.py     # Detects bugs and bad patterns
│   │   ├── duplicate_checker.py   # Detects duplicated code blocks
│   │   ├── security_checker.py    # Detects potential security issues
│   │   └── complexity_checker.py  # Detects overly complex code
│
│   ├── scanner/                   # Project scanning module
│   │   └── file_scanner.py        # Detects source files inside a project
│
│   ├── output/                    # Output generation
│   │   ├── report_writer.py       # Generates code review reports
│   │   └── patch_writer.py        # Generates suggested code fixes
│
│   ├── config/                    # Application configuration
│   │   └── settings.py            # Loads environment configuration
│
│   └── main.py                    # Main pipeline that orchestrates analysis
│
├── data/                          # Runtime generated data
│   ├── reports/                   # Generated code review reports
│   └── patches/                   # Suggested code patches
│
├── docker/                        # Docker configuration
│   └── Dockerfile                 # Container definition
│
├── docs/                          # Project documentation
│   └── architecture.md            # System architecture overview
│
├── k8s/                           # Kubernetes deployment configuration
│   └── base/
│       ├── deployment.yaml        # Kubernetes deployment
│       ├── cm.config.yaml         # ConfigMap configuration
│       ├── cm.env.yaml            # Environment configuration
│       ├── cronjob.yaml           # CronJob definition
│       ├── job.yaml               # One-off job definition
│       ├── secret.yaml            # Secret configuration
│       ├── secret.template.yaml   # Secret template
│       └── service.yaml           # Kubernetes service
│
├── tests/
│   ├── test_ai_client.py
│   ├── test_scanner.py
│   └── test_static_analysis.py
│
├── .dockerignore
├── .env                           # Local environment variables (not committed)
├── .env.example                   # Example environment variables
├── .gitignore
├── AGENTS.md
├── azure-pipelines.yml
├── docker-compose.yml
├── README.md
└── requirements.txt               # Python dependencies

System Components Overview

The AI Code Review Agent consists of several modules that work together to analyze code and generate improvement suggestions.

1. File Scanner

  • Purpose: Scans the project directory and collects source code files for analysis.
  • Features:
    • Detects supported programming language files
    • Ignores system folders such as .git, node_modules, and venv
    • Prepares file data for analysis
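
A minimal sketch of how such a scanner could work, using only the standard library. The extension list and ignore set here are illustrative assumptions, not the actual configuration in file_scanner.py:

```python
from pathlib import Path

# Hypothetical defaults; the real file_scanner.py may use different values.
IGNORED_DIRS = {".git", "node_modules", "venv", ".venv", "__pycache__"}
SUPPORTED_EXTENSIONS = {".py", ".js", ".ts", ".java", ".go"}

def scan_project(root: str) -> list[Path]:
    """Recursively collect supported source files, skipping system folders."""
    files = []
    for path in Path(root).rglob("*"):
        # Skip anything inside an ignored directory.
        if any(part in IGNORED_DIRS for part in path.parts):
            continue
        if path.is_file() and path.suffix in SUPPORTED_EXTENSIONS:
            files.append(path)
    return sorted(files)
```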

2. Static Code Analysis

  • Purpose: Performs rule-based analysis similar to tools like SonarQube.
  • Features:
    • Detects logical errors
    • Finds duplicate code
    • Identifies code smells
    • Detects unsafe coding patterns
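
As an illustration of the rule-based approach, a single check might walk the Python AST to flag bare `except:` clauses. This is a hypothetical rule; the actual checks in static_analysis.py may differ:

```python
import ast

def find_bare_excepts(source: str) -> list[int]:
    """Return line numbers of bare `except:` clauses, a common code smell."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # ExceptHandler.type is None exactly when the clause is a bare except.
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
```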

3. AI Code Review Engine

  • Purpose: Uses a Large Language Model to understand code structure and logic.
  • Features:
    • Code understanding
    • Best practice recommendations
    • Refactoring suggestions
    • Logic evaluation
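
A hypothetical client helper might assemble a chat-completion payload for an OpenAI-compatible endpoint as follows. The model name, temperature, and prompt wording are assumptions for illustration, not the project's actual values from llm_client.py or prompts.py:

```python
REVIEW_PROMPT = (
    "You are a code reviewer. Analyse the following file for bugs, "
    "security issues, and style problems. Reply with a numbered list."
)

def build_review_request(filename: str, code: str,
                         model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completion payload for an OpenAI-compatible endpoint."""
    return {
        "model": model,
        "temperature": 0.2,  # low temperature for more deterministic reviews
        "messages": [
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": f"File: {filename}\n\n```\n{code}\n```"},
        ],
    }
```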

4. Fix Suggestion Generator

  • Purpose: Generates potential improvements and refactoring proposals for detected issues.
  • Features:
    • Code refactoring suggestions
    • Performance optimizations
    • Clean code improvements

5. Report Generator

  • Purpose: Creates a structured report containing analysis results.
  • Report includes:
    • Code quality score
    • Detected issues
    • Suggested improvements
    • Summary of findings
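
A report writer along these lines could render that structure as Markdown. Field names and the 0-100 scoring scale are illustrative, not the actual format produced by report_writer.py:

```python
def write_report(project: str, score: int, issues: list[dict]) -> str:
    """Render a minimal Markdown review report (illustrative format)."""
    lines = [
        f"# Code Review Report: {project}",
        f"**Quality score:** {score}/100",
        "",
        "## Detected issues",
    ]
    for issue in issues:
        lines.append(f"- `{issue['file']}:{issue['line']}`: {issue['message']}")
    lines.append("")
    lines.append(f"**Summary:** {len(issues)} issue(s) found.")
    return "\n".join(lines)
```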

Configuration

The AI Code Review Agent uses environment-based configuration. Configuration values are loaded from environment variables defined in a .env file for local development or through Kubernetes ConfigMaps and Secrets in containerized environments. Sensitive information such as API keys must not be stored in the repository. An example configuration is provided in:

.env.example

Key Configuration Areas

  1. AI Model Settings
  • Model API endpoint
  • Model name used for code analysis
  • API authentication key
  • Token limits and temperature settings
  2. Code Analysis Settings
  • Static analysis rules
  • Duplicate detection configuration
  • Security scan settings
  • Code complexity thresholds
  3. Output Configuration
  • Directory for generated reports
  • Directory for generated patches
  • Output format settings
  4. Logging
  • Log level configuration
  • Console logging mode
  • Debug output for development
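
These areas might map to a .env file along the following lines. The variable names here are illustrative; the authoritative list is in .env.example:

```
# AI model settings (names are examples only)
LLM_API_BASE=https://api.example.com/v1
LLM_API_KEY=<your-api-key>
LLM_MODEL=gpt-4o-mini
LLM_MAX_TOKENS=2048
LLM_TEMPERATURE=0.2

# Output configuration
REPORT_DIR=data/reports
PATCH_DIR=data/patches

# Logging
LOG_LEVEL=INFO
```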

Installation & Setup

  • Prerequisites:
    • Python 3.12+
    • Docker (optional, for container deployment)
    • Kubernetes (optional, for cluster deployment)
    • Access to an OpenAI-compatible LLM endpoint

Development Setup

# Clone the repository
git clone <repository-url>
cd AI-CODE-REVIEW-AGENT

# Create and activate virtual environment
python -m venv .venv

# macOS / Linux
source .venv/bin/activate

# Windows
.\.venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure environment variables
cp .env.example .env

# Edit .env with your configuration values

Run the application:

python src/main.py --project <PATH_TO_TARGET_PROJECT>

Example:

python src/main.py --project ./example-project

Container Deployment

Build the Docker image:

docker build -t ai-code-review-agent -f docker/Dockerfile .

Run the container:

docker run --rm \
  -v "$(pwd)/data:/app/data" \
  -v <PATH_TO_TARGET_PROJECT>:/workspace \
  --env-file .env \
  ai-code-review-agent \
  python src/main.py --project /workspace

Kubernetes Deployment

Kubernetes manifests are located in k8s/base/.

Deploy the application:

kubectl apply -f k8s/base/

Check deployment status:

kubectl get pods
kubectl logs deployment/ai-code-review-agent

Configuration values are provided via:

  • ConfigMaps → non-sensitive settings
  • Secrets → API keys and sensitive credentials
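
For example, a Secret template along these lines could hold the API key. Field names are illustrative; see secret.template.yaml for the actual format used by the project:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ai-code-review-agent
type: Opaque
stringData:
  LLM_API_KEY: "<your-api-key>"
```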

Architecture

The system follows a structured pipeline:

  1. File Scanner
  2. Static Analysis
  3. AI Review
  4. Fix Generator
  5. Report Generation

Code Review Processing Flow

┌─────────────────────────────┐
│ User Project Input
│ • Open Project Folder
│ • Start AI Review Command
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Project Scanner
│ • Detect Source Files
│ • Ignore System Folders
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Code Loader
│ • Read Source Code
│ • Prepare File Data
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Validation Process
│ • Check File Format
│ • Error Handling
│ • File Accessibility
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Static Code Analysis
│ • Bug Detection
│ • Security Check
│ • Duplicate Code Detection
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ AI Processing Agent
│ • Code Understanding
│ • Logic Analysis
│ • Best Practice Review
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Issues Detection
│ • Bugs
│ • Code Smells
│ • Performance Problems
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Fix Suggestion Engine
│ • Refactoring Proposal
│ • Improved Code Version
│ • Optimization Suggestions
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ User Approval
│ • Apply Fix
│ • Ignore Suggestion
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Report Generation
│ • Quality Score
│ • Issue Summary
│ • Review Report Creation
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Review Result Output
│ • Terminal Output
│ • Report File Saved
└───────────────┬─────────────┘
                │
                ▼
┌─────────────────────────────┐
│ Process Complete
│ • Code Quality Improved
└─────────────────────────────┘

Process Steps

  1. Project Input
  • The user selects a project directory and starts the AI review process.
  2. Project Scanning
  • The system scans the project recursively and collects supported source files.
  • System folders such as .git, node_modules, and venv are ignored.
  3. Code Loading
  • Source files are safely loaded and prepared for analysis.
  4. Static Code Analysis
  • Rule-based checks detect common bugs, code smells, security issues, and code complexity problems.
  5. Duplicate Detection
  • The system identifies duplicated code blocks across files.
  6. AI Code Review
  • Relevant code snippets are sent to the LLM for deeper analysis.
  • The model evaluates logic, structure, and best practices.
  7. Fix Suggestion Generation
  • The system generates suggested improvements and refactoring proposals.
  8. Report & Patch Generation
  • A final report is generated and stored in data/reports/.
  • Suggested patches are written to data/patches/.
  9. Completion
  • The system outputs a summary in the terminal and finishes the review process.
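
The duplicate-detection step above can be sketched as a windowed hash comparison. Window size and whitespace normalization are illustrative choices, not necessarily those of duplicate_checker.py:

```python
import hashlib
from collections import defaultdict

def find_duplicate_blocks(files: dict[str, str], window: int = 4) -> list[list]:
    """Find identical `window`-line blocks across files (whitespace-normalized).

    Returns groups of (filename, start_line) locations that share a block.
    """
    seen = defaultdict(list)
    for name, source in files.items():
        lines = [ln.strip() for ln in source.splitlines()]
        for i in range(len(lines) - window + 1):
            block = "\n".join(lines[i:i + window])
            if not block.strip():
                continue  # skip all-blank windows
            digest = hashlib.sha256(block.encode()).hexdigest()
            seen[digest].append((name, i + 1))
    # Keep only blocks that appear in more than one location.
    return [locs for locs in seen.values() if len(locs) > 1]
```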

Example Output

Example terminal output after running the review:

Starting AI Code Review...
Scanning project...
Files detected: 12
Running static analysis...
Issues found: 3
Running AI review...
Generating suggestions...
Review completed.
Report saved to: data/reports/
Patches saved to: data/patches/

Support

For questions about the system architecture, configuration, or deployment, contact the development team (eghibner@grenke.de) or refer to the project documentation.
