
AI-powered code review tool for GitHub, GitLab, Bitbucket Cloud, Bitbucket Server, Azure DevOps, and Gitea — powered by LLM providers such as OpenAI, Claude, Gemini, Ollama, Bedrock, OpenRouter, and Azure OpenAI

Project description

AI Review

AI-powered code review tool.


Made with ❤️ by @NikitaFilonov


✨ About

AI Review is a developer tool that brings AI-powered code review directly into your workflow. It helps teams improve code quality, enforce consistency, and speed up the review process.

✨ Key features:

  • Multiple LLM providers — choose between OpenAI, Claude, Gemini, Ollama, Bedrock, OpenRouter, or Azure OpenAI and switch anytime.
  • VCS integration — works out of the box with GitLab, GitHub, Bitbucket Cloud, Bitbucket Server, Azure DevOps, and Gitea.
  • Customizable prompts — adapt inline, context, and summary reviews to match your team’s coding guidelines.
  • Reply modes — AI can now participate in existing review threads, adding follow-up replies in both inline and summary discussions.
  • Flexible configuration — supports YAML, JSON, and ENV, with seamless overrides in CI/CD pipelines.
  • AI Review runs fully client-side — it never proxies or inspects your requests.

AI Review runs automatically in your CI/CD pipeline and posts inline comments, summary reviews, and AI-generated replies directly inside your merge requests. This makes reviews faster and more conversational while keeping them fully under human control.


🧪 Live Preview

Curious how AI Review works in practice? Here are real Pull Requests reviewed entirely by the tool — one per mode:

| Mode | Description | 🐙 GitHub | 🦊 GitLab | 🪣 Bitbucket |
|------|-------------|-----------|-----------|--------------|
| 🧩 Inline | Adds line-by-line comments directly in the diff. Focuses on specific code changes. | View on GitHub | View on GitLab | View on Bitbucket |
| 🧠 Context | Performs a broader analysis across multiple files, detecting cross-file issues and inconsistencies. | View on GitHub | View on GitLab | View on Bitbucket |
| 📄 Summary | Posts a concise high-level summary with key highlights, strengths, and major issues. | View on GitHub | View on GitLab | View on Bitbucket |
| 💬 Inline Reply | Generates a context-aware reply to an existing inline comment thread. Can clarify decisions, propose fixes, or provide code suggestions. | View on GitHub | View on GitLab | View on Bitbucket |
| 💬 Summary Reply | Continues the summary-level review discussion, responding to reviewer comments with clarifications, rationale, or actionable next steps. | View on GitHub | View on GitLab | View on Bitbucket |

👉 Each review was generated automatically via GitHub Actions using the corresponding mode:

ai-review run-inline
ai-review run-context
ai-review run-summary
ai-review run-inline-reply
ai-review run-summary-reply

🚀 Quick Start

Install via pip:

pip install xai-review

📦 Available on PyPI


Or run directly via Docker:

docker run --rm -v $(pwd):/app nikitafilonov/ai-review:latest ai-review run-summary

🐳 Pull from DockerHub

👉 Before running, create a basic configuration file .ai-review.yaml in the root of your project:

llm:
  provider: OPENAI

  meta:
    model: gpt-4o-mini
    max_tokens: 1200
    temperature: 0.3

  http_client:
    timeout: 120
    api_url: https://api.openai.com/v1
    api_token: ${OPENAI_API_KEY}

vcs:
  provider: GITLAB

  pipeline:
    project_id: "1"
    merge_request_id: "100"

  http_client:
    timeout: 120
    api_url: https://gitlab.com
    api_token: ${GITLAB_API_TOKEN}

👉 Running ai-review with this configuration will:

  • Run AI Review against your codebase.
  • Generate inline and/or summary comments (depending on the selected mode).
  • Use your chosen LLM provider (OpenAI GPT-4o-mini in this example).

Note: Running ai-review run executes the full review (inline + summary). To run only one mode, use the dedicated subcommands:

  • ai-review run-inline
  • ai-review run-context
  • ai-review run-summary
  • ai-review run-inline-reply
  • ai-review run-summary-reply

AI Review can be configured via .ai-review.yaml, .ai-review.json, or .env. See ./docs/configs for complete, ready-to-use examples.

Key things you can customize:

  • LLM provider — OpenAI, Gemini, Claude, Ollama, Bedrock, OpenRouter, or Azure OpenAI
  • Model settings — model name, temperature, max tokens
  • VCS integration — works out of the box with GitLab, GitHub, Bitbucket Cloud, Bitbucket Server, Azure DevOps, and Gitea
  • Review policy — which files to include/exclude, review modes
  • Prompts — inline/context/summary prompt templates

👉 Minimal configuration is enough to get started. Use the full reference configs if you want fine-grained control (timeouts, artifacts, logging, etc.).
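When configuring via ENV, nested keys use a double-underscore delimiter, the same convention the CI snippets in this README use. As a sketch, the YAML quick-start config could be expressed as a `.env` file like this (whether `${VAR}` interpolation is resolved depends on your environment or CI runner; the values simply mirror the YAML above):

```shell
# .env — nested config keys flattened with "__" (SECTION__SUBSECTION__KEY)
LLM__PROVIDER=OPENAI
LLM__META__MODEL=gpt-4o-mini
LLM__META__MAX_TOKENS=1200
LLM__META__TEMPERATURE=0.3
LLM__HTTP_CLIENT__TIMEOUT=120
LLM__HTTP_CLIENT__API_URL=https://api.openai.com/v1
LLM__HTTP_CLIENT__API_TOKEN=${OPENAI_API_KEY}

VCS__PROVIDER=GITLAB
VCS__PIPELINE__PROJECT_ID=1
VCS__PIPELINE__MERGE_REQUEST_ID=100
VCS__HTTP_CLIENT__TIMEOUT=120
VCS__HTTP_CLIENT__API_URL=https://gitlab.com
VCS__HTTP_CLIENT__API_TOKEN=${GITLAB_API_TOKEN}
```

In CI, the same keys can be set directly as pipeline variables instead of a file, which keeps secrets out of the repository.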


⚙️ CI/CD Integration

AI Review works out-of-the-box with major CI providers. Use these snippets to run AI Review automatically on Pull/Merge Requests.
Each integration uses environment variables for LLM and VCS configuration.

For full configuration details (timeouts, artifacts, logging, prompt overrides), see ./docs/configs.

🚀 GitHub Actions

Add a workflow like this (manual trigger from Actions tab):

name: AI Review

on:
  workflow_dispatch:
    inputs:
      review-command:
        type: choice
        default: run
        options:
          - run
          - run-inline
          - run-context
          - run-summary
          - run-inline-reply
          - run-summary-reply
          - clear-inline
          - clear-summary
      pull-request-number:
        type: string
        required: true
jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0

      - uses: Nikita-Filonov/ai-review@v0.56.0
        with:
          review-command: ${{ inputs.review-command }}
        env:
          # --- LLM configuration ---
          LLM__PROVIDER: "OPENAI"
          LLM__META__MODEL: "gpt-4o-mini"
          LLM__META__MAX_TOKENS: "15000"
          LLM__META__TEMPERATURE: "0.3"
          LLM__HTTP_CLIENT__API_URL: "https://api.openai.com/v1"
          LLM__HTTP_CLIENT__API_TOKEN: ${{ secrets.OPENAI_API_KEY }}

          # --- GitHub integration ---
          VCS__PROVIDER: "GITHUB"
          VCS__PIPELINE__OWNER: ${{ github.repository_owner }}
          VCS__PIPELINE__REPO: ${{ github.event.repository.name }}
          VCS__PIPELINE__PULL_NUMBER: ${{ inputs.pull-request-number }}
          VCS__HTTP_CLIENT__API_URL: "https://api.github.com"
          VCS__HTTP_CLIENT__API_TOKEN: ${{ secrets.GITHUB_TOKEN }}

🔗 Full example: ./docs/ci/github.yaml

🚀 GitLab CI/CD

For GitLab users:

ai-review:
  when: manual
  stage: review
  image: nikitafilonov/ai-review:latest
  rules:
    - if: '$CI_MERGE_REQUEST_IID'
  script:
    - ai-review run
  variables:
    # --- LLM configuration ---
    LLM__PROVIDER: "OPENAI"
    LLM__META__MODEL: "gpt-4o-mini"
    LLM__META__MAX_TOKENS: "15000"
    LLM__META__TEMPERATURE: "0.3"
    LLM__HTTP_CLIENT__API_URL: "https://api.openai.com/v1"
    LLM__HTTP_CLIENT__API_TOKEN: "$OPENAI_API_KEY"

    # --- GitLab integration ---
    VCS__PROVIDER: "GITLAB"
    VCS__PIPELINE__PROJECT_ID: "$CI_PROJECT_ID"
    VCS__PIPELINE__MERGE_REQUEST_ID: "$CI_MERGE_REQUEST_IID"
    VCS__HTTP_CLIENT__API_URL: "$CI_SERVER_URL"
    VCS__HTTP_CLIENT__API_TOKEN: "$CI_JOB_TOKEN"
  allow_failure: true  # Optional: don't block pipeline if AI review fails

🔗 Full example: ./docs/ci/gitlab.yaml


📘 Documentation

See these folders for reference templates and full configuration options:

  • ./docs/ci — CI/CD integration templates (GitHub Actions, GitLab CI, Bitbucket Pipelines, Jenkins)
  • ./docs/cli — CLI command reference and usage examples
  • ./docs/hooks — hook reference and lifecycle events
  • ./docs/configs — full configuration examples (.yaml, .json, .env)
  • ./docs/prompts — prompt templates for Python/Go (light & strict modes)

⚠️ Privacy & Responsibility Notice

AI Review does not store, log, or transmit your source code to any external service other than the LLM provider explicitly configured in your .ai-review.yaml.

All data is sent directly from your CI/CD environment to the selected LLM API endpoint (e.g. OpenAI, Gemini, Claude, OpenRouter). No intermediary servers or storage layers are involved.

If you use Ollama, requests are sent to your local or self-hosted Ollama runtime (by default http://localhost:11434). This allows you to run reviews completely offline, keeping all data strictly inside your infrastructure.
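For a fully offline setup, the quick-start configuration shape can point at Ollama instead. A sketch, assuming the default runtime URL mentioned above; the model name is illustrative, and the `api_token` field is omitted on the assumption that a local runtime requires none:

```yaml
llm:
  provider: OLLAMA

  meta:
    model: llama3.1   # illustrative — use whichever model your Ollama runtime serves
    max_tokens: 1200
    temperature: 0.3

  http_client:
    timeout: 120
    api_url: http://localhost:11434
```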

⚠️ Please ensure you use proper API tokens and avoid exposing corporate or personal secrets. If you accidentally leak private code or credentials due to incorrect configuration (e.g., using a personal key instead of an enterprise one), it is your responsibility — the tool does not retain or share any data by itself.


🧠 AI Review — open-source AI-powered code reviewer

Download files

Download the file for your platform.

Source Distribution

xai_review_oleg_fork-1.2.0.tar.gz (154.7 kB) — Source

Built Distribution


xai_review_oleg_fork-1.2.0-py3-none-any.whl (303.7 kB) — Python 3

File details

Details for the file xai_review_oleg_fork-1.2.0.tar.gz.

File metadata

  • Download URL: xai_review_oleg_fork-1.2.0.tar.gz
  • Size: 154.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for xai_review_oleg_fork-1.2.0.tar.gz

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | da89e82266646d1421cb0c14d04d71a9eacc6ae89ebdec219fa80ffb6db52819 |
| MD5 | e1341885574e4d7b8c0b774be8c21ad7 |
| BLAKE2b-256 | 8afe98f0a31cc17ca53b80640b635520f16c1d6a67a1ec301c809137ce4df514 |

See more details on using hashes here.

File details

Details for the file xai_review_oleg_fork-1.2.0-py3-none-any.whl.

File metadata

File hashes

Hashes for xai_review_oleg_fork-1.2.0-py3-none-any.whl

| Algorithm | Hash digest |
|-----------|-------------|
| SHA256 | f1e09a801d9ca8ea8fd9024cf01b3bbbd0f016723a22dd1f8aa2e4ae63e023df |
| MD5 | 652a0f1c716da415a2722dd0e5b5e412 |
| BLAKE2b-256 | 30ec0a787e3664f73bd863412e114e5cdb6f1b599efc93b36c591e11419e7108 |

