Last translated: 26 Jun 2025


Deutsch | English | Español | français | 日本語 | 한국어 | Português | Русский | 中文

AI Code Review Helper

An LLM-based automated code review assistant. Monitors PR/MR changes via GitHub/GitLab Webhooks, analyzes code using AI, and automatically posts review comments to PR/MR, with support for multiple notification channels.

Watch the demo video on Bilibili (GitHub READMEs do not support embedded video).

Key Features

  • Multi-platform Support: Integrates with GitHub and GitLab Webhooks to monitor Pull Request/Merge Request events.
  • Smart Review Modes:
    • Detailed Review (/github_webhook, /gitlab_webhook): AI analyzes each changed file to identify specific issues. Review comments are posted to PR/MR in a structured format (e.g., pinpointing code lines, issue classification, severity, analysis, and suggestions). The AI model outputs JSON-formatted analysis, which the system converts into individual comments.
    • General Review (/github_webhook_general, /gitlab_webhook_general): AI performs holistic analysis of each changed file and generates a single Markdown-formatted summary comment per file.
  • Automated Workflow:
    • Automatically posts AI review comments (multiple in detailed mode, one per file in general mode) to PR/MR.
    • Posts a summary comment in PR/MR after all files are reviewed.
    • Even if no issues are found, posts a friendly notification and summary.
    • Processes review tasks asynchronously for fast Webhook response.
    • Uses Redis to prevent duplicate reviews for the same Commit.
  • Flexible Configuration:
    • Basic settings via environment variables.
    • Provides a web admin panel (/admin) and API (/config/*) for managing:
      • Webhook Secrets and Access Tokens for GitHub/GitLab repos/projects.
      • LLM parameters (API Key, Base URL, Model).
      • Notification Webhook URLs (WeCom, custom Webhooks).
      • Viewing AI review history.
    • Uses Redis for persistent storage of configurations and review results.
  • Notifications & Logging:
    • Sends review summaries (with PR/MR links, branch info, and result overviews) to WeCom and custom Webhooks.
    • Stores review results in Redis (default 7-day TTL), accessible via admin panel, with automatic cleanup for closed/merged PRs/MRs.
  • Deployment: Supports Docker or direct Python application execution.

🚀 Quick Start

🐳 Quick Deployment

# Use the official image
docker run -d -p 8088:8088 \
  -e ADMIN_API_KEY="your-key" \
  -e OPENAI_API_BASE_URL="https://api.openai.com/v1" \
  -e OPENAI_API_KEY="your-key" \
  -e OPENAI_MODEL="gpt-4o" \
  -e REDIS_HOST="your-redis-host" \
  -e REDIS_PASSWORD="your-redis-pwd" \
  --name ai-code-review-helper \
  dingyufei/ai-code-review-helper:latest
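If you also need a local Redis, a docker-compose file is often more convenient. This is a sketch only, assuming the image name above; the service names and the Redis password are illustrative, and REDIS_SSL_ENABLED is set to false because a plain local Redis container does not speak SSL:

```yaml
# Hypothetical docker-compose sketch -- adjust names and secrets to your environment.
services:
  redis:
    image: redis:7
    command: ["redis-server", "--requirepass", "your-redis-pwd"]
  ai-code-review-helper:
    image: dingyufei/ai-code-review-helper:latest
    ports:
      - "8088:8088"
    environment:
      ADMIN_API_KEY: "your-key"
      OPENAI_API_BASE_URL: "https://api.openai.com/v1"
      OPENAI_API_KEY: "your-key"
      OPENAI_MODEL: "gpt-4o"
      REDIS_HOST: "redis"            # resolves to the redis service above
      REDIS_PASSWORD: "your-redis-pwd"
      REDIS_SSL_ENABLED: "false"     # local Redis container has no TLS
    depends_on:
      - redis
```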

📌 Required environment variables:

  • ADMIN_API_KEY - Admin panel password (default: change_this_unified_secret_key)
  • OPENAI_API_KEY - AI service key
  • REDIS_HOST - Redis address

Configuration

1. Environment Variables (Key Selections)

  • ADMIN_API_KEY: Required. Secret key for admin interface protection (default: change_this_unified_secret_key). Must be changed.
  • OPENAI_API_KEY: Required. OpenAI API key.
  • OPENAI_MODEL: (Default: gpt-4o) OpenAI model to use.
  • OPENAI_API_BASE_URL: (Optional) OpenAI API base URL (format: http(s)://xxxx/v1; default: https://api.openai.com/v1).
  • WECOM_BOT_WEBHOOK_URL: (Optional) WeCom bot Webhook URL.
  • REDIS_HOST: Required. Redis server address. Service won't start without this.
  • REDIS_PORT: (Default: 6379) Redis server port.
  • REDIS_PASSWORD: (Optional) Redis password.
  • REDIS_DB: (Default: 0) Redis database number.
  • REDIS_SSL_ENABLED: (Default: true) Enable SSL for Redis connections. Set to false to disable.
  • (More variables like SERVER_HOST, SERVER_PORT, GITHUB_API_URL, GITLAB_INSTANCE_URL available—see startup logs or source code.)
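As a rough illustration of how these variables map to runtime settings, the sketch below mirrors the defaults documented above. It is not the project's actual loading code; the function name and return shape are assumptions:

```python
import os

def load_settings(env=os.environ):
    """Sketch of env-var loading with the defaults documented above."""
    redis_host = env.get("REDIS_HOST")
    if not redis_host:
        # The service will not start without REDIS_HOST.
        raise RuntimeError("REDIS_HOST is required")
    return {
        "admin_api_key": env.get("ADMIN_API_KEY", "change_this_unified_secret_key"),
        "openai_model": env.get("OPENAI_MODEL", "gpt-4o"),
        "openai_api_base_url": env.get("OPENAI_API_BASE_URL", "https://api.openai.com/v1"),
        "redis_host": redis_host,
        "redis_port": int(env.get("REDIS_PORT", "6379")),
        "redis_db": int(env.get("REDIS_DB", "0")),
        "redis_ssl": env.get("REDIS_SSL_ENABLED", "true").lower() != "false",
    }
```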

2. Admin Panel & API

  • Admin Panel (/admin): Web interface for:
    • Configuring Webhook Secrets and Access Tokens for GitHub repos/GitLab projects.
    • Managing global LLM parameters (OpenAI API Key, Base URL, Model).
    • Setting global notification Webhook URLs (WeCom bot, custom Webhooks).
    • Viewing/managing historical AI review records.
    • All admin operations require authentication via the ADMIN_API_KEY environment variable.
  • Config API: RESTful API (/config/*) for programmatic management of above configurations, also requiring X-Admin-API-Key header authentication.
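A programmatic call might be built like this. Only the X-Admin-API-Key header is documented above; the /config/example path used here is hypothetical, so check the admin panel or source code for real endpoints:

```python
import json
import urllib.request

def config_request(base_url, admin_api_key, path, payload=None):
    """Build an authenticated request to the config API.

    `path` is illustrative -- see the project source for actual endpoints.
    """
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        base_url.rstrip("/") + path,
        data=data,
        headers={
            "X-Admin-API-Key": admin_api_key,
            "Content-Type": "application/json",
        },
        method="POST" if data else "GET",
    )

# Send with: urllib.request.urlopen(config_request(...))
```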

Persistence:

  • Redis (Required): Stores repo/project configs, processed Commit SHAs, and AI review results (default 7-day TTL; records for closed/merged PRs/MRs are cleaned automatically). Service depends heavily on Redis.
  • Environment Variables: Primarily for loading global configs (e.g., LLM Key/URL, Redis connection info, Admin API Key). Runtime modifications via admin panel take immediate effect but revert to env vars after service restart. Recommended to manage persistent global configs via env vars.

Usage

  1. Start Service: Use Docker or run Python app directly, ensuring required env vars (ADMIN_API_KEY, OPENAI_API_KEY, REDIS_HOST, etc.) are set.
  2. Configure Repos/Projects: Via admin panel (/admin) or API, add target GitHub repo/GitLab project configs, including Webhook Secret and Access Token with PR/MR comment permissions.
  3. Set Up Webhook: In GitHub/GitLab repo/project settings:
    • Payload URL: Your service address with appropriate Webhook endpoint (see below).
    • Content type: application/json.
    • Secret: Webhook Secret configured in admin panel.
    • Events: GitHub: "Pull requests"; GitLab: "Merge request events".
  4. Trigger Review: Create/update PR/MR to initiate automated code review.
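When setting the Secret in step 3, note how each platform authenticates deliveries: GitHub signs the request body with HMAC-SHA256 and sends the digest in the X-Hub-Signature-256 header, while GitLab sends the configured secret verbatim in the X-Gitlab-Token header. A minimal sketch of those checks (function names are illustrative, not the project's actual code):

```python
import hashlib
import hmac

def verify_github_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """GitHub sends 'X-Hub-Signature-256: sha256=<hex HMAC of the raw body>'."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

def verify_gitlab_token(secret: str, token_header: str) -> bool:
    """GitLab sends the configured secret verbatim in 'X-Gitlab-Token'."""
    return hmac.compare_digest(secret, token_header)
```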

Review Mode Selection

  • Detailed Review (Endpoints: /github_webhook, /gitlab_webhook):
    • AI attempts granular analysis of each file's changes, aiming to generate structured review comments targeting specific code lines (where possible).
    • Each issue is posted as a separate comment to the PR/MR's relevant code line (if locatable) or as part of a general comment.
    • In this mode, the AI model is instructed to output JSON-formatted analysis, which the system parses into PR/MR comments.
    • Best for models with strong instruction-following and stable JSON output (e.g., GPT-4 series). If comments are missing or suboptimal, check app logs for LLM output compliance with expected JSON format.
  • General Review (Endpoints: /github_webhook_general, /gitlab_webhook_general):
    • AI performs high-level analysis of each changed file, generating a single Markdown-formatted review comment per file.
    • Suitable for macro-level file change assessments or when detailed review mode produces unstable output.
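The exact JSON schema used in detailed mode is not documented here, but given the fields mentioned above (code line, issue classification, severity, analysis, suggestion), the conversion from model output to a review comment can be pictured roughly like this. The schema and field names below are hypothetical:

```python
import json

# Hypothetical example of what the model's JSON analysis might look like.
SAMPLE_ANALYSIS = json.loads("""
[
  {
    "line": 42,
    "category": "error-handling",
    "severity": "major",
    "analysis": "The exception is swallowed silently.",
    "suggestion": "Log the error or re-raise it."
  }
]
""")

def to_comment(issue: dict) -> str:
    """Render one hypothetical issue object as a Markdown review comment."""
    return (
        f"**[{issue['severity'].upper()}] {issue['category']}** (line {issue['line']})\n\n"
        f"{issue['analysis']}\n\n"
        f"Suggestion: {issue['suggestion']}"
    )
```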

Development Mode

# 1. Clone the repository
git clone https://github.com/dingyufei615/ai-code-review-helper.git
cd ai-code-review-helper

# 2. Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Configure environment variables (see .env.example or the Configuration section)

# 5. Start the service
python -m api.ai_code_review_helper

# 6. Run tests (optional)
python -m unittest discover tests

Notes

  • Security: Always use a strong ADMIN_API_KEY and safeguard all Tokens/Secrets.
  • Cost: Monitor API call costs for your LLM service.
  • Logging: Service outputs detailed runtime logs to console for troubleshooting.
  • Redis Dependency: Service heavily relies on Redis for config and result storage.

Contribution

90% of this code was collaboratively developed by Aider + Gemini.
Pull Requests and Issues are welcome.