Beatrica is a tool for code review automation using large language models.
Project description
Beatrica
beatrica is an innovative Python package that uses Large Language Models (LLMs) such as OpenAI's GPT and MistralAI to automate the code review process. It analyzes the differences between the current active branch and a specified base branch, processes the code changes, and generates reviews that improve code quality and collaboration.
Installation
Integrate cutting-edge code review automation into your workflow by installing beatrica using pip:
pip install beatrica
Configuring the API Key
For Beatrica to communicate with LLM services like OpenAI or MistralAI, an API key is required. You can provide this API key in two ways:
Using an Environment Variable
Set the API key as an environment variable before running Beatrica. This method is recommended because it keeps your API key secure and out of your command-line history.
For Unix-like systems (Linux/macOS):
export LLM_API_KEY='your_api_key_here'
For Windows Command Prompt:
set LLM_API_KEY=your_api_key_here
For Windows PowerShell:
$env:LLM_API_KEY='your_api_key_here'
After setting the environment variable, you can run Beatrica without explicitly passing the API key in the command:
beatrica --base_branch=main --llm_type=openai
Using the Command Line Argument
If you prefer, you can pass the API key directly as a command-line argument. Be mindful of the security implications, however, such as exposing the key in shell history or logs.
beatrica --base_branch=main --llm_type=openai --api_key='your_api_key_here'
Choose the method that best suits your workflow and security practices.
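If you drive Beatrica from a script, the environment-variable approach composes naturally. Below is a minimal sketch (not part of Beatrica itself) that launches the CLI via Python's subprocess module, passing the key through the child process's environment so it never appears on the command line; it assumes the LLM_API_KEY variable and the flags documented above:

```python
import os
import subprocess

# Minimal sketch: run the Beatrica CLI with the API key supplied via the
# environment rather than a command-line flag. Assumes LLM_API_KEY is
# already set in the parent shell or loaded from a secrets manager.
env = os.environ.copy()
env.setdefault("LLM_API_KEY", "your_api_key_here")  # placeholder value

subprocess.run(
    ["beatrica", "--base_branch=main", "--llm_type=openai"],
    env=env,
    check=True,  # raise CalledProcessError if the review run fails
)
```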
Usage
Beatrica provides a CLI for specifying a base branch, comparing the current active branch against it, and generating insightful reviews. The following examples showcase its usage with various configurations:
Basic Usage
To analyze code differences against the main branch using OpenAI's GPT-4 model (the default), use:
beatrica --base_branch=main --llm_type=openai
This command compares the current active branch against the main branch.
Specifying LLM Type and Model
For a tailored review, specify the LLM (openai or mistralai) and model name:
beatrica --base_branch=main --llm_type=mistralai --model_name=mistral-large-latest
Advanced Configuration
Customize further by specifying the API key (from an environment variable or directly), the maximum number of tokens in the LLM response, and an output file:
beatrica --base_branch=develop --llm_type=openai --model_name=gpt-4-0125-preview --api_key=YOUR_API_KEY --max_tokens=1000 --output=beatrica_review.txt
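To illustrate how such a run might be scripted, here is a hedged sketch that wraps the command above and prints the saved review afterwards; the flags and the beatrica_review.txt file name are taken from the example, and the API key is assumed to come from the LLM_API_KEY environment variable:

```python
import pathlib
import subprocess

# Sketch of a scripted advanced run, mirroring the flags shown above.
# Assumes the API key is provided via the LLM_API_KEY environment variable.
output = pathlib.Path("beatrica_review.txt")

subprocess.run(
    [
        "beatrica",
        "--base_branch=develop",
        "--llm_type=openai",
        "--model_name=gpt-4-0125-preview",
        "--max_tokens=1000",
        f"--output={output}",
    ],
    check=True,
)

# Read back and display the aggregated review Beatrica wrote to disk.
print(output.read_text())
```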
Example Output
An example output of Beatrica reviewing changes between the current active branch and the main branch:
Analyzing differences with base branch main using OpenAI GPT-4-0125-preview...
Tracking changes...
✅ Changes tracked.
✅ Found 2 changes.
Initializing language model...
Generating code embeddings...
✅ Embeddings generated.
Checking if Beatrica can review the changes...
✅ 1 changes found for review.
Starting the review process...
Reviewing changes ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:34
✅ Review process completed.
Aggregating reviews...
Aggregating reviews ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:20
✅ Reviews aggregated.
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Commit ID ┃ Change Description ┃ Review ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ e08642828750dc15786c1166898fe30fa500c17c │ Added comment for test_generate_crsid_with_empty_first function │ The commit message "add comment for │
│ │ │ test_generate_crsid_with_empty_first" could be │
│ │ │ more descriptive to convey the purpose and │
│ │ │ importance of the added comment. A suggestion for │
│ │ │ a more informative commit message is "Explain test │
│ │ │ purpose for CRSID generation with empty first name │
│ │ │ in comments". This provides clear context on the │
│ │ │ change's significance. While adding a comment to │
│ │ │ the test function is a step in the right direction │
│ │ │ for code clarity, the comment itself could be more │
│ │ │ descriptive. It should explain not just what is │
│ │ │ being tested (generate_crsid with an empty first │
│ │ │ name), but also the expected outcome or behavior │
│ │ │ of the function under this condition. This would │
│ │ │ help in understanding the test's intention without │
│ │ │ diving into the implementation details. The test │
│ │ │ function "test_generate_crsid_with_empty_first()" │
│ │ │ could have a more descriptive name that reflects │
│ │ │ the expected behavior or outcome of the test, such │
│ │ │ as │
│ │ │ "test_crsid_generation_fails_with_empty_first_nam… │
│ │ │ or │
│ │ │ "test_crsid_generation_handles_empty_first_name_g… │
│ │ │ depending on the expected behavior. This makes it │
│ │ │ easier to understand the test's purpose at a │
│ │ │ glance. It is recommended to include input │
│ │ │ validation in the generate_crsid function to │
│ │ │ ensure that the arguments passed (first_name and │
│ │ │ last_name) are strings. This would prevent │
│ │ │ potential runtime errors and make the function │
│ │ │ more robust against incorrect usage. Considering │
│ │ │ the importance of ensuring uniqueness in CRSID │
│ │ │ generation, the function should include a │
│ │ │ mechanism to handle scenarios where uniqueness │
│ │ │ cannot be guaranteed. This could be due to │
│ │ │ multiple individuals sharing the same first and │
│ │ │ last names. Implementing a strategy to handle such │
│ │ │ cases gracefully would improve the function's │
│ │ │ reliability. Adding additional test cases to cover │
│ │ │ edge scenarios, such as both first and last names │
│ │ │ being empty or containing special characters or │
│ │ │ numbers, would enhance the test suite's │
│ │ │ comprehensiveness. This ensures that the function │
│ │ │ behaves as expected under various inputs. │
└──────────────────────────────────────────┴─────────────────────────────────────────────────────────────────┴────────────────────────────────────────────────────┘
Features
- Analyzes differences between the current active branch and a specified base branch.
- Integrates with major LLMs for code review: OpenAI and MistralAI.
- Customizable for different base branches, LLMs, and output formats.
- Simplified CLI for ease of use.
Contributing
Contributions, issues, and feature requests are welcome. For more information, check our issues page.
License
beatrica is made available under the MIT License.