BiteFix is an efficient library designed to streamline Python runtime error debugging with AI-powered decorators.

Project description

BiteFix 🛠️

BiteFix is an advanced and efficient tool designed to revolutionize the error-fixing process using the capabilities of Large Language Models (LLM). It is a Python library that offers a range of decorators to help you debug your code and fix errors in a matter of seconds.

Introduction

By offering decorators, BiteFix empowers you to enhance the error-handling experience in your functions. When a decorated function encounters an error, the decorator orchestrates a team of AI Agents, each specializing in a unique aspect of error resolution. Here's a brief overview of the AI Agents:

🕵️ Python code Diagnosis Expert: Swiftly analyzes the code to pinpoint the root cause of the error.

👨‍💻 Senior Python Developer: Provides insightful ideas on how to rectify the issue within the decorated function's code.

👩‍💼 Lead Python Developer: Meticulously evaluates ideas, selecting the most effective one for implementation.

👨‍💻 Python Code Developer: Skillfully rewrites the code to bring the chosen idea to life, ultimately fixing the error.

BiteFix simplifies the error-fixing journey by seamlessly combining the expertise of these AI Agents, ensuring a smoother and more efficient debugging process for your Python code.
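Conceptually, a decorator in this style wraps the function in a try/except and forwards the failure context to the agent pipeline when an error occurs. Here is a minimal sketch of that mechanism (illustrative only, not BiteFix's actual implementation; `resolve_sketch` is a hypothetical name):

```python
import functools

def resolve_sketch(verbose=True):
    """Illustrative decorator: run the function and, on failure,
    capture the error context an LLM agent crew would analyze."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                if verbose:
                    print(f"{func.__name__} failed: {type(exc).__name__}: {exc}")
                # BiteFix would hand the function's source and traceback to its
                # crew of AI agents at this point; the sketch simply re-raises.
                raise
        return wrapper
    return decorator

@resolve_sketch()
def divide(a, b):
    return a / b

print(divide(10, 2))  # 5.0
```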

Technologies Used

  • Python
  • langchain
  • crewai
  • gpt-4

Getting Started

  • Install BiteFix using pip install bitefix
  • Explore the powerful decorators to streamline your error-fixing process!
  • Check out the examples to understand how BiteFix works.

Happy Coding! 🚀

Examples

Let's take a look at some examples to understand how BiteFix works. BiteFix offers two decorators: @resolve and @resolve_with_openai.

Example 1: Using OpenAI Models

Let's say you have a function that is supposed to return the length of the longest increasing subsequence in a given list. We can use the @resolve_with_openai decorator with our function to help us resolve the error if the function fails during execution.

from bitefix import resolve_with_openai

@resolve_with_openai(openai_api_key="YOUR_OPENAI_KEY", model_name="gpt-4", temperature=0.7, export_dir="export", verbose=True)
def length_of_lis(nums):
    if not nums:
        return 0

    dp = [1] * len(nums)

    for i in range(1, len(nums)):
        for j in range(i):
            if nums[i] > nums[j]:
                dp[i] = dp[i + 1]  # intentional bug: raises IndexError when i is the last index

    return max(dp)

  • openai_api_key: The OpenAI API key used to access OpenAI's models.
  • model_name: The name of the model to be used. By default, it uses OpenAI's gpt-4 model.
  • temperature: The temperature parameter for the model. By default, it is set to 0.7.
  • export_dir: The directory path where BiteFix AI Agents export the Error Resolution Report and the fixed code Python file. The report is exported only if a directory is provided. By default, it is set to None.
  • verbose: If set to True, it prints the debugging steps. By default, it is set to True.

Now, let's call the function and see how it works.

input_list = [10, 9, 2, 5, 3, 7, 101, 18]
result = length_of_lis(input_list)
print(result)

A recording of the function execution is available in the repository (bitefix_decorator_example).

We can see how this decorator walks us step by step through debugging the failed function using a crew of Python AI coders, and provides a solution to the error.
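For reference, the buggy assignment in the example above should be the standard LIS recurrence `dp[i] = max(dp[i], dp[j] + 1)`. A corrected version (written here by hand, not actual BiteFix output) looks like:

```python
def length_of_lis_fixed(nums):
    if not nums:
        return 0

    # dp[i] holds the length of the longest increasing subsequence ending at i
    dp = [1] * len(nums)

    for i in range(1, len(nums)):
        for j in range(i):
            if nums[i] > nums[j]:
                dp[i] = max(dp[i], dp[j] + 1)  # the corrected recurrence

    return max(dp)

print(length_of_lis_fixed([10, 9, 2, 5, 3, 7, 101, 18]))  # 4
```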

Example 2: Using Open-Source Models

If we want to use another Large Language Model instead of OpenAI's, we can use the @resolve decorator with our function to help us resolve the error if the function fails during execution. This also lets us use any custom-trained model for error resolution.

For this example, let's use the OpenHermes model from Ollama. You can download Ollama from its website. Ollama lets you run open-source large language models, such as OpenHermes and Llama 2, locally.

After downloading Ollama, install it. Then run the following command to pull the Openhermes model.

ollama pull openhermes

Now, we can use the model with the decorator @resolve as follows.

from bitefix import resolve
from langchain_community.llms import Ollama

llm = Ollama(model="openhermes")

@resolve(llm=llm, export_dir=None, verbose=True)
def length_of_lis(nums):
    if not nums:
        return 0

    dp = [1] * len(nums)

    for i in range(1, len(nums)):
        for j in range(i):
            if nums[i] > nums[j]:
                dp[i] = dp[i + 1]  # intentional bug: raises IndexError when i is the last index

    return max(dp)

Note: Here, export_dir and verbose are optional parameters.

Similarly, we can use any other Large Language Model with the decorator.

Contributing

Contributions are always welcome!

If you find any issues or have suggestions for improvements, please submit them as GitHub issues or pull requests.

Here are the steps you can follow to contribute to this project:

  1. Fork the project on GitHub.
  2. Clone the forked project to your local machine.
  3. Create a virtual environment using python -m venv venv.
  4. Activate the virtual environment using venv\Scripts\activate on Windows or source venv/bin/activate on Mac/Linux.
  5. Install the dependencies using pip install -r requirements.txt.
  6. Make the required changes.
  7. Format the code using black .
  8. Create a pull request.

Feedback

The bitefix library is just a small step towards making the error-fixing process more efficient using the capabilities of Large Language Models. There is still a long way to go to make it better. Feel free to send me feedback at dataaienthusiast128@gmail.com, along with any suggestions on how to improve this project.

If you like the project, support it by giving this repo a star ⭐.

Contact

LinkedIn · GitHub

License

This project is licensed under the terms of the MIT license

Download files

Source Distribution

bitefix-0.1.0.tar.gz (13.8 kB, source)

Built Distribution

bitefix-0.1.0-py3-none-any.whl (12.3 kB, Python 3)

File details

Details for the file bitefix-0.1.0.tar.gz.

File metadata

  • Download URL: bitefix-0.1.0.tar.gz
  • Size: 13.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.8

File hashes

  • SHA256: adb455daecf829409a433e4e26fb743f44cc23a397e06ae8094bed01c7c15704
  • MD5: 71cddf69076e3eba102405c18f35e350
  • BLAKE2b-256: 46dd6f9da9b2e9304a3f949c4da5b9867cab41c5b99f63c248ed4719bd44ae8f

File details

Details for the file bitefix-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: bitefix-0.1.0-py3-none-any.whl
  • Size: 12.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.8

File hashes

  • SHA256: 6628adb8461c3a2db9bc21d6b9fc81af64e38c36776125cb5bee3041e750c318
  • MD5: 0298caf3e579b317a824f17ce3d6da83
  • BLAKE2b-256: 4ea0fc192014b05414de3d297cc15938b71884c70430a06531c98b00efa8d300
