OptimAI
OptimAI is a powerful Python module designed to optimize your code by analyzing its performance and providing actionable suggestions. It leverages a large language model (LLM) to give you detailed insights and recommendations based on the profiling data collected during the execution of your code. This module supports various kinds of profilers from the perfwatch package.
Features
- Custom decorators to optimize functions with ease.
- Integration with perfwatch for performance profiling.
- Capture and analyze stdout, function execution time, network usage, function calls, CPU/GPU usage, and more using perfwatch.
- Seamless integration with various LLMs for code optimization suggestions.
- Support for OpenAI, Google Gemini, Hugging Face (offline), Ollama, and Anthropic.
- Prompts optimized with DSPy for strong performance on any LLM.
Installation
You can install OptimAI using pip:
pip install optimizeai
Setup
To use OptimAI, you need to configure it with your preferred LLM provider and API key. Supported LLM providers include Google (Gemini models), OpenAI, Ollama, Hugging Face, and Anthropic. For Ollama you need to have Ollama installed, and the model must be pulled beforehand.
1. Select the LLM provider:
   - For Google Gemini models: llm = "google"
   - For OpenAI models: llm = "openai"
   - For Hugging Face offline: llm = "huggingface"
   - For Anthropic models: llm = "anthropic"
   - For local Ollama models: llm = "ollama"
2. Choose the model: for example model = "gpt-4", model = "gemini-1.5-flash", model = "codegemma", or any other model offered by the chosen LLM provider.
3. Set the API key: use the corresponding API key for the selected LLM provider. No API key is required for local Hugging Face inference or Ollama.
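The key rule above (local providers need no API key) can be sketched as a small helper; the names here are illustrative and not part of the OptimAI API:

```python
# Illustrative helper (not part of the OptimAI API): decide whether the chosen
# provider requires an API key. Hugging Face (offline) and Ollama run locally.
LOCAL_PROVIDERS = {"huggingface", "ollama"}

def needs_api_key(llm: str) -> bool:
    return llm not in LOCAL_PROVIDERS

print(needs_api_key("openai"))   # True
print(needs_api_key("ollama"))   # False
```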
Sample Code
Here's a basic example demonstrating how to use OptimAI to optimize a function:
```python
from optimizeai.decorators.optimize import optimize
from optimizeai.config import Config
from dotenv import load_dotenv
import time
import os

# Load environment variables
load_dotenv()
llm = os.getenv("LLM")
key = os.getenv("API_KEY")
model = os.getenv("MODEL")

# Configure the LLM
llm_config = Config(llm=llm, model=model, key=key)
perfwatch_params = ["line", "cpu", "time"]

# Define a test function to be optimized
@optimize(config=llm_config, profiler_types=perfwatch_params)
def test():
    for _ in range(10):
        time.sleep(0.1)
        print("Hello World!")

if __name__ == "__main__":
    test()
```
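To make the decorator's role concrete, here is a minimal, hypothetical stand-in that only measures wall-clock time; the real @optimize decorator additionally runs the selected perfwatch profilers and sends the collected data to the configured LLM:

```python
import functools
import time

# Hypothetical sketch: a simplified stand-in for an @optimize-style decorator.
# It only measures wall-clock time; the real decorator also collects perfwatch
# profiling data and forwards it to the configured LLM for suggestions.
def optimize_sketch(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} ran in {elapsed:.4f}s")
        return result
    return wrapper

@optimize_sketch
def work():
    return sum(range(1000))

work()
```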
Setting Environment Variables
You can set the environment variables (LLM, API_KEY, MODEL) in a .env file for ease of use:

```
LLM=google
API_KEY=your_google_api_key
MODEL=gemini-1.5-flash
```
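If you prefer not to depend on python-dotenv, the same variables can be read from the process environment with the standard library alone; the fallback values below are illustrative, not defaults provided by OptimAI:

```python
import os

# Read the configuration from the environment; the defaults here are
# illustrative fallbacks, not values OptimAI itself supplies.
llm = os.getenv("LLM", "google")
key = os.getenv("API_KEY", "")
model = os.getenv("MODEL", "gemini-1.5-flash")

print(llm, model)
```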
Upcoming Features
- Improved Context for Code Optimization: Enhance the context provided to the LLM for more accurate and relevant optimization recommendations.
- Report Generation: produce a structured optimization report summarizing the profiling data and the LLM's suggestions.
- Richer Configuration: allow additional LLM parameters (such as temperature or token limits) to be set through the config.
Contributing
We welcome contributions to OptimAI! If you have an idea for a new feature or have found a bug, please open an issue on GitHub. If you'd like to contribute code, please fork the repository and submit a pull request.
Steps to Contribute
1. Fork the repository.
2. Create a new branch (git checkout -b feature-branch).
3. Make your changes.
4. Commit your changes (git commit -m 'Add new feature').
5. Push to the branch (git push origin feature-branch).
6. Open a pull request.
License
OptimAI is licensed under the MIT License. See the LICENSE file for more details.