# AutoMLBench: Automated Machine Learning Benchmarking Library

A Python package for automated ML model benchmarking and comparison.
🚀 AutoMLBench provides a seamless way to compare machine learning models, preprocess data, evaluate performance, and optimize models with hyperparameter tuning.
## **📌 Installation**

Ensure all dependencies are installed:

```bash
pip install pandas scikit-learn numpy matplotlib xgboost lightgbm catboost imbalanced-learn
```

Install from PyPI:

```bash
pip install automlbench
```
For local development:

```bash
git clone https://github.com/AnnNaserNabil/automlbench.git
cd automlbench
pip install -e .
```
## **🔹 Model Comparison Without Hyperparameter Tuning**
The simplest way to compare models using **AutoMLBench**.
### **1️⃣ Load Dataset & Preprocess**
```python
import pandas as pd
from automlbench import preprocess_data, get_models, train_models, evaluate_model, plot_performance

# Load dataset (replace "DATAPATH" with the path or URL of your CSV file)
url = "DATAPATH"
df = pd.read_csv(url)

# Define target column (replace with the name of your target column)
target_column = "Name OF the Target Column"

# Preprocess data
X_train, X_test, y_train, y_test = preprocess_data(df, target_column)
```
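If you want to try the walkthrough without a dataset of your own, a small synthetic frame works as a stand-in. This is a sketch using scikit-learn's `make_classification`; the column names are illustrative, not part of AutoMLBench:

```python
import pandas as pd
from sklearn.datasets import make_classification

# Build a small synthetic binary-classification dataset as a
# stand-in for a real CSV file.
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
df = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])
df["target"] = y  # plays the role of the target column

# df can now be passed to preprocess_data(df, "target") as above.
print(df.shape)  # (200, 6)
```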
### **2️⃣ Train All Default Models**

```python
# Get predefined models
models = get_models()

# Train models without tuning
results = train_models(X_train, X_test, y_train, y_test)

# Print model performance results
print(results)
```
### **3️⃣ Evaluate & Compare Model Performance**

```python
# Evaluate all models
for model_name, model in models.items():
    print(f"Evaluating {model_name}...")
    metrics = evaluate_model(model.fit(X_train, y_train), X_test, y_test)
    print(metrics)

# Plot performance comparison
plot_performance(results, metrics=["Accuracy", "Precision", "Recall", "F1-Score", "RMSE"])
```
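Besides `plot_performance`, the raw results can be tabulated with pandas for a quick textual comparison. This sketch assumes `results` maps model names to metric dicts, which may differ from AutoMLBench's exact return shape; the numbers below are illustrative:

```python
import pandas as pd

# Illustrative results; in the walkthrough these come from train_models(...)
results = {
    "RandomForest": {"Accuracy": 0.91, "F1-Score": 0.90},
    "LogisticRegression": {"Accuracy": 0.87, "F1-Score": 0.86},
    "XGBoost": {"Accuracy": 0.93, "F1-Score": 0.92},
}

# One row per model, one column per metric, best accuracy first
scores = pd.DataFrame(results).T.sort_values("Accuracy", ascending=False)
print(scores)
```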
## **🔹 Model Comparison With Hyperparameter Tuning**
For better performance, use hyperparameter tuning.
### **1️⃣ Get Hyperparameter Grids**

```python
from automlbench import get_hyperparameter_grids, tune_hyperparameters

# Retrieve hyperparameter grids
hyperparameter_grids = get_hyperparameter_grids()
```
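To make the grids concrete: a hyperparameter grid is just a dict mapping parameter names to candidate values, and tuning is typically a cross-validated search over that grid. The sketch below shows the general pattern with plain scikit-learn's `GridSearchCV`; AutoMLBench's `tune_hyperparameters` may differ in its internals:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=120, n_features=5, random_state=0)

# A grid in the same spirit as get_hyperparameter_grids() output:
# parameter name -> list of candidate values.
grid = {"n_estimators": [10, 50], "max_depth": [2, None]}

# Exhaustive search over the grid with 3-fold cross-validation
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=3)
search.fit(X, y)

best_model, best_params = search.best_estimator_, search.best_params_
print(best_params)
```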
### **2️⃣ Tune Models**

```python
best_models = {}

# Tune each model if it has a predefined hyperparameter grid
for model_name, model in models.items():
    if model_name in hyperparameter_grids:
        print(f"Tuning {model_name}...")
        best_model, best_params = tune_hyperparameters(
            model, hyperparameter_grids[model_name], X_train, y_train
        )
        best_models[model_name] = best_model
        print(f"Best params for {model_name}: {best_params}")
    else:
        best_models[model_name] = model  # Use default if no tuning grid
```
### **3️⃣ Train Tuned Models**

```python
# Train models using the best hyperparameters found
tuned_results = train_models(
    X_train, X_test, y_train, y_test,
    selected_models=list(best_models.keys()),
    hyperparams={name: model.get_params() for name, model in best_models.items()},
)

# Display tuned model results
print(tuned_results)
```
### **4️⃣ Evaluate & Compare Tuned Models**

```python
# Evaluate all tuned models
for model_name, model in best_models.items():
    print(f"Evaluating {model_name}...")
    metrics = evaluate_model(model.fit(X_train, y_train), X_test, y_test)
    print(metrics)

# Plot comparison of tuned models
plot_performance(tuned_results, metrics=["Accuracy", "Precision", "Recall", "F1-Score", "RMSE"])
```
## **⚡ Quick Summary**

- ✅ **Basic Comparison** – Train models with default settings.
- ✅ **Hyperparameter Tuning** – Optimize models for better performance.
- ✅ **Evaluation & Visualization** – Compare accuracy, precision, recall, F1-score, and RMSE.
- ✅ **Automated ML Benchmarking** – Quickly assess multiple models with minimal code.
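The workflow above boils down to a loop like the following. This is a plain scikit-learn sketch of what the benchmark does conceptually (train several default models, score each on a held-out split); it is not AutoMLBench's internal code:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a preprocessed dataset
X, y = make_classification(n_samples=300, n_features=8, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(random_state=1),
}

# Fit each model and record its held-out accuracy
results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    results[name] = accuracy_score(y_test, model.predict(X_test))

print(results)  # accuracy per model, each between 0 and 1
```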
## **📌 Contributing**

Contributions are welcome! To contribute:

- Fork the repository
- Create a new branch (`feature-branch`)
- Make changes & test (`pytest tests/`)
- Submit a pull request (PR)
## **📜 License**

AutoMLBench is released under the MIT License.
## Download files
### File details: automlbench-0.1.4.tar.gz

- Download URL: automlbench-0.1.4.tar.gz
- Size: 10.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.8.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `43c0d90e2dcf80b18ae7f3ef303689eae536301d33960b897dc6a8afdd759407` |
| MD5 | `36879530eda942d26688f130be437b2d` |
| BLAKE2b-256 | `4bfded0855bcd95a1529d6f338965f81fd876bdf25ce9ebdef6c1f4b777a65d4` |
### File details: automlbench-0.1.4-py3-none-any.whl

- Download URL: automlbench-0.1.4-py3-none-any.whl
- Size: 12.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.8.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f702e083332da14c7126125f7305d3a8d7e227271e08a759d644e28e2d38a1ba` |
| MD5 | `ef7dacb6e48b458eb34ac7bfc1166680` |
| BLAKE2b-256 | `27d5c9dc050e5f771b21d882cc86c9d07a567d4c5787ce29ef5acf7ea0d30f01` |