
# AutoMLBench — Automated Machine Learning Benchmarking Library

A Python package for automated ML model benchmarking and comparison.

🚀 AutoMLBench provides a seamless way to compare machine learning models, preprocess data, evaluate performance, and optimize models with hyperparameter tuning.


## 📌 **Installation**

Ensure all dependencies are installed:

```bash
pip install pandas scikit-learn numpy matplotlib xgboost lightgbm catboost imbalanced-learn
```

Install from PyPI:

```bash
pip install automlbench
```

For local development:

```bash
git clone https://github.com/AnnNaserNabil/automlbench.git
cd automlbench
pip install -e .
```

## **Model Comparison Without Hyperparameter Tuning**
The simplest way to compare models using **AutoMLBench**.

### **1️⃣ Load Dataset & Preprocess**

```python
import pandas as pd
from automlbench import preprocess_data, get_models, train_models, evaluate_model, plot_performance

# Load dataset (replace with the path or URL of your CSV file)
url = "DATAPATH"
df = pd.read_csv(url)

# Define the target column (replace with your dataset's target column name)
target_column = "your_target_column"

# Split and preprocess the data
X_train, X_test, y_train, y_test = preprocess_data(df, target_column)
```
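The exact transformations `preprocess_data` applies are not documented here. As a rough mental model (an assumption about the approach, not the library's actual implementation), a typical preprocessing step splits the data and scales features with plain scikit-learn:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Toy dataset standing in for a real CSV (hypothetical columns)
df = pd.DataFrame({
    "feature_a": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
    "feature_b": [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0],
    "target":    [0, 0, 0, 0, 1, 1, 1, 1],
})

X = df.drop(columns=["target"])
y = df["target"]

# Hold out a test set, stratified so both classes appear in each split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

# Scale features using statistics from the training split only
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

print(X_train.shape, X_test.shape)  # (6, 2) (2, 2)
```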

### **2️⃣ Train All Default Models**

```python
# Get predefined models
models = get_models()

# Train models without tuning
results = train_models(X_train, X_test, y_train, y_test)

# Print model performance results
print(results)
```
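Conceptually, `train_models` fits each candidate model and collects a score for it. A minimal manual equivalent (a sketch of the idea with plain scikit-learn, not the library's internals) looks like:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data for illustration
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small model zoo, analogous in shape to what get_models() might return
models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Fit each model and record its test accuracy
results = {name: model.fit(X_train, y_train).score(X_test, y_test)
           for name, model in models.items()}
print(results)
```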

### **3️⃣ Evaluate & Compare Model Performance**

```python
# Evaluate all models
for model_name, model in models.items():
    print(f"Evaluating {model_name}...")
    metrics = evaluate_model(model.fit(X_train, y_train), X_test, y_test)
    print(metrics)

# Plot performance comparison
plot_performance(results, metrics=["Accuracy", "Precision", "Recall", "F1-Score", "RMSE"])
```
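The metric names passed to `plot_performance` all have direct scikit-learn counterparts. For reference, here is how those quantities are computed from predictions (illustrative labels only):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, mean_squared_error)

# Example true labels and predictions (illustrative only)
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

metrics = {
    "Accuracy":  accuracy_score(y_true, y_pred),
    "Precision": precision_score(y_true, y_pred),
    "Recall":    recall_score(y_true, y_pred),
    "F1-Score":  f1_score(y_true, y_pred),
    # RMSE is unusual for classification but is listed among the plot metrics
    "RMSE":      mean_squared_error(y_true, y_pred) ** 0.5,
}
print(metrics)  # Accuracy 0.75, Precision 0.8, Recall 0.8, F1 0.8, RMSE 0.5
```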

## 🔹 **Model Comparison With Hyperparameter Tuning**

For better performance, use hyperparameter tuning.

### **1️⃣ Get Hyperparameter Grids**

```python
from automlbench import get_hyperparameter_grids, tune_hyperparameters

# Retrieve predefined hyperparameter grids
hyperparameter_grids = get_hyperparameter_grids()
```
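The actual grids returned by `get_hyperparameter_grids()` are not listed here. Conventionally, such a grid is a dict mapping parameter names to candidate values, keyed by model name — the values below are hypothetical, shown only to illustrate the shape:

```python
# Hypothetical grids keyed by model name — illustrative shape only,
# not the library's actual values
hyperparameter_grids = {
    "RandomForest": {
        "n_estimators": [100, 200, 500],
        "max_depth": [None, 5, 10],
    },
    "GradientBoosting": {
        "learning_rate": [0.01, 0.1],
        "n_estimators": [100, 200],
    },
}

# Number of candidate combinations a grid search would try per model
for name, grid in hyperparameter_grids.items():
    combos = 1
    for values in grid.values():
        combos *= len(values)
    print(name, combos)  # RandomForest 9, GradientBoosting 4
```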

### **2️⃣ Tune Models**

```python
best_models = {}

# Tune each model that has a predefined hyperparameter grid
for model_name, model in models.items():
    if model_name in hyperparameter_grids:
        print(f"Tuning {model_name}...")
        best_model, best_params = tune_hyperparameters(
            model, hyperparameter_grids[model_name], X_train, y_train
        )
        best_models[model_name] = best_model
        print(f"Best params for {model_name}: {best_params}")
    else:
        best_models[model_name] = model  # Fall back to defaults if no tuning grid exists
```
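The internals of `tune_hyperparameters` are not shown. A common way to implement this kind of search (an assumption about the approach, not the library's code) is scikit-learn's `GridSearchCV`, which also returns a best estimator and its parameters:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic training data for illustration
X_train, y_train = make_classification(n_samples=120, n_features=8, random_state=0)

grid = {"n_estimators": [50, 100], "max_depth": [None, 5]}

# Exhaustive cross-validated search over the grid
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=3)
search.fit(X_train, y_train)

best_model, best_params = search.best_estimator_, search.best_params_
print(best_params)
```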

### **3️⃣ Train Tuned Models**

```python
# Train models using the best hyperparameters found
tuned_results = train_models(
    X_train, X_test, y_train, y_test,
    selected_models=list(best_models.keys()),
    hyperparams={name: model.get_params() for name, model in best_models.items()}
)

# Display tuned model results
print(tuned_results)
```

### **4️⃣ Evaluate & Compare Tuned Models**

```python
# Evaluate all tuned models
for model_name, model in best_models.items():
    print(f"Evaluating {model_name}...")
    metrics = evaluate_model(model.fit(X_train, y_train), X_test, y_test)
    print(metrics)

# Plot comparison of tuned models
plot_performance(tuned_results, metrics=["Accuracy", "Precision", "Recall", "F1-Score", "RMSE"])
```

## ⚡ **Quick Summary**

- **Basic Comparison** – Train models with default settings.
- **Hyperparameter Tuning** – Optimize models for better performance.
- **Evaluation & Visualization** – Compare accuracy, precision, recall, F1-score, and RMSE.
- **Automated ML Benchmarking** – Quickly assess multiple models with minimal code.


## 📌 **Contributing**

Contributions are welcome! To contribute:

1. Fork the repository.
2. Create a new branch (`feature-branch`).
3. Make your changes and run the tests (`pytest tests/`).
4. Submit a pull request (PR).

## 📜 **License**

AutoMLBench is released under the MIT License.


