
A Python package for automated ML model benchmarking and comparison


Automated Machine Learning Benchmarking Library

🚀 AutoMLBench provides a seamless way to compare machine learning models, preprocess data, evaluate performance, and optimize models with hyperparameter tuning.


📌 Installation

First, install the required dependencies:

pip install pandas scikit-learn numpy matplotlib xgboost lightgbm catboost imbalanced-learn

Then install AutoMLBench from PyPI:

pip install automlbench

For local development:

git clone https://github.com/AnnNaserNabil/automlbench.git
cd automlbench
pip install -e .
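
A quick import check (a hypothetical smoke test, not part of the package) confirms the editable install:

# Sanity check: the package should be importable from your local checkout
import automlbench
print(automlbench.__file__)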

🔹 Model Comparison Without Hyperparameter Tuning

The simplest way to compare models is to run AutoMLBench's defaults end to end.

1️⃣ Load Dataset & Preprocess

import pandas as pd
from automlbench import preprocess_data, get_models, train_models, evaluate_model, plot_performance

# Load dataset (replace DATAPATH with the path or URL of your CSV file)
url = "DATAPATH"
df = pd.read_csv(url)

# Define the target column (replace with your dataset's target column name)
target_column = "name_of_target_column"

# Preprocess data
X_train, X_test, y_train, y_test = preprocess_data(df, target_column)
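
The snippet above uses placeholders. For a version that runs as-is, substitute any public dataset; this sketch uses scikit-learn's built-in breast cancer data and assumes only that preprocess_data accepts a DataFrame plus the target column name, exactly as shown above:

from sklearn.datasets import load_breast_cancer

# Public dataset whose target ships as a "target" column in the frame
df = load_breast_cancer(as_frame=True).frame

X_train, X_test, y_train, y_test = preprocess_data(df, "target")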

2️⃣ Train All Default Models

# Get predefined models
models = get_models()

# Train models without tuning
results = train_models(X_train, X_test, y_train, y_test)

# Print model performance results
print(results)
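
If results maps model names to metric dictionaries (an assumption here; inspect the returned object to confirm its shape), pandas makes it easy to rank the models:

# Assumes results looks like {model_name: {metric_name: score}}
results_df = pd.DataFrame(results).T
print(results_df.sort_values("Accuracy", ascending=False))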

3️⃣ Evaluate & Compare Model Performance

# Evaluate all models
for model_name, model in models.items():
    print(f"Evaluating {model_name}...")
    model.fit(X_train, y_train)  # fit on the training split first
    metrics = evaluate_model(model, X_test, y_test)
    print(metrics)

# Plot performance comparison
plot_performance(results, metrics=["Accuracy", "Precision", "Recall", "F1-Score", "RMSE"])

🔹 Model Comparison With Hyperparameter Tuning

For better performance, use hyperparameter tuning.

1️⃣ Get Hyperparameter Grids

from automlbench import get_hyperparameter_grids, tune_hyperparameters

# Retrieve hyperparameter grids
hyperparameter_grids = get_hyperparameter_grids()
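
Each grid is simply a mapping from estimator parameter names to candidate values. The grids AutoMLBench ships may differ; this is an illustrative shape only:

# Illustrative example of a grid's structure, not automlbench's actual grid
example_rf_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10, 30],
    "min_samples_split": [2, 5],
}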

2️⃣ Tune Models

best_models = {}

# Tune each model if it has a predefined hyperparameter grid
for model_name, model in models.items():
    if model_name in hyperparameter_grids:
        print(f"Tuning {model_name}...")
        best_model, best_params = tune_hyperparameters(model, hyperparameter_grids[model_name], X_train, y_train)
        best_models[model_name] = best_model
        print(f"Best params for {model_name}: {best_params}")
    else:
        best_models[model_name] = model  # Use default if no tuning grid
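
The tuner's internals aren't shown here; a call with this signature typically behaves like an exhaustive cross-validated search. A minimal sketch of that idea, as a hypothetical stand-in built on scikit-learn's GridSearchCV rather than automlbench's actual implementation:

from sklearn.model_selection import GridSearchCV

def grid_search_tune(model, param_grid, X, y, cv=5):
    # Hypothetical stand-in for tune_hyperparameters: exhaustive CV search
    search = GridSearchCV(model, param_grid, cv=cv, n_jobs=-1)
    search.fit(X, y)
    return search.best_estimator_, search.best_params_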

3️⃣ Train Tuned Models

# Train models using the best hyperparameters found
tuned_results = train_models(
    X_train, X_test, y_train, y_test, 
    selected_models=list(best_models.keys()), 
    hyperparams={name: model.get_params() for name, model in best_models.items()}
)

# Display tuned model results
print(tuned_results)

4️⃣ Evaluate & Compare Tuned Models

# Evaluate all tuned models
for model_name, model in best_models.items():
    print(f"Evaluating {model_name}...")
    model.fit(X_train, y_train)  # refit the tuned estimator on the training split
    metrics = evaluate_model(model, X_test, y_test)
    print(metrics)

# Plot comparison of tuned models
plot_performance(tuned_results, metrics=["Accuracy", "Precision", "Recall", "F1-Score", "RMSE"])

⚡ Quick Summary

Basic Comparison – Train models with default settings.
Hyperparameter Tuning – Optimize models for better performance.
Evaluation & Visualization – Compare accuracy, precision, recall, F1-score, and RMSE.
Automated ML Benchmarking – Quickly assess multiple models with minimal code (see the sketch below).
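
Putting the basic workflow together, a minimal end-to-end run looks like this (it uses a public dataset so the snippet executes without a local CSV; the automlbench calls are exactly those documented above):

from sklearn.datasets import load_breast_cancer
from automlbench import preprocess_data, train_models, plot_performance

# Public dataset so the example runs without a local CSV
df = load_breast_cancer(as_frame=True).frame

# Preprocess, train every default model, and plot the comparison
X_train, X_test, y_train, y_test = preprocess_data(df, "target")
results = train_models(X_train, X_test, y_train, y_test)
plot_performance(results, metrics=["Accuracy", "F1-Score"])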


📌 Contributing

Contributions are welcome! To contribute:

  1. Fork the repository
  2. Create a new branch (feature-branch)
  3. Make your changes and run the tests (pytest tests/)
  4. Submit a pull request (PR)

📜 License

AutoMLBench is released under the MIT License.


This workflow makes it easy to compare models before and after tuning with AutoMLBench. 🚀
