
Model Selection Tool

Project description

🧠 Universal ML Model Explorer Pro


One-line ML pipeline that automatically preprocesses your data, then trains, evaluates, compares, and visualizes multiple machine learning models, selecting the best one with a single command.

🚀 Features

  • Auto task detection: classification or regression
  • Auto preprocessing: scaling, encoding, imputation, PCA
  • Parallel model training across all CPU cores
  • SHAP interpretability plots
  • Visual reports (confusion matrix, ROC curve, residual plots, etc.)
  • Works from both the CLI and notebooks

📦 Installation

pip install -r requirements.txt

🧪 CLI Usage

python main.py path/to/dataset.csv target_column_name

Optional flags:

  • --output_dir: Folder to save results (default: results)
  • --pca_components: Number of principal components to keep for numeric features
  • --no_shap: Skip SHAP plots (faster)

🧬 Python Usage

from lazybrains import run_pipeline_in_notebook

run_pipeline_in_notebook(
    dataset_path="data.csv",
    target_column="target",
    pca_components=5,
    no_shap=False
)

📂 Output

  • best_model.pkl: Trained model
  • Plots: Confusion Matrix, ROC, Residuals, SHAP
  • model_report.txt: Full model comparison

🛠️ Supported Models

  • Linear, tree-based, ensemble (Random Forest, Gradient Boosting, AdaBoost, XGBoost), KNN, SVM, stacking
  • Automatic selection of the best model based on accuracy (classification) or R² (regression)
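The "pick the best by accuracy / R²" step can be sketched as follows. Note that `select_best_model` is a hypothetical helper, not the library's actual API, and it assumes each model has already been scored on a held-out set:

```python
def select_best_model(scores):
    """Return the (name, score) pair with the highest comparison metric.

    `scores` maps model names to one scalar metric per model:
    accuracy for classification tasks, R² for regression tasks.
    """
    if not scores:
        raise ValueError("no models to compare")
    return max(scores.items(), key=lambda kv: kv[1])

# Hypothetical accuracy scores collected after parallel training
scores = {"RandomForest": 0.91, "XGBoost": 0.89, "KNN": 0.85}
best_name, best_score = select_best_model(scores)  # ("RandomForest", 0.91)
```

Because both accuracy and R² are "higher is better", a single `max` over the metric works for either task type.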

Run this in your terminal to install all dependencies

pip install pandas numpy matplotlib seaborn scikit-learn xgboost shap joblib rich

🔍 AutoFeatSelect

A Lightweight Python Library for Automatic Feature Selection
Smart. Fast. Interpretable.


🚀 What is AutoFeatSelect?

AutoFeatSelect is a fully automated feature selection tool that cleans your dataset by removing irrelevant, redundant, or low-value features, all with just one line of code. Whether you're building a classification or a regression model, this tool will help you improve model performance and training speed without the hassle of manual preprocessing.


✨ Why AutoFeatSelect is Cool

  • ✅ Zero manual inspection: it decides what to drop based on solid math.
  • 🔄 Handles both numeric & categorical features
  • 📉 Drops features using:
    • Missing value ratio
    • Low variance
    • Correlation (pairwise & clustered)
    • VIF (multicollinearity)
    • Mutual information
    • Tree-based feature importance
  • 📄 Detailed drop report (feature + reason)
  • 🪶 Lightweight: only uses pandas, numpy, scikit-learn, statsmodels, scipy
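To make the first two drop rules concrete, here is a minimal pure-Python sketch of the missing-ratio and low-variance checks. `drop_low_value_columns` is a hypothetical helper for illustration, not the library's internals:

```python
import statistics

def drop_low_value_columns(columns, missing_threshold=0.5, variance_threshold=1e-8):
    """Sketch of two drop rules: missing-value ratio and (near-)zero variance.

    `columns` maps column names to lists of values (None = missing).
    Returns (kept, report), where `report` maps each dropped name to a reason.
    """
    kept, report = {}, {}
    for name, values in columns.items():
        # Rule 1: too many missing values
        missing_ratio = sum(v is None for v in values) / len(values)
        if missing_ratio > missing_threshold:
            report[name] = f"missing ratio {missing_ratio:.2f} > {missing_threshold}"
            continue
        # Rule 2: constant or near-constant column
        present = [v for v in values if v is not None]
        if len(set(present)) <= 1 or statistics.pvariance(present) <= variance_threshold:
            report[name] = "constant or near-constant"
            continue
        kept[name] = values
    return kept, report
```

The real library applies several more criteria (correlation, VIF, mutual information, tree importance), but each follows the same pattern: compute a statistic per feature, compare it against a threshold, and record the reason for the drop report.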

📦 Installation

pip install -U pandas numpy scikit-learn statsmodels scipy

Clone this repo or copy AutoFeatSelect into your project.


๐Ÿ› ๏ธ How to Use

from lazybrains import AutoFeatSelect

selector = AutoFeatSelect(
    target_col='target',     # optional: enables supervised selection steps
    verbose=True             # optional: print progress logs
)

# Fit + transform in one line
df_cleaned = selector.fit_transform(df, drop=True)

# Or separately
selector.fit(df)
df_cleaned = selector.transform(df)

# See what got dropped and why
report = selector.get_report()
print(report)

🧠 When to Use

  • Before training ML models, especially with many features
  • When the data may contain noise, ID-like columns, or redundant features
  • To reduce overfitting and improve model interpretability
  • During automated pipelines or pre-model sanity checks

๐Ÿ“ Example Output

[AutoFeatSelect] Running: Drop high missing values...
[AutoFeatSelect]   Dropped: ['unimportant_column']
[AutoFeatSelect] Running: Drop single value columns...
[AutoFeatSelect]   Dropped: ['constant_feature']
...
[AutoFeatSelect] Finished selection. Kept 22 out of 48 features.

📊 Feature Drop Criteria

Technique and purpose:

  • Missing ratio: drops features that are mostly null
  • Unique ratio (ID-like): removes fake IDs or row-wise-unique columns
  • Variance threshold: removes constant or near-constant columns
  • Pearson correlation: drops highly correlated pairs
  • Hierarchical clustering: smarter group-wise redundancy pruning
  • VIF (variance inflation): drops multicollinear features
  • Mutual information: measures each feature's information contribution to the target
  • Tree importance: uses ExtraTrees to measure signal strength

🤝 Author

Built by Gemini. Version: 1.0.0


โค๏ธ Contribute / Fork

Feel free to fork and extend this library: make it smarter, add plotting, or wrap it into a full AutoML pipeline!


🔓 License

MIT. Use freely, just don't claim it's yours 😄



๐Ÿ” AutoEDAPro

AutoEDAPro is a powerful, plug-and-play Python library for automated Exploratory Data Analysis (EDA).
It takes a pandas DataFrame and gives you a full, beautiful report, with stats, visuals, and deep insights, either inline (Jupyter) or as an HTML file.


🚀 Features

  • 📦 One-line EDA: pass a DataFrame, get a full analysis
  • 🔍 Detection of missing values, constant features, and outliers
  • 📊 Univariate & bivariate visualizations (histograms, boxplots, KDE, correlation heatmaps)
  • 🎯 Optional target-column analysis for classification & regression
  • 📝 HTML report export with optional logging
  • ✅ Jupyter inline display or standalone HTML output
  • ✨ Built using pandas, seaborn, matplotlib, plotly, numpy

📦 Installation

First, make sure you have Python 3.7+

Install required dependencies:

pip install pandas numpy matplotlib seaborn plotly scikit-learn




🧪 Example Usage

from autoeda import AutoEDA
import seaborn as sns

# Load sample dataset
df = sns.load_dataset('titanic')

# Run EDA inline (Jupyter)
eda = AutoEDA(target_col='survived')
eda.run(df)

# Run EDA and save report as HTML with logging
eda_html = AutoEDA(target_col='survived', save_report=True, enable_logging=True)
eda_html.run(df)

You can also test the library via CLI by running the script directly:

python autoeda.py

It will:

  • Try to load Titanic dataset via seaborn
  • Fall back to a dummy dataset if that fails
  • Run both inline and saved HTML reports

🧠 Parameters

  • target_col (str, default None): target column for supervised EDA
  • save_report (bool, default False): if True, saves the output as an HTML report
  • output_filename (str, default None): custom filename for the saved HTML
  • enable_logging (bool, default False): if True, creates a log of the EDA steps

๐Ÿ“ Output

  • Inline Display: Shows report directly in Jupyter notebooks
  • HTML Report: If save_report=True, saves full interactive report with visualizations

🛠 Structure

Main file: autoeda.py. Main class: AutoEDA.

Each report contains:

  1. 📄 DataFrame shape, column types
  2. ❓ Missing values overview
  3. 🔍 Duplicate/constant columns
  4. 📊 Univariate plots for all features
  5. ⚠️ Outlier detection using IQR
  6. 🔗 Bivariate correlation heatmap + pairplots
  7. 🎯 Feature vs. target analysis

โš ๏ธ Notes

  • When run as a script (outside Jupyter), the report is saved as HTML for full display.
  • Uses the Plotly CDN, so make sure you're online for full interactivity.
  • Logging is optional but useful for debugging long runs.

📬 License

Free to use and modify. Credits appreciated!


💡 Ideas for Future

  • Auto feature selection preview
  • Optional modeling report (LazyPredict-style)
  • Model explainability (SHAP, LIME)
  • CLI and web interface

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

lazybrains-3.0.0.tar.gz (22.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

lazybrains-3.0.0-py3-none-any.whl (19.1 kB)

Uploaded Python 3

File details

Details for the file lazybrains-3.0.0.tar.gz.

File metadata

  • Download URL: lazybrains-3.0.0.tar.gz
  • Upload date:
  • Size: 22.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.11

File hashes

Hashes for lazybrains-3.0.0.tar.gz
Algorithm Hash digest
SHA256 199cbd8cbc8e3f1b77f82406bdad680e5da368fd81fd98406cca5b0688f4cffe
MD5 f7e0a134116ebb7b048ca1442360790f
BLAKE2b-256 569a1d00f792d415b28f21f40570bdd982ffbd35ec0ca348fc82a43ceea6a6ce

See more details on using hashes here.

File details

Details for the file lazybrains-3.0.0-py3-none-any.whl.

File metadata

  • Download URL: lazybrains-3.0.0-py3-none-any.whl
  • Upload date:
  • Size: 19.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.11

File hashes

Hashes for lazybrains-3.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 9c8435b895657a69c73d64b9c11f59e30926ff585ccf5f76e73f6066f1996297
MD5 6aa42abef3e03665e8e58d23f908e468
BLAKE2b-256 7d5de04f5337385e72f7b50ce70ed8d59a5dba84d718d6ad021ff76764cd8221

See more details on using hashes here.
