Fast and Lightweight Assessment of Variables for Optimally-Reduced Subsets towards Feature Learning Automation with Variable-Objective, Resource Scheduling

Project description


FLAVORS2 (Fast and Lightweight Assessment of Variables for Optimally-Reduced Subsets towards Feature Learning Automation with Variable-Objective, Resource Scheduling) is an efficient feature selection library for machine learning. It automates the search for optimal feature subsets under time budgets, supporting custom metrics, multi-objective Pareto optimization, feature priors, and scikit-learn integration.

Install

FLAVORS2 can be installed from PyPI:

pip install flavors-squared

Feature Selection Example (scikit-learn compatible)

FLAVORS2 provides a scikit-learn compatible FLAVORS2FeatureSelector for seamless integration into ML pipelines. Here's an example using the Iris dataset:

from sklearn.datasets import load_iris
from flavors2 import FLAVORS2FeatureSelector

# Load data
data = load_iris()
X, y = data.data, data.target

# Initialize and fit the selector with a 30-second budget
selector = FLAVORS2FeatureSelector(budget=30, random_state=42)
selector.fit(X, y)

# Transform the data to selected features
X_selected = selector.transform(X)

print(f"Selected feature indices: {selector.selected_indices_}")

This performs a budgeted search for the optimal subset using default metrics (AUC for classification or R² for regression).
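Since the selector follows the scikit-learn transformer convention, transform amounts to column indexing on the chosen features. The equivalence can be sketched with plain NumPy (the indices here are made up for illustration, not the output of an actual fit):

```python
import numpy as np

# Toy data: 3 samples, 4 features
X = np.arange(12.0).reshape(3, 4)

# Suppose the selector chose features 0 and 2 (hypothetical indices)
selected_indices = [0, 2]

# transform(X) keeps only the selected columns
X_selected = X[:, selected_indices]
print(X_selected.shape)  # (3, 2)
```

This is why the transformed array can be passed directly to any downstream scikit-learn estimator.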

Advanced Usage

Custom Metrics

Custom metrics are plain callables taking (X, y, sample_weight=None) and returning a dict with a 'score' key:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from flavors2 import FLAVORS2FeatureSelector
import numpy as np

def custom_accuracy(X, y, sample_weight=None):
    model = LogisticRegression(max_iter=200)
    fit_params = {'sample_weight': sample_weight} if sample_weight is not None else None
    scores = cross_val_score(model, X, y, cv=5, scoring='accuracy', fit_params=fit_params)
    return {'score': float(np.mean(scores))}

data = load_iris()
X, y = data.data, data.target

selector = FLAVORS2FeatureSelector(budget=20, metrics=[custom_accuracy], random_state=42)
selector.fit(X, y)
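Because a metric is just a callable returning a dict with a 'score' key, it can be sanity-checked on its own before handing it to the selector. A standalone sketch using only scikit-learn (the metric is repeated here so the snippet runs by itself, and the sample_weight plumbing is omitted for brevity):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def custom_accuracy(X, y, sample_weight=None):
    # 5-fold CV accuracy of a simple logistic regression on the given features
    model = LogisticRegression(max_iter=200)
    scores = cross_val_score(model, X, y, cv=5, scoring='accuracy')
    return {'score': float(np.mean(scores))}

X, y = load_iris(return_X_y=True)
result = custom_accuracy(X, y)
print(result)  # e.g. {'score': 0.97...} on the full feature set
```

A quick check like this catches shape or scoring bugs before they surface inside a budgeted search.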

Multiple Metrics with Pareto Optimization

Balance multiple objectives:

from sklearn.datasets import load_iris
from sklearn.metrics import make_scorer, f1_score
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from flavors2 import FLAVORS2FeatureSelector
import numpy as np

def accuracy_metric(X, y, sample_weight=None):
    model = LogisticRegression(max_iter=200)
    fit_params = {'sample_weight': sample_weight} if sample_weight is not None else None
    scores = cross_val_score(model, X, y, cv=5, scoring='accuracy', fit_params=fit_params)
    return {'score': float(np.mean(scores))}

def f1_metric(X, y, sample_weight=None):
    model = LogisticRegression(max_iter=200)
    scorer = make_scorer(f1_score, average='macro')
    fit_params = {'sample_weight': sample_weight} if sample_weight is not None else None
    scores = cross_val_score(model, X, y, cv=5, scoring=scorer, fit_params=fit_params)
    return {'score': float(np.mean(scores))}

data = load_iris()
X, y = data.data, data.target

selector = FLAVORS2FeatureSelector(budget=30, metrics=[accuracy_metric, f1_metric], random_state=42)
selector.fit(X, y)

print("Pareto history:", selector.selector.pareto_history)
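The Pareto idea can be illustrated independently of the library: a candidate subset survives only if no other candidate is at least as good on every metric and strictly better on at least one. A minimal sketch of that dominance check (illustration only; FLAVORS2's internal bookkeeping may differ):

```python
def dominates(a, b):
    """True if score vector a dominates b: all >=, at least one strictly >."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (accuracy, macro-F1) pairs for hypothetical feature subsets
scores = [(0.95, 0.94), (0.96, 0.90), (0.93, 0.95), (0.94, 0.89)]
print(pareto_front(scores))  # the last pair is dominated by the first
```

With multiple metrics there is generally no single best subset, only this frontier of non-dominated trade-offs, which is what pareto_history records over the course of the search.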

Time Budgeting

Control search duration:

from sklearn.datasets import load_breast_cancer
from flavors2 import FLAVORS2FeatureSelector

data = load_breast_cancer()
X, y = data.data, data.target

selector_short = FLAVORS2FeatureSelector(budget=10, random_state=42)
selector_short.fit(X, y)
print(f"Features selected in 10s: {len(selector_short.selected_indices_)}")
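The budget semantics can be sketched generically: keep the best candidate found so far and stop once the wall-clock budget is exhausted. The loop below is an illustration of that pattern with a random-subset search and a toy objective, not FLAVORS2's actual search strategy:

```python
import random
import time

def budgeted_search(evaluate, n_features, budget_s, seed=42):
    """Random-subset search under a wall-clock budget (illustrative only)."""
    rng = random.Random(seed)
    deadline = time.monotonic() + budget_s
    best_score, best_subset = float('-inf'), None
    while time.monotonic() < deadline:
        # Each feature is included with probability 0.5
        subset = [i for i in range(n_features) if rng.random() < 0.5]
        if not subset:
            continue  # skip empty subsets
        score = evaluate(subset)
        if score > best_score:
            best_score, best_subset = score, subset
    return best_subset, best_score

# Toy objective: reward subsets whose size is close to 5
best, score = budgeted_search(lambda s: -abs(len(s) - 5),
                              n_features=10, budget_s=0.1)
print(len(best), score)
```

The practical upshot is the same as in the real selector: a larger budget buys more candidate evaluations, so the result is anytime-improving rather than fixed-cost.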

Feature Priors

Guide selection with priors:

from sklearn.datasets import load_iris
import numpy as np
from flavors2 import FLAVORS2FeatureSelector

data = load_iris()
X, y = data.data, data.target
priors = np.array([0.1, 0.1, 0.9, 0.9])  # favor the petal features (indices 2 and 3)

selector = FLAVORS2FeatureSelector(budget=20, feature_priors=priors, random_state=42)
selector.fit(X, y)
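One way priors can steer a search is by acting as per-feature inclusion probabilities when candidate subsets are sampled. A sketch of that idea in plain NumPy (an assumption for illustration; FLAVORS2's internal use of feature_priors may differ):

```python
import numpy as np

priors = np.array([0.1, 0.1, 0.9, 0.9])

# Normalized view: relative preference mass per feature
probs = priors / priors.sum()
print(probs)  # [0.05 0.05 0.45 0.45]

# Sample one candidate subset, including feature i with probability priors[i]
rng = np.random.default_rng(42)
subset = np.flatnonzero(rng.random(priors.size) < priors)
print(subset)  # high-prior features dominate; with this seed: [2 3]
```

Low-prior features are still sampled occasionally, so the search can override a wrong prior given enough budget.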

Feature Importance Weighting with a Model

Incorporate model importances:

# ✅ Correct way: return a fitted model so FLAVORS2 can read its importances.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from flavors2 import FLAVORS2FeatureSelector
import numpy as np

def rf_metric_with_model(X, y, sample_weight=None):
    # 1) Fit once on the current subset to expose feature_importances_
    est = RandomForestClassifier(n_estimators=200, random_state=42)
    fit_kwargs = {}
    if sample_weight is not None:
        fit_kwargs["sample_weight"] = sample_weight
    est.fit(X, y, **fit_kwargs)

    # 2) Score via CV (fresh clones); keep score pure (no “importance bonus”)
    cv_est = RandomForestClassifier(n_estimators=200, random_state=42)
    scores = cross_val_score(cv_est, X, y, cv=5, scoring="accuracy")
    return {"score": float(np.mean(scores)), "model": est}

data = load_iris()
X, y = data.data, data.target

selector = FLAVORS2FeatureSelector(
    budget=30,
    metrics=[rf_metric_with_model],
    boruta=True,            # optional: adds Boruta-style shadow checks
    random_state=42,
)
selector.fit(X, y)
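A metric that returns a model can be exercised on its own to confirm it yields both a valid score and a fitted estimator exposing feature_importances_. A standalone check using only scikit-learn (the function is repeated, with a smaller forest, so the snippet runs by itself):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def rf_metric_with_model(X, y, sample_weight=None):
    # Fit once on the current subset to expose feature_importances_
    est = RandomForestClassifier(n_estimators=50, random_state=42)
    est.fit(X, y, sample_weight=sample_weight)
    # Score via CV on fresh clones; the score stays pure
    cv_est = RandomForestClassifier(n_estimators=50, random_state=42)
    scores = cross_val_score(cv_est, X, y, cv=5, scoring="accuracy")
    return {"score": float(np.mean(scores)), "model": est}

X, y = load_iris(return_X_y=True)
out = rf_metric_with_model(X, y)
print(round(out["score"], 3))
print(out["model"].feature_importances_)  # one weight per feature, summing to 1
```

The returned model's importances give the selector a per-feature signal beyond the scalar score, while the CV score itself remains independent of the importance readout.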

Benchmarks

FLAVORS2 outperforms baseline selectors in head-to-head rankings and win rates:

Figure 1. Distribution of FLAVORS² score advantage over baselines such as GJO, RFE, and permutation importance.

To visualize the trade-off between performance and runtime across datasets:

Figure 2. Pareto frontier of performance versus runtime across datasets.

See assets/h2h_benchmark_summary.csv for details.

Citations

If you use FLAVORS2 in your work, please cite the repository.

License

MIT License

Copyright (c) 2025 Michael Mech

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

