
Evaluation and Benchmark Tool for Feature Selection

FSEval – Feature Selection Evaluation Suite

FSEval is a lightweight, modular Python library designed to benchmark feature selection and feature ranking methods across multiple datasets using both supervised and unsupervised downstream evaluation protocols.

It helps researchers and practitioners answer the question:

"Which feature selection method actually works best for my type of data and task?"

FSEval automates:

  • Repeated training & evaluation at different feature subset sizes
  • Stochastic method averaging
  • Result persistence & incremental updates
  • Support for both classification and clustering-based evaluation
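
To illustrate the idea behind "stochastic method averaging": a randomized ranker is re-run over several seeds and its per-feature scores are averaged. The sketch below assumes a hypothetical `score_fn(X, rng)` interface for illustration; it is not FSEval's actual implementation.

```python
import numpy as np

def average_stochastic_scores(score_fn, X, avg_steps=10):
    """Average per-feature importance scores of a randomized ranker.

    score_fn(X, rng) must return one score per feature; this interface is
    an assumption for illustration, not FSEval's internal API.
    """
    scores = np.zeros(X.shape[1])
    for seed in range(avg_steps):
        rng = np.random.default_rng(seed)
        scores += score_fn(X, rng)
    return scores / avg_steps
```

Averaging over seeds makes the reported ranking of a stochastic method reproducible and less dependent on any single random draw.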

📦 Dependencies and Requirements

FSEval requires:

  • python>=3.8
  • numpy
  • pandas
  • scikit-learn
  • scipy
  • clustpy (only needed for unsupervised_clustering_accuracy)
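
clustpy supplies `unsupervised_clustering_accuracy`. Conceptually, this metric is the accuracy under the best one-to-one matching between predicted cluster labels and true class labels (the Hungarian algorithm). A self-contained sketch using only scipy, as an illustration rather than clustpy's implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best label-permutation accuracy via Hungarian matching (illustrative)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels_t = np.unique(y_true)
    labels_p = np.unique(y_pred)
    # Contingency matrix: rows = predicted clusters, columns = true classes.
    w = np.zeros((labels_p.size, labels_t.size), dtype=int)
    for i, lp in enumerate(labels_p):
        for j, lt in enumerate(labels_t):
            w[i, j] = np.sum((y_pred == lp) & (y_true == lt))
    row, col = linear_sum_assignment(-w)  # negate to maximize matched counts
    return w[row, col].sum() / y_true.size
```

Because cluster label IDs are arbitrary, a clustering that is a pure relabeling of the ground truth still scores 1.0.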

💡 Installation

You can download the source code and import fseval directly, or install it from PyPI with pip:

pip install sdufseval

🚀 Quick Example

from sdufseval import FSEVAL
import numpy as np

if __name__ == "__main__":

    # The 23 real datasets
    DATASETS_TO_RUN = [
        'ALLAML', 'CLL_SUB_111', 'COIL20', 'Carcinom', 'GLIOMA', 'GLI_85', 
        'Isolet', 'ORL', 'Prostate_GE', 'SMK_CAN_187', 'TOX_171', 'Yale', 
        'arcene', 'colon', 'gisette', 'leukemia', 'lung', 'lung_discrete', 
        'madelon', 'orlraws10P', 'pixraw10P', 'warpAR10P', 'warpPIE10P'
    ]

    # Initialize FSEVAL
    evaluator = FSEVAL(output_dir="benchmark_results", avg_steps=10)

    # Configuration for methods using the class internal random_baseline
    methods_list = [
        {
            'name': 'Random', 
            'stochastic': True, 
            'func': evaluator.random_baseline
        },
        {
            'name': 'Variance_Baseline', 
            'stochastic': False, 
            'func': lambda X: np.var(X, axis=0)
        }
    ]
    
    # Run Benchmark (Defaults to RF)
    evaluator.run(DATASETS_TO_RUN, methods_list)

Data Loading

load_dataset(dataset_name, data_dir="datasets") supports:

  • Single .mat file with keys 'X' and 'Y'
  • Two CSV files: {name}_X.csv and {name}_y.csv
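
A minimal sketch of what this loader implies, as a hypothetical re-implementation mirroring only the documented behavior (the real `load_dataset()` may differ):

```python
import os
import numpy as np
import pandas as pd
from scipy.io import loadmat

def load_dataset_sketch(dataset_name, data_dir="datasets"):
    """Load X, y from either a .mat file or a pair of CSV files (illustrative)."""
    mat_path = os.path.join(data_dir, f"{dataset_name}.mat")
    if os.path.exists(mat_path):
        mat = loadmat(mat_path)  # expects keys 'X' and 'Y'
        return mat["X"], np.asarray(mat["Y"]).ravel()
    X = pd.read_csv(os.path.join(data_dir, f"{dataset_name}_X.csv")).to_numpy()
    y = pd.read_csv(os.path.join(data_dir, f"{dataset_name}_y.csv")).to_numpy().ravel()
    return X, y
```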

📚 API Reference

🛠️ FSEVAL(output_dir="results", cv=5, avg_steps=10, eval_type="both", metrics=None, experiments=None)

Initializes the evaluation and benchmark object.

Parameter    Default                           Description
output_dir   "results"                         Folder where CSV result files are saved.
cv           5                                 Cross-validation folds (supervised evaluation only).
avg_steps    10                                Number of random restarts / seeds to average stochastic methods over.
eval_type    "both"                            "supervised", "unsupervised", or "both".
metrics      ["CLSACC", "NMI", "ACC", "AUC"]   Metrics to compute and report.
experiments  ["10Percent", "100Percent"]       Which feature-ratio grids to evaluate.

⚙️ run(datasets, methods, classifier=None)

Runs the benchmark: evaluates every method on every dataset and saves the results to output_dir.

Argument     Type                 Description
datasets     List[str]            Dataset names loadable via load_dataset().
methods      List[dict]           [{"name": str, "func": callable, "stochastic": bool}, ...]
classifier   sklearn classifier   Classifier for supervised evaluation (default: RandomForestClassifier).
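
For example, any scikit-learn-style estimator with fit/predict can be substituted for the default RandomForestClassifier. The run() call below is commented out because it assumes the evaluator and datasets from the Quick Example are available:

```python
from sklearn.ensemble import GradientBoostingClassifier

# Any estimator implementing fit/predict should work as the classifier.
clf = GradientBoostingClassifier(n_estimators=200, random_state=0)

# evaluator.run(DATASETS_TO_RUN, methods_list, classifier=clf)
```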

Dashboard

A Feature Selection Evaluation Dashboard, built on the benchmarks produced by FSEVAL, is available at:

https://fseval.imada.sdu.dk/

The dashboard offers analytic tools for comprehensive, comparative insight into the performance of your feature selection method(s).

Citation

If you use FSEVAL in your research, please cite the original paper:

CITATION WILL BE PROVIDED UPON PUBLICATION.
