BenchNIRS
Benchmarking framework for machine learning with fNIRS
Quick links
→ Journal article
→ BenchNIRS repository
→ Install BenchNIRS
→ Documentation
→ Issue tracker
Features
- loading of open access datasets
- signal processing and feature extraction on fNIRS data
- training, optimisation and evaluation of machine learning models (including deep learning)
- production of training graphs, metrics and other useful figures for evaluation
- benchmarking and comparison of machine learning models
- supervised, self-supervised and transfer learning
- much more!
Documentation
The documentation of the framework with examples can be found here.
Recommendation checklist
A checklist of recommendations towards good practice for machine learning with fNIRS (for brain-computer interface applications) can be found here. We welcome contributions from the community to improve it; please see below for more information on how to contribute.
Minimum tested requirements
Python 3.8 with the following libraries:
- matplotlib 3.3
- mne 0.23
- nirsimple 0.1
- numpy 1.19
- pandas 1.0
- scikit-learn 0.24
- scipy 1.8
- seaborn 0.11
- statsmodels 0.12.2
- torch 1.5
Setting up BenchNIRS
- Download and install Python 3.8 or greater, for example with Miniconda.
- To install the package with pip (cf. PyPI), open a terminal (e.g. Anaconda Prompt) and type:
pip install benchnirs
Alternatively, to install from source, download and unzip the repository. Then, in a terminal or command prompt (e.g. Anaconda Prompt), navigate to the directory containing the requirements.txt file and run:
python -m pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
(A quick import check to verify the installation is shown after this list.)
- Download the datasets (see below).
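Once installed, you can check that the package imports correctly. This is just a sanity check, not part of the official instructions; the second line simply prints where the package was installed:
import benchnirs
print(benchnirs.__file__)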
Downloading the datasets
- Herff et al. 2014 (n-back task): you can download the dataset by making a request here. In the examples, the unzipped folder has been renamed to dataset_herff_2014 for convenience.
- Shin et al. 2018 (n-back and word generation tasks): you can download the dataset here. In the examples, the unzipped folder has been renamed to dataset_shin_2018 for convenience.
- Shin et al. 2016 (mental arithmetic task): you can download the dataset by filling in the form here. Then click on NIRS_01-29 to download the fNIRS data. In the examples, the unzipped folder has been renamed to dataset_shin_2016 for convenience.
- Bak et al. 2019 (motor execution task): you can download the dataset here. In the examples, the unzipped folder has been renamed to dataset_bak_2019 for convenience.
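If you follow the renaming conventions above and keep the datasets alongside the scripts, the resulting layout could look like this (purely illustrative; check the documentation for the exact location the scripts expect):
dataset_herff_2014/
dataset_shin_2018/
dataset_shin_2016/
dataset_bak_2019/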
Keeping BenchNIRS up to date
To update BenchNIRS to the latest version with pip, open a terminal (e.g. Anaconda Prompt) and type:
pip install --upgrade benchnirs
Example
A full example script showing how to use the framework with a custom deep learning model can be found here.
Simple use case
BenchNIRS enables you to evaluate your model in Python with simplicity on a supported open access dataset:
import benchnirs as bn
epochs = bn.load_dataset('shin_2018_nb')
data = bn.process_epochs(epochs['0-back', '2-back', '3-back'])
results = bn.deep_learn(*data, my_model)
print(results)
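Here, my_model stands for your own model. As an illustration only, a minimal sketch of what such a custom PyTorch model could look like is given below; the class name, the number of input channels and classes, the assumed input shape, and whether bn.deep_learn expects a class or an instance are all assumptions to be checked against the documentation and the full example script:
import torch
import torch.nn as nn

class MyModel(nn.Module):
    # Hypothetical architecture: one temporal convolution over the fNIRS
    # channels, average pooling over time, and a linear classifier.
    def __init__(self, n_channels=4, n_classes=3):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 8, kernel_size=5)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(8, n_classes)

    def forward(self, x):
        # x is assumed to be of shape (batch, channels, time)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)
        return self.fc(x)
Such a model could then be passed in place of my_model in the snippet above.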
Running main scripts
- generalised.py compares the 6 models (LDA, SVC, kNN, ANN, CNN and LSTM) on the 5 datasets with a generalised approach (testing with unseen subjects)
- dataset_size.py reproduces generalised.py but with a range of different dataset sizes (50% to 100% of the dataset) to study the influence of this parameter on the classification accuracy
- window_size.py reproduces generalised.py but with only the 4 models using feature extraction (LDA, SVC, kNN and ANN) and with a range of different window sizes (2 to 10 seconds) to study the influence of this parameter on the classification accuracy
- sliding_window.py reproduces generalised.py but with only the 4 models using feature extraction (LDA, SVC, kNN and ANN) and with a 2-second sliding window on the 10-second epochs
- personalised.py compares the 6 models (LDA, SVC, kNN, ANN, CNN and LSTM) on the 5 datasets with a personalised approach (training and testing with each subject individually)
- visualisation.py enables visualisation of the data from the datasets with various signal processing applied
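These scripts are found in the repository and can be run with Python from a terminal (e.g. Anaconda Prompt), for example:
python generalised.py
This assumes the datasets have been downloaded and placed where the scripts expect them; check the repository and documentation for the exact paths.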
Extra scripts: n-back tailored
- tailored_generalised.py compares the 6 models (LDA, SVC, kNN, ANN, CNN and LSTM) on the 2 n-back datasets with a generalised approach (testing with unseen subjects)
- tailored_window_size.py reproduces tailored_generalised.py but with only 5 models (LDA, SVC, kNN, ANN and LSTM) and with a range of different window sizes (5 to 40 seconds) to study the influence of this parameter on the classification accuracy
- tailored_shin_nb.py optimises and evaluates a tailored CNN on the Shin et al. 2018 n-back dataset with a generalised approach (testing with unseen subjects)
Extra scripts: transfer learning
- transfer.py optimises and evaluates a transfer learning model (pretext self-supervised representation learning task with unlabelled and labelled data using a CED, downstream supervised n-back classification task with labelled data) on the Shin et al. 2018 n-back dataset with a generalised approach (testing with unseen subjects)
- transfer_no_unlab.py reproduces transfer.py but with only labelled data for the pretext task
Contributing to the repository
Contributions from the community to this repository are highly appreciated. We are mainly interested in contributions to:
- improving the recommendation checklist
- adding support for new open access datasets
- adding support for new machine learning models
- adding more fNIRS signal processing techniques
- improving the documentation
- tracking bugs
Contributions are encouraged in the form of issues (for reporting bugs or requesting new features) and merge requests (for fixing bugs and implementing new features). Please refer to this tutorial for creating merge requests from a fork of the repository.
Acknowledgements
If you are using BenchNIRS, please cite this article:
@article{benerradi2023benchmarking,
title={Benchmarking framework for machine learning classification from fNIRS data},
author={Benerradi, Johann and Clos, Jeremie and Landowska, Aleksandra and Valstar, Michel F and Wilson, Max L},
journal={Frontiers in Neuroergonomics},
volume={4},
year={2023},
publisher={Frontiers Media},
url={https://www.frontiersin.org/articles/10.3389/fnrgo.2023.994969},
doi={10.3389/fnrgo.2023.994969},
issn={2673-6195}
}
If you are using the datasets supported by the framework, please also cite the related works below.
Herff et al. 2014:
@article{herff2014mental,
title={Mental workload during n-back task—quantified in the prefrontal cortex using fNIRS},
author={Herff, Christian and Heger, Dominic and Fortmann, Ole and Hennrich, Johannes and Putze, Felix and Schultz, Tanja},
journal={Frontiers in human neuroscience},
volume={7},
pages={935},
year={2014},
publisher={Frontiers}
}
Shin et al. 2018:
@article{shin2018simultaneous,
title={Simultaneous acquisition of EEG and NIRS during cognitive tasks for an open access dataset},
author={Shin, Jaeyoung and Von L{\"u}hmann, Alexander and Kim, Do-Won and Mehnert, Jan and Hwang, Han-Jeong and M{\"u}ller, Klaus-Robert},
journal={Scientific data},
volume={5},
pages={180003},
year={2018},
publisher={Nature Publishing Group}
}
Shin et al. 2016:
@article{shin2016open,
title={Open access dataset for EEG+NIRS single-trial classification},
author={Shin, Jaeyoung and von L{\"u}hmann, Alexander and Blankertz, Benjamin and Kim, Do-Won and Jeong, Jichai and Hwang, Han-Jeong and M{\"u}ller, Klaus-Robert},
journal={IEEE Transactions on Neural Systems and Rehabilitation Engineering},
volume={25},
number={10},
pages={1735--1745},
year={2016},
publisher={IEEE}
}
Bak et al. 2019:
@article{bak2019open,
title={Open-Access fNIRS Dataset for Classification of Unilateral Finger-and Foot-Tapping},
author={Bak, SuJin and Park, Jinwoo and Shin, Jaeyoung and Jeong, Jichai},
journal={Electronics},
volume={8},
number={12},
pages={1486},
year={2019},
publisher={Multidisciplinary Digital Publishing Institute}
}