Benchmarking framework for machine learning with fNIRS
BenchNIRS
Journal article: https://www.frontiersin.org/articles/10.3389/fnrgo.2023.994969
Code: https://gitlab.com/HanBnrd/benchnirs
[Figure: example of a figure produced with BenchNIRS]
Recommendation checklist
A checklist of recommendations towards good practice for machine learning with fNIRS (for brain-computer interface applications) can be found here. We welcome contributions from the community to improve it; see below for more information on how to contribute.
Documentation
The documentation of the framework can be found here: https://hanbnrd.gitlab.io/benchnirs.
Minimum tested requirements
Python 3.8 with the following libraries:
- matplotlib 3.3
- mne 0.23
- nirsimple 0.1
- numpy 1.19
- pandas 1.0
- scikit-learn 0.24
- scipy 1.8
- seaborn 0.11
- statsmodels 0.12.2
- torch 1.5
Installing the requirements
Download and install Python 3.8 or greater: https://www.python.org/downloads/. During the installation process, add Python to the PATH for convenience.
In a terminal or command prompt, navigate to the directory containing the requirements.txt file and run:

```
python -m pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
```
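To check that the installed packages meet the minimum tested versions listed above, a quick sanity check can be run (this is an illustrative snippet, not part of BenchNIRS):

```python
from importlib import metadata

# Print the installed version of each core requirement so it can be
# compared against the minimum tested versions listed above.
for pkg in ["matplotlib", "mne", "numpy", "pandas", "scikit-learn",
            "scipy", "seaborn", "statsmodels", "torch"]:
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```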
Downloading the datasets
Herff et al. 2014
You can download the dataset by making a request here. In the examples, the unzipped folder has been renamed to dataset_herff_2014 for clarity.
Shin et al. 2018
You can download the dataset here. In the examples, the unzipped folder has been renamed to dataset_shin_2018 for clarity.
Shin et al. 2016
You can download the dataset by filling in the form here. Then click on NIRS_01-29 to download the fNIRS data. In the examples, the unzipped folder has been renamed to dataset_shin_2016 for clarity.
Bak et al. 2019
You can download the dataset here. In the examples, the unzipped folder has been renamed to dataset_bak_2019 for clarity.
Running main scripts
generalised.py
compares the 6 models (LDA, SVC, kNN, ANN, CNN and LSTM) on the 5 datasets with a generalised approach (testing on unseen subjects)

training_size.py
reproduces generalised.py but with a range of different training set sizes (50% to 100% of the training data) to study the influence of this parameter on the classification accuracy

window_size.py
reproduces generalised.py but with only the 4 models using feature extraction (LDA, SVC, kNN and ANN) and with a range of different window sizes (2 to 10 s) to study the influence of this parameter on the classification accuracy

sliding_window.py
reproduces generalised.py but with only the 4 models using feature extraction (LDA, SVC, kNN and ANN) and with a 2 s sliding window on the 10 s epochs

personalised.py
compares the 6 models (LDA, SVC, kNN, ANN, CNN and LSTM) on the 5 datasets with a personalised approach (training and testing on each subject individually)

visualisation.py
enables visualising the data from the datasets with various signal processing applied
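To give an intuition for the sliding-window approach used by sliding_window.py, the sketch below splits a single epoch into fixed-length windows with NumPy. The function name, sampling rate, and shapes are assumptions for illustration only, not BenchNIRS code:

```python
import numpy as np

def sliding_windows(epoch, fs, win_s=2.0, step_s=2.0):
    """Split one epoch (channels x samples) into fixed-length windows.

    Illustrative sketch of the sliding-window idea; not BenchNIRS code.
    """
    win = int(win_s * fs)
    step = int(step_s * fs)
    n_samples = epoch.shape[-1]
    starts = range(0, n_samples - win + 1, step)
    return np.stack([epoch[..., s:s + win] for s in starts])

# A fake 10 s epoch with 4 channels sampled at 10 Hz
fs = 10
epoch = np.random.randn(4, 10 * fs)
windows = sliding_windows(epoch, fs)  # shape: (5, 4, 20)
```

With a 2 s window and a 2 s step, a 10 s epoch yields 5 non-overlapping windows; each window can then be classified independently, as in sliding_window.py.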
Example of use
An example script showing how to use the framework with a custom deep learning model can be found here: https://hanbnrd.gitlab.io/benchnirs/example.html.
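The linked page documents the exact interface BenchNIRS expects; as a rough sketch only, a custom deep learning model passed to the framework could be an ordinary PyTorch module along these lines (the class name, input shape, and layer choices are assumptions for illustration, not taken from the BenchNIRS documentation):

```python
import torch
import torch.nn as nn

class CustomCNN(nn.Module):
    """Hypothetical 1D CNN for fNIRS epochs shaped (batch, channels, samples)."""

    def __init__(self, n_channels=4, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 8, kernel_size=5),  # temporal convolution
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time dimension
        )
        self.classifier = nn.Linear(8, n_classes)

    def forward(self, x):
        x = self.features(x).squeeze(-1)  # (batch, 8)
        return self.classifier(x)         # (batch, n_classes)

model = CustomCNN()
out = model(torch.randn(16, 4, 100))  # a batch of 16 epochs
```

Refer to the example page above for how such a module is actually plugged into the benchmarking pipeline.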
Contributing to the repository
Contributions from the community to this repository are highly appreciated. We are mainly interested in contributions to:
- improving the recommendation checklist
- adding more fNIRS signal processing techniques
- adding support for new open access datasets
- tracking bugs
Contributions are encouraged in the form of issues (for reporting bugs or requesting new features) and merge requests (for fixing bugs and implementing new features). Please refer to this tutorial for creating merge requests from a fork of the repository.
Citing us
Please cite us if you are using this framework:
@article{benerradi2023benchmarking,
title={Benchmarking framework for machine learning classification from fNIRS data},
author={Benerradi, Johann and Clos, Jeremie and Landowska, Aleksandra and Valstar, Michel F and Wilson, Max L},
journal={Frontiers in Neuroergonomics},
volume={4},
year={2023},
publisher={Frontiers Media},
url={https://www.frontiersin.org/articles/10.3389/fnrgo.2023.994969},
doi={10.3389/fnrgo.2023.994969},
issn={2673-6195}
}
If you are using the datasets included in the framework, please also cite the related works below.
Herff et al. 2014:
@article{herff2014mental,
title={Mental workload during n-back task—quantified in the prefrontal cortex using fNIRS},
author={Herff, Christian and Heger, Dominic and Fortmann, Ole and Hennrich, Johannes and Putze, Felix and Schultz, Tanja},
journal={Frontiers in human neuroscience},
volume={7},
pages={935},
year={2014},
publisher={Frontiers}
}
Shin et al. 2018:
@article{shin2018simultaneous,
title={Simultaneous acquisition of EEG and NIRS during cognitive tasks for an open access dataset},
author={Shin, Jaeyoung and Von L{\"u}hmann, Alexander and Kim, Do-Won and Mehnert, Jan and Hwang, Han-Jeong and M{\"u}ller, Klaus-Robert},
journal={Scientific data},
volume={5},
pages={180003},
year={2018},
publisher={Nature Publishing Group}
}
Shin et al. 2016:
@article{shin2016open,
title={Open access dataset for EEG+NIRS single-trial classification},
author={Shin, Jaeyoung and von L{\"u}hmann, Alexander and Blankertz, Benjamin and Kim, Do-Won and Jeong, Jichai and Hwang, Han-Jeong and M{\"u}ller, Klaus-Robert},
journal={IEEE Transactions on Neural Systems and Rehabilitation Engineering},
volume={25},
number={10},
pages={1735--1745},
year={2016},
publisher={IEEE}
}
Bak et al. 2019:
@article{bak2019open,
title={Open-Access fNIRS Dataset for Classification of Unilateral Finger- and Foot-Tapping},
author={Bak, SuJin and Park, Jinwoo and Shin, Jaeyoung and Jeong, Jichai},
journal={Electronics},
volume={8},
number={12},
pages={1486},
year={2019},
publisher={Multidisciplinary Digital Publishing Institute}
}