BenchNIRS

Benchmarking framework for machine learning with fNIRS

Quick links
Journal article
BenchNIRS repository
Install BenchNIRS
Documentation
Issue tracker


Documentation

The documentation of the framework can be found here: https://hanbnrd.gitlab.io/benchnirs.

Recommendation checklist

A checklist of recommendations towards good practice for machine learning with fNIRS (for brain-computer interface applications) can be found here. We welcome contributions from the community to improve it; please see below for more information on how to contribute.

Minimum tested requirements

Python 3.8 with the libraries listed in requirements.txt (see Setup below).

Setup

Download and install Python 3.8 or greater. During the installation process, add Python to the PATH for simplicity.

Download and unzip the BenchNIRS repository.

In a terminal or command prompt, navigate to the directory containing the requirements.txt file and run:

python -m pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
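As an optional sanity check that the dependencies have resolved correctly (the command above pulls PyTorch wheels from the official index, so torch should then be importable), you can run:

python -c "import torch; print(torch.__version__)"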

Download the datasets:

  • Herff et al. 2014: you can download the dataset by making a request here. In the examples, the unzipped folder has been renamed to dataset_herff_2014 for clarity.
  • Shin et al. 2018: you can download the dataset here. In the examples, the unzipped folder has been renamed to dataset_shin_2018 for clarity.
  • Shin et al. 2016: you can download the dataset by filling in the form here. Then click on NIRS_01-29 to download the fNIRS data. In the examples, the unzipped folder has been renamed to dataset_shin_2016 for clarity.
  • Bak et al. 2019: you can download the dataset here. In the examples, the unzipped folder has been renamed to dataset_bak_2019 for clarity.

Alternatively, the BenchNIRS library containing the core functions (without the main scripts) is available on PyPI and can be installed using pip:

pip install benchnirs

and updated to the newest version with:

pip install --upgrade benchnirs
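To check which version of BenchNIRS is installed, one option (using the standard library package metadata available from Python 3.8) is:

python -c "import importlib.metadata as m; print(m.version('benchnirs'))"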

Running main scripts

  • generalised.py compares the 6 models (LDA, SVC, kNN, ANN, CNN and LSTM) on the 5 datasets with a generalised approach (testing with unseen subjects); see the example command after this list
  • dataset_size.py reproduces generalised.py but with a range of different dataset sizes (50% to 100% of the dataset) to study the influence of this parameter on the classification accuracy
  • window_size.py reproduces generalised.py but with only the 4 models using feature extraction (LDA, SVC, kNN and ANN) and with a range of different window sizes (2 to 10 sec) to study the influence of this parameter on the classification accuracy
  • sliding_window.py reproduces generalised.py but with only the 4 models using feature extraction (LDA, SVC, kNN and ANN) and with a 2 sec sliding window on the 10 sec epochs
  • personalised.py compares the 6 models (LDA, SVC, kNN, ANN, CNN and LSTM) on the 5 datasets with a personalised approach (training and testing with each subject individually)
  • visualisation.py enables visualisation of the data from the datasets with various signal processing techniques
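For example, once the datasets have been downloaded and renamed as described above, and assuming you are in the directory containing the scripts, the first benchmark can be launched with:

python generalised.py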

Example

An example script showing how to use the framework with a custom deep learning model can be found here: https://hanbnrd.gitlab.io/benchnirs/example.html.

Simple use case

BenchNIRS makes it simple to evaluate your model in Python on one of the supported open access datasets:

import benchnirs as bn

# Load the n-back epochs from the Shin et al. 2018 dataset
epochs = bn.load_dataset('shin_2018_nb')
# Select and process the epochs for the three conditions of interest
data = bn.process_epochs(epochs['0-back', '2-back', '3-back'])
# Evaluate a custom deep learning model on the processed data
results = bn.deep_learn(*data, my_model)

print(results)
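In this snippet, my_model stands for your own deep learning model (see the example page linked above for a full walkthrough). Purely as an illustrative sketch, assuming a PyTorch module whose layer sizes are placeholders rather than values required by bn.deep_learn, such a model could look like:

import torch.nn as nn

class MyModel(nn.Module):
    """Hypothetical 3-class classifier; sizes are illustrative placeholders."""

    def __init__(self, n_features=8, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

my_model = MyModel()

Refer to the example page for the exact input shape expected for the chosen dataset and for whether bn.deep_learn takes a model class or an instance.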

Contributing to the repository

Contributions from the community to this repository are highly appreciated. We are mainly interested in contributions to:

  • improving the recommendation checklist
  • adding more fNIRS signal processing techniques
  • adding support for new open access datasets
  • tracking bugs

Contributions are encouraged in the form of issues (for reporting bugs or requesting new features) and merge requests (for fixing bugs and implementing new features). Please refer to this tutorial for creating merge requests from a fork of the repository.

Citing us

Please cite us if you are using this framework:

@article{benerradi2023benchmarking,
  title={Benchmarking framework for machine learning classification from fNIRS data},
  author={Benerradi, Johann and Clos, Jeremie and Landowska, Aleksandra and Valstar, Michel F and Wilson, Max L},
  journal={Frontiers in Neuroergonomics},
  volume={4},
  year={2023},
  publisher={Frontiers Media},
  url={https://www.frontiersin.org/articles/10.3389/fnrgo.2023.994969},
  doi={10.3389/fnrgo.2023.994969},
  issn={2673-6195}
}

If you are using the datasets of the framework, please also cite the related works below.

Herff et al. 2014:

@article{herff2014mental,
  title={Mental workload during n-back task—quantified in the prefrontal cortex using fNIRS},
  author={Herff, Christian and Heger, Dominic and Fortmann, Ole and Hennrich, Johannes and Putze, Felix and Schultz, Tanja},
  journal={Frontiers in Human Neuroscience},
  volume={7},
  pages={935},
  year={2014},
  publisher={Frontiers}
}

Shin et al. 2018:

@article{shin2018simultaneous,
  title={Simultaneous acquisition of EEG and NIRS during cognitive tasks for an open access dataset},
  author={Shin, Jaeyoung and Von L{\"u}hmann, Alexander and Kim, Do-Won and Mehnert, Jan and Hwang, Han-Jeong and M{\"u}ller, Klaus-Robert},
  journal={Scientific Data},
  volume={5},
  pages={180003},
  year={2018},
  publisher={Nature Publishing Group}
}

Shin et al. 2016:

@article{shin2016open,
  title={Open access dataset for EEG+NIRS single-trial classification},
  author={Shin, Jaeyoung and von L{\"u}hmann, Alexander and Blankertz, Benjamin and Kim, Do-Won and Jeong, Jichai and Hwang, Han-Jeong and M{\"u}ller, Klaus-Robert},
  journal={IEEE Transactions on Neural Systems and Rehabilitation Engineering},
  volume={25},
  number={10},
  pages={1735--1745},
  year={2016},
  publisher={IEEE}
}
}

Bak et al. 2019:

@article{bak2019open,
  title={Open-Access fNIRS Dataset for Classification of Unilateral Finger-and Foot-Tapping},
  author={Bak, SuJin and Park, Jinwoo and Shin, Jaeyoung and Jeong, Jichai},
  journal={Electronics},
  volume={8},
  number={12},
  pages={1486},
  year={2019},
  publisher={Multidisciplinary Digital Publishing Institute}
}
