Benchmarking parameter-efficient fine-tuning of EEG foundation models


🧠 Open EEG Bench


Benchmark any EEG foundation model with one function call.

✨ Why Open EEG Bench?

  • 🎯 One function, all results — Call benchmark() with your model and get a full evaluation across 12 datasets
  • 📦 Zero preprocessing — All datasets are pre-windowed and hosted on HuggingFace Hub, ready to use
  • 🧪 8 fine-tuning strategies — Frozen linear probing, LoRA, IA3, AdaLoRA, DoRA, OFT, full fine-tuning, and two-stage fine-tuning
  • 🔌 Bring your own model — Any PyTorch model that takes EEG input and returns features works out of the box
  • 🔒 Reproducible by design — A single config object fully describes a run. No YAML files, no hidden state

🚀 Installation

```bash
pip install open-eeg-bench
```

🏁 Benchmark your model

```python
from open_eeg_bench import benchmark

results = benchmark(
    model_cls="my_package.MyModel",  # import path to your model class
    checkpoint_url="https://my-weights.pth",
)

print(results)  # pd.DataFrame
```

This runs linear probing on all 12 datasets and returns a DataFrame with one row per result.
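Because the return value is a plain pandas DataFrame, summarizing results is ordinary pandas. A minimal sketch — the column names ("dataset", "strategy", "seed", "score") are illustrative assumptions, not the library's documented schema:

```python
import pandas as pd

# Toy stand-in for the returned DataFrame; column names are assumptions.
results = pd.DataFrame({
    "dataset": ["bcic2a", "bcic2a", "physionet", "physionet"],
    "strategy": ["frozen", "frozen", "frozen", "frozen"],
    "seed": [0, 1, 0, 1],
    "score": [0.61, 0.63, 0.55, 0.57],
})

# Average over seeds to get one score per dataset/strategy pair
summary = results.groupby(["dataset", "strategy"])["score"].mean()
print(summary)
```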

You can pick specific datasets, fine-tuning strategies, classification heads, and the number of initialization seeds:

```python
results = benchmark(
    model_cls="my_package.MyModel",
    checkpoint_url="https://my-weights.pth",
    datasets=["arithmetic_zyma2019", "bcic2a", "physionet"],
    finetuning_strategies=["frozen", "lora"],
    peft_target_modules=[  # required for LoRA, IA3, AdaLoRA, OFT, and DoRA
        "encoder.linear1",
        "encoder.linear2",
    ],
    heads=["linear_head", "mlp_head"],
    n_seeds=5,
)
```

Need to run on a SLURM cluster? No sbatch scripts needed — see Running on a cluster.

📐 Model requirements

Your model only needs to:

  1. Accept input of shape (batch, n_chans, n_times)
  2. Return output of shape (batch, n_outputs)
  3. Have a named module for the classification head (default: self.final_layer)
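These requirements are easy to satisfy. The toy model below is a minimal sketch of a compliant model — the architecture itself is illustrative; only the input/output shapes and the named head module matter:

```python
import torch
from torch import nn


class MyModel(nn.Module):
    """Minimal sketch of a compliant model; the architecture is illustrative.

    Only the three requirements matter: input of shape (batch, n_chans,
    n_times), output of shape (batch, n_outputs), and a named head module
    (default name: "final_layer").
    """

    def __init__(self, n_chans=22, n_times=400, n_outputs=4, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),                          # (batch, n_chans * n_times)
            nn.Linear(n_chans * n_times, hidden),
            nn.ReLU(),
        )
        self.final_layer = nn.Linear(hidden, n_outputs)  # the named head

    def forward(self, x):                          # x: (batch, n_chans, n_times)
        return self.final_layer(self.encoder(x))   # (batch, n_outputs)


out = MyModel()(torch.randn(8, 22, 400))
print(out.shape)  # torch.Size([8, 4])
```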

⚙️ benchmark() parameters

| Parameter | Required | Description |
|---|---|---|
| `model_cls` | Yes | Dotted import path to your model class |
| `hub_repo` | One of three* | HuggingFace Hub repo ID for weights |
| `checkpoint_url` | One of three* | URL to pretrained weights |
| `checkpoint_path` | One of three* | Local path to pretrained weights |
| `peft_target_modules` | For PEFT | Module names to adapt (e.g. `["to_q", "to_k", "to_v"]`) |
| `peft_ff_modules` | For IA3 | Feedforward module names for the IA3 adapter |
| `model_kwargs` | No | Extra kwargs for the model constructor |
| `head_module_name` | No | Name of the head module (default: `"final_layer"`) |
| `normalization` | No | Normalization applied to each window after windowing (e.g. z-scoring) |
| `datasets` | No | Dataset names to evaluate on (default: all 12) |
| `heads` | No | Head names: `"linear_head"`, `"mlp_head"`, `"original_head"` (default: `["linear_head"]`) |
| `finetuning_strategies` | No | Strategy names (default: `["frozen"]`) |
| `n_seeds` | No | Number of random seeds (default: 3) |
| `device` | No | `"cpu"`, `"cuda"`, etc. (default: `"cpu"`) |
| `infra` | No | Infrastructure config for caching and cluster submission (see cluster docs) |
| `max_workers` | No | Max simultaneous SLURM jobs (default: 256) |

*Exactly one of `hub_repo`, `checkpoint_url`, or `checkpoint_path` must be provided.

📊 Available datasets

All 12 datasets are pre-windowed and hosted on HuggingFace Hub:

| Dataset | HF ID | Classes | Window size | Task |
|---|---|---|---|---|
| Arithmetic (Zyma 2019) | braindecode/arithmetic_zyma2019 | 2 | 5 s | Mental arithmetic vs. rest |
| BCI Competition IV 2a | braindecode/bcic2a | 4 | 4 s | Motor imagery |
| BCI Competition 2020-3 | braindecode/bcic2020-3 | 5 | 3 s | Imagined speech |
| PhysioNet MI | braindecode/physionet | 4 | 3 s | Motor imagery |
| CHB-MIT | braindecode/chbmit | 2 | 10 s | Seizure detection |
| FACED | braindecode/faced | 9 | 10 s | Emotion recognition |
| ISRUC-Sleep | braindecode/isruc-sleep | 5 | 30 s | Sleep staging |
| MDD (Mumtaz 2016) | braindecode/mdd_mumtaz2016 | 2 | 5 s | Depression detection |
| SEED-V | braindecode/seed-v | 5 | 1 s | Emotion recognition |
| SEED-VIG | braindecode/seed-vig | regression | 8 s | Vigilance estimation |
| TUAB | braindecode/tuab | 2 | 10 s | Abnormal EEG detection |
| TUEV | braindecode/tuev | 6 | 5 s | EEG event classification |

Preprocessing: Window lengths are dataset-dependent (see table above). All datasets are high-pass filtered at 0.1 Hz, except for tasks with short trial windows (2 s or less), where we use 0.5 Hz. All datasets are resampled to 100 Hz. Model-specific normalization (e.g. z-scoring) can be applied via the normalization parameter.
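If your model expects z-scored input, per-channel z-scoring of each window is one way to express that. The sketch below is illustrative — whether the normalization parameter accepts a plain callable like this is an assumption about the API, not documented behavior:

```python
import numpy as np


def zscore_window(x, eps=1e-8):
    """Per-channel z-scoring of one window of shape (n_chans, n_times).

    Illustrative sketch only; passing a callable to `normalization`
    is an assumption, not documented API.
    """
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)  # eps guards against flat channels


window = np.random.randn(22, 500) * 50 + 10  # 22 channels, 5 s at 100 Hz
normed = zscore_window(window)
print(normed.mean(), normed.std())  # each channel now has mean ~0, std ~1
```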

🧩 Available fine-tuning strategies

| Strategy | Description | Trainable params |
|---|---|---|
| Frozen() | Freeze encoder, train only the head | ~0.01% |
| LoRA(r, alpha) | Low-Rank Adaptation | ~1-5% |
| IA3() | Inhibiting and Amplifying Inner Activations | ~0.1% |
| AdaLoRA(r, target_r) | Adaptive rank allocation | ~1-5% |
| DoRA(r, alpha) | Weight-Decomposed LoRA | ~1-5% |
| OFT(block_size) | Orthogonal Fine-Tuning | ~1-10% |
| FullFinetune() | Train all parameters | 100% |
| TwoStages() | Frozen head for 10 epochs, then unfreeze and train all | 100% |
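The small trainable fractions above follow directly from the adapter shapes. For LoRA(r, alpha), each adapted d_out × d_in weight stays frozen while only two low-rank factors are trained. A back-of-envelope check (the layer size and rank here are hypothetical, not defaults of this library):

```python
# Parameter count for LoRA on one linear layer.
d_in, d_out, r = 768, 768, 8           # hypothetical layer sizes and rank

full = d_in * d_out                    # frozen base weight W (d_out x d_in)
lora = r * (d_in + d_out)              # trainable A (r x d_in) + B (d_out x r)

print(full, lora, round(lora / full, 4))  # 589824 12288 0.0208
```

At rank 8 this is about 2% of the layer's parameters, consistent with the ~1-5% range in the table.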

📄 License

BSD-3-Clause (see LICENSE.txt)
