Brainmarks: fMRI foundation model evaluation suite
Brainmarks is an open evaluation suite for fMRI foundation models.
Installation
```shell
pip install brainmarks
# or
uv add brainmarks
```
Model wrappers for third-party encoders are optional extras:
```shell
pip install "brainmarks[brain-jepa,brainlm,swift,brainharmonix,brain-semantoks,neurostorm]"
```
To install the latest development version from GitHub:
```shell
pip install "brainmarks @ git+https://github.com/MedARC-AI/brainmarks"
```
From source:
```shell
git clone https://github.com/MedARC-AI/brainmarks
cd brainmarks
uv sync --python 3.11
```
Usage
Brainmarks has two main evaluation modes.
Probe — trains a frozen-backbone classifier head (linear, attention, or MLP):
```shell
python -m brainmarks.main_probe <model> <representation> <classifier> <dataset>
# e.g.
python -m brainmarks.main_probe brainlm_vitmae_111m patch attn nsd_cococlip
```
Logistic — extracts embeddings once and fits a logistic regression:
```shell
python -m brainmarks.main_logistic <model> <representation> <dataset>
# e.g.
python -m brainmarks.main_logistic brainlm_vitmae_111m patch aabc_sex
```
`representation` selects which embedding type the model exposes to the head: `cls`, `reg` (registers), or `patch`. Pass `--help` to either command to see the full list of available models and datasets. Use `--config` to pass a YAML config file and `--overrides key=value` for per-run overrides.
```shell
# e.g.
python -m brainmarks.main_logistic \
    brainlm_vitmae_111m \
    patch \
    aabc_sex \
    --overrides \
    batch_size=16 \
    num_workers=4 \
    device=cpu
```
All available options are documented in the default configs: `default_probe.yaml` and `default_logistic.yaml`.
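When the same overrides recur across runs, they can be collected into a YAML file and passed via `--config`. A hypothetical sketch using only the keys from the example above (the file name is made up; the authoritative key list and defaults live in `default_logistic.yaml`):

```yaml
# run_config.yaml (hypothetical name) -- per-run settings;
# see default_logistic.yaml for all supported keys.
batch_size: 16
num_workers: 4
device: cpu
```

Then: `python -m brainmarks.main_logistic brainlm_vitmae_111m patch aabc_sex --config run_config.yaml`.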
Datasets
Benchmark datasets are distributed in Hugging Face Arrow format and hosted in the MedARC R2 bucket. To request access, fill out this form.
Once you have credentials, configure them as environment variables:
```shell
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_ENDPOINT_URL_S3=...  # Cloudflare R2 endpoint
```
Datasets are downloaded automatically on first use and saved to the Hugging Face datasets cache.
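Since missing credentials only surface at the first download, it can help to fail fast before kicking off a long run. A small standalone check (illustrative, not part of brainmarks):

```python
import os

# Credential variables named in the export lines above.
REQUIRED = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_ENDPOINT_URL_S3")

def missing_r2_credentials(env=os.environ):
    """Return the names of any required credential variables that are unset."""
    return [name for name in REQUIRED if not env.get(name)]

missing = missing_r2_credentials()
if missing:
    print("Missing R2 credentials:", ", ".join(missing))
```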
Adding a model
Brainmarks uses namespace package plugin discovery. To add a model from your own repo without modifying this one:
1. Install `brainmarks` as a dependency in your project environment.
2. Create a `brainmarks` namespace package in your repo: `mkdir -p my_repo/src/brainmarks/models`
3. Copy `src/brainmarks/models/template.py` as a starting point and implement `ModelWrapper`, `ModelTransform`, and a `@register_model` constructor.
4. Validate with the smoke test: `python -m brainmarks.models.test_models my_model`
See `template.py` for more details.
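The registration step boils down to a decorator registry: `@register_model` records a constructor under a name so the CLI can look it up later. A stripped-down sketch of that pattern (the real definitions, including the `ModelWrapper` and `ModelTransform` interfaces, live in `template.py`; the registry dict, `my_model` name, and constructor below are all illustrative):

```python
# Minimal decorator-registry sketch; brainmarks' actual registry and
# interfaces are defined in its models package, not here.
MODEL_REGISTRY = {}

def register_model(name):
    """Record a model constructor under `name` (illustrative version)."""
    def decorator(constructor):
        MODEL_REGISTRY[name] = constructor
        return constructor
    return decorator

@register_model("my_model")
def create_my_model(**kwargs):
    # In a real plugin this would build and return a ModelWrapper.
    return {"name": "my_model", **kwargs}

model = MODEL_REGISTRY["my_model"](device="cpu")
print(model["name"])  # my_model
```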
Adding a dataset
Adding a dataset involves two parts: curation scripts that preprocess raw data into Arrow shards, and a loader module that registers the dataset with Brainmarks.
Curation scripts live in `datasets/`, one subdirectory per source dataset. See `datasets/HCP-YA/` for a reference example; it contains metadata, preprocessing scripts, and a README describing the raw data layout and curation steps.
Loader modules live in `src/brainmarks/datasets/`. Each module defines one or more functions decorated with `@register_dataset` that load Arrow shards (local or from S3) into an `HFDataset`. See `src/brainmarks/datasets/hcpya.py` as a reference.
Dataset loader modules are discovered via the same namespace package plugin mechanism as models, so they can live in an external repo.
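The discovery itself rests on PEP 420 implicit namespace packages: a package directory without an `__init__.py` can span multiple `sys.path` entries, so separately installed repos merge into one importable package. A self-contained demonstration of the mechanism using a made-up `plugins` namespace (not brainmarks' actual layout):

```python
import importlib
import pkgutil
import sys
import tempfile
from pathlib import Path

# Build two fake "repos" that both contribute modules to a shared
# `plugins` namespace package (note: no __init__.py files), mirroring
# how an external repo can contribute brainmarks models or datasets.
root = Path(tempfile.mkdtemp())
for repo, mod in [("repo_a", "alpha"), ("repo_b", "beta")]:
    pkg = root / repo / "plugins"
    pkg.mkdir(parents=True)
    (pkg / f"{mod}.py").write_text(f"NAME = '{mod}'\n")
    sys.path.insert(0, str(root / repo))

import plugins  # implicit namespace package spanning both repos

# Enumerate and import every contributed module.
discovered = sorted(m.name for m in pkgutil.iter_modules(plugins.__path__))
for name in discovered:
    importlib.import_module(f"plugins.{name}")

print(discovered)  # ['alpha', 'beta']
```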
Support
For help with any issues, reach out to us on MedARC Discord in the #neuro-fm channel.
Citation
```bibtex
@article{lane2025scaling,
  title   = {Scaling Vision Transformers for Functional {MRI} with Flat Maps},
  author  = {Lane, Connor and Tripathy, Mihir and Murali, Leema Krishna and
             Grandhi, Ratna Sagari and Yang, Shamus Sim Zi and Gijsen, Sam and
             Das, Debojyoti and Ram, Manish and Singh, Utkarsh Kumar and
             Villanueva, Cesar Kadir Torrico and Wei, Yuxiang and Beddow, Will and
             Cort\'{e}s, Gianfranco and Cho, Suin and Kaplan, Daniel Z. and
             Warner, Benjamin and Abraham, Tanishq Mathew and Scotti, Paul S.},
  journal = {arXiv preprint arXiv:2510.13768},
  year    = {2025},
  url     = {https://arxiv.org/abs/2510.13768}
}
```
File details
Details for the file brainmarks-0.1.0.tar.gz.
File metadata
- Download URL: brainmarks-0.1.0.tar.gz
- Upload date:
- Size: 64.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.11.8 {"installer":{"name":"uv","version":"0.11.8","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `612328568edae23846bc917e214d5b6aea7fde6cae46907cb292006070669584` |
| MD5 | `ae9e5803a87c84f5e58c018628c537ca` |
| BLAKE2b-256 | `404fea1a6853793348a43c0fb94aeaec298eebbdfba64bfa6bde42c2b1bd09c4` |
File details
Details for the file brainmarks-0.1.0-py3-none-any.whl.
File metadata
- Download URL: brainmarks-0.1.0-py3-none-any.whl
- Upload date:
- Size: 83.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: uv/0.11.8 {"installer":{"name":"uv","version":"0.11.8","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `5e6addaaf099f32aedd5697962d7d6d06db036b52a44d7c76011673b894385ae` |
| MD5 | `5a00ec6ff1b6c2b337b4da04f70899d0` |
| BLAKE2b-256 | `014bf22c45ea0548d7da15c34e67194a818334aaa247fb39b384d24299751a81` |