FALCON Benchmark and Challenge
This package contains core code for submitting decoders to the FALCON challenge. For a more general overview of FALCON, please see the main website.
Installation
Install `falcon_challenge` with:

```bash
pip install falcon-challenge
```
To create Docker containers for submission, you must have Docker installed; see, e.g., https://docs.docker.com/desktop/install/linux-install/.
Getting started
Data downloading
The FALCON datasets are available on DANDI (H1, H2, M1, M2, B1). H1 and H2 are human intracortical brain-computer interface (iBCI) datasets, M1 and M2 are monkey iBCI datasets, and B1 is a songbird iBCI dataset. You can download each dataset individually by finding its DANDI download command on its DANDI page, or you can run `./download_falcon_datasets.sh` from the project root.
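If you prefer to script downloads rather than use the shell helper, the DANDI client also exposes a Python download function. The sketch below is illustrative, not part of this package: it assumes `pip install dandi`, uses `dandi.download.download` as we understand its public API, and uses dandiset `000954` (H1, which appears in the sample commands later in this README); substitute the IDs from each dataset's DANDI page for the others.

```python
# Sketch: download a FALCON dandiset with the DANDI Python client.
# Assumes `pip install dandi`; 000954 (H1) is used as an example ID.
from pathlib import Path

from dandi.download import download

DATA_DIR = Path("data")
DATA_DIR.mkdir(exist_ok=True)

# Download the full released dandiset into ./data/000954/
download("https://dandiarchive.org/dandiset/000954", str(DATA_DIR))
```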
Data from each dataset is broken down as follows:
- Held-in:
  - Data from the first several recording sessions.
  - All non-evaluation data is released and split into calibration (large portion) and minival (small portion) sets.
  - Held-in calibration data is intended to train decoders from scratch.
  - Minival data enables validation of held-in decoders and submission debugging.
- Held-out:
  - Data from the latter several recording sessions.
  - A small portion of non-evaluation data is released for calibration.
  - Held-out calibration data is intentionally small to discourage training decoders from scratch on this data, and provides an opportunity for few-shot recalibration.
Some of the sample code expects your data directory to be set up in `./data`. Specifically, the following hierarchy is expected:
```
data
├── h1
│   ├── held_in_calib
│   ├── held_out_calib
│   └── minival          # copy the dandiset minival folder into this folder
├── h2
│   ├── held_in_calib
│   ├── held_out_calib
│   └── minival          # copy the dandiset minival folder into this folder
├── m1
│   ├── sub-MonkeyL-held-in-calib
│   ├── sub-MonkeyL-held-out-calib
│   └── minival          # copy the dandiset minival folder into this folder
└── m2
    ├── held_in_calib
    ├── held_out_calib
    └── minival          # copy the dandiset minival folder into this folder
```
Each of the lowest-level directories holds the data files, in Neurodata Without Borders (NWB) format. Data from some sessions is distributed across multiple NWB files, and some data from each file is allocated to the calibration, minival, and evaluation splits as appropriate.
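As a quick sanity check after downloading, you can confirm the layout and open one of the NWB files. The snippet below is a minimal sketch that assumes the `./data` hierarchy above and a separate `pip install pynwb`:

```python
# Sketch: verify the expected ./data layout and peek at one NWB file.
# Assumes `pip install pynwb` and the directory hierarchy shown above.
from pathlib import Path

from pynwb import NWBHDF5IO

DATA_DIR = Path("data")

# Count NWB files in each dataset/split directory, e.g. data/h1/minival.
for split_dir in sorted(p for p in DATA_DIR.glob("*/*") if p.is_dir()):
    nwb_files = sorted(split_dir.glob("*.nwb"))
    print(f"{split_dir}: {len(nwb_files)} NWB file(s)")

# Open the first H1 calibration file and print its session metadata.
example = next((DATA_DIR / "h1" / "held_in_calib").glob("*.nwb"))
with NWBHDF5IO(str(example), "r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description, nwbfile.session_start_time)
```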
Code
This codebase contains starter code for implementing your own method for the FALCON challenge.
- The `falcon_challenge` folder contains the logic for the evaluator. Submitted solutions must conform to the interface specified in `falcon_challenge.interface`.
- In `data_demos`, we provide notebooks that survey each dataset released as part of this challenge.
- In `decoder_demos`, we provide sample decoders and baselines that are formatted to be ready for submission to the challenge. To use them, see the comments in the header of each file ending in `_sample.py`. Your solutions should look similar once implemented! (Namely, you should have a `_decoder.py` file or class which conforms to `falcon_challenge.interface`, as well as a `_sample.py` file that is the entry point for running your decoder; see the sketch after this list.)
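To make the `_decoder.py` structure concrete, here is a hypothetical skeleton. Only `reset` is named in this README; the class name, constructor, and `predict` method below are placeholder assumptions, and the authoritative contract is the abstract class in `falcon_challenge.interface` (the `_sample.py` files in `decoder_demos` show real entry points).

```python
# my_decoder.py -- hypothetical skeleton of a FALCON submission decoder.
# Only `reset` is referenced in this README; other names are placeholders.
# Conform to the abstract class in falcon_challenge.interface.
import numpy as np


class MyDecoder:
    def __init__(self, weights_path: str):
        # Load any pretrained parameters shipped inside your Docker image.
        self.weights = np.load(weights_path)
        self.history = []

    def reset(self, dataset_tag: str) -> None:
        # Called with a hashed tag identifying the source data file;
        # clear any streaming state between files here.
        self.history.clear()

    def predict(self, neural_bin: np.ndarray) -> np.ndarray:
        # Placeholder linear readout: map one bin of neural features
        # to behavioral outputs.
        self.history.append(neural_bin)
        return neural_bin @ self.weights
```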
For example, you can prepare and evaluate a linear decoder by running:
```bash
python decoder_demos/sklearn_decoder.py --training_dir data/000954/sub-HumanPitt-held-in-calib/ --calibration_dir data/000954/sub-HumanPitt-held-out-calib/ --mode all --task h1
# Should report: CV fit score, 0.26

python decoder_demos/sklearn_sample.py --evaluation local --phase minival --split h1
# Should report: Held In Mean of 0.195
```
Note: During evaluation, data file names are hashed into unique tags. Submitted solutions receive the data to decode along with tags indicating the file from which the data originates in the call to their `reset` function. These tags are the keys of the `DATASET_HELDINOUT_MAP` dictionary in `falcon_challenge/evaluator.py`. Submissions that intend to condition decoding on the data file from which the data comes should make use of these tags. For an example, see `fit_many_decoders` and `reset` in `decoder_demos/sklearn_decoder.py`.
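A minimal sketch of this pattern follows, in the spirit of `fit_many_decoders` and `reset` in `decoder_demos/sklearn_decoder.py`. The confirmed hooks are the `reset` call and the tag keys of `DATASET_HELDINOUT_MAP`; the names `session_models`, `fallback_model`, and `predict` below are hypothetical.

```python
# Sketch: condition decoding on the originating data file via reset tags.
# `session_models` and `fallback_model` are hypothetical names.


class TagConditionedDecoder:
    def __init__(self, session_models: dict, fallback_model):
        # One fitted model per known session tag, plus a pooled fallback
        # for tags unseen during calibration.
        self.session_models = session_models
        self.fallback_model = fallback_model
        self.active_model = fallback_model

    def reset(self, dataset_tag: str) -> None:
        # The evaluator passes a hashed file tag; switch models here.
        self.active_model = self.session_models.get(
            dataset_tag, self.fallback_model
        )

    def predict(self, neural_bin):
        # Decode one bin with whichever model the current tag selected.
        return self.active_model.predict(neural_bin[None, :])[0]
```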
Docker Submission
To interface with our challenge, your code will need to be packaged in a Docker container that is submitted to EvalAI. Try this process by building and running the provided `sklearn_sample.Dockerfile` to confirm your setup works. Once Docker is installed, do this with the following commands:
```bash
# Build
docker build -t sk_smoke -f ./decoder_demos/sklearn_sample.Dockerfile .
bash test_docker_local.sh --docker-name sk_smoke
```
For an example Dockerfile with annotations regarding the necessity and function of each line, see `decoder_demos/template.Dockerfile`.
EvalAI Submission
Please ensure that your submission runs locally before running remote evaluation. You can run the previously listed commands with your own Dockerfile (in place of `sk_smoke`). This should produce a log of nontrivial metrics (evaluation is run on the locally available minival split).
To submit to the FALCON benchmark once your decoder Docker container is ready, follow the instructions on the EvalAI submission tab. This will instruct you to first install EvalAI, then add your token, and finally push the submission. It should look something like:
```bash
evalai push mysubmission:latest --phase few-shot-<test/minival>-2319 --private
```

(Note that you will not see these instructions unless you have first created a team to submit. The phase argument contains a challenge-specific identifier. You may need to refresh the page before the instructions appear.)
Please note that all submissions are subject to a 6-hour time limit.
Troubleshooting
Docker:
- If this is your first time with Docker, note that `sudo` access is needed, or your user needs to be in the `docker` group. `docker info` should run without error.
- While `sudo` is sufficient for local development, the EvalAI submission step will ultimately require your user to be able to run `docker` commands without `sudo`.
- To do this, add yourself to the `docker` group. Note that you may need `vigr` to add your own user.
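On typical Linux installations, `sudo usermod -aG docker $USER` followed by logging out and back in accomplishes this; see Docker's post-installation steps for Linux for details.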
EvalAI:
- `pip install evalai` may fail on Python 3.11; see https://github.com/aio-libs/aiohttp/issues/6600. We recommend creating a separate environment for submission in this case.
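For example, an environment created with an earlier interpreter (e.g., `python3.10 -m venv evalai-env`, then installing `evalai` inside it) should avoid the aiohttp build failure.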