
FALCON Benchmark and Challenge

This package contains core code for submitting decoders to the FALCON challenge. For a more general overview of FALCON, please see the main website.

Installation

Install falcon_challenge with:

pip install falcon-challenge

To create Docker containers for submission, you must have Docker installed; see, e.g., https://docs.docker.com/desktop/install/linux-install/.

Getting started

Data downloading

The FALCON datasets are available on DANDI (H1, H2, M1, M2, B1). H1 and H2 are human intracortical brain-computer interface (iBCI) datasets, M1 and M2 are monkey iBCI datasets, and B1 is a songbird iBCI dataset. You can download them individually by visiting their DANDI pages to find their respective DANDI download commands, or you can run ./download_falcon_datasets.sh from the project root.

Data from each dataset is broken down as follows:

  • Held-in:
    • Data from the first several recording sessions.
    • All non-evaluation data is released and split into calibration (large portion) and minival (small portion) sets.
    • Held-in calibration data is intended for training decoders from scratch.
    • Minival data enables validation of held-in decoders and submission debugging.
  • Held-out:
    • Data from the latter several recording sessions.
    • A small portion of non-evaluation data is released for calibration.
    • Held-out calibration data is intentionally small: it discourages training decoders from scratch on these sessions while providing an opportunity for few-shot recalibration.

Some of the sample code expects your data directory to be set up in ./data. Specifically, the following hierarchy is expected:

data

  • h1
    • held_in_calib
    • held_out_calib
    • minival (Copy dandiset minival folder into this folder)
  • h2
    • held_in_calib
    • held_out_calib
    • minival (Copy dandiset minival folder into this folder)
  • m1
    • sub-MonkeyL-held-in-calib
    • sub-MonkeyL-held-out-calib
    • minival (Copy dandiset minival folder into this folder)
  • m2
    • held_in_calib
    • held_out_calib
    • minival (Copy dandiset minival folder into this folder)

Each of the lowest-level directories holds the data files in Neurodata Without Borders (NWB) format. Data from some sessions is distributed across multiple NWB files, and data from each file is allocated to calibration, minival, and evaluation splits as appropriate.
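One way to lay out the hierarchy above before copying files in (a sketch: directory names follow the listing above, and the final copy step is a placeholder for wherever your DANDI downloads landed):

```shell
# Create the directory skeleton the sample code expects under ./data
mkdir -p data/h1/held_in_calib data/h1/held_out_calib data/h1/minival
mkdir -p data/h2/held_in_calib data/h2/held_out_calib data/h2/minival
mkdir -p data/m1/sub-MonkeyL-held-in-calib data/m1/sub-MonkeyL-held-out-calib data/m1/minival
mkdir -p data/m2/held_in_calib data/m2/held_out_calib data/m2/minival
# Then copy each dandiset's minival folder contents in, e.g.:
# cp <path-to-dandiset>/minival/*.nwb data/h1/minival/
ls data
```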

Code

This codebase contains starter code for implementing your own method for the FALCON challenge.

  • The falcon_challenge folder contains the logic for the evaluator. Submitted solutions must conform to the interface specified in falcon_challenge.interface. In the reset, predict, and observe methods, your approach receives a new timestep of neural observations. To access and make use of trial timing signals, implement the on_done method. Only within-trial data is considered for evaluation, but you are welcome to use data from the entire available time period.
  • In data_demos, we provide notebooks that survey each dataset released as part of this challenge.
  • In decoder_demos, we provide sample decoders and baselines formatted to be ready for submission to the challenge. To use them, see the comments in the header of each file ending in _sample.py. Your solutions should look similar once implemented! (Namely, you should have a _decoder.py file or class that conforms to falcon_challenge.interface, as well as a _sample.py file that is the entry point for running your decoder.)
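To make the interface concrete, here is a standalone sketch of a decoder with the method shapes described above (reset/predict/observe per timestep, on_done at trial boundaries). It is illustrative only: the argument names and the linear readout are assumptions, and a real submission would subclass the interface in falcon_challenge.interface and load trained weights.

```python
import numpy as np

class LinearSketchDecoder:
    """Illustrative decoder skeleton; not the package's exact signatures."""

    def __init__(self, n_channels: int, n_outputs: int):
        rng = np.random.default_rng(0)
        # Placeholder weights; a real submission would load trained weights.
        self.W = rng.standard_normal((n_channels, n_outputs)) * 0.01
        self.history = []

    def reset(self, dataset_tag: str = "") -> None:
        # Called at the start of each data file; the tag identifies the source.
        self.history.clear()

    def predict(self, neural_obs: np.ndarray) -> np.ndarray:
        # One timestep of binned neural observations in, one behavioral estimate out.
        self.history.append(neural_obs)
        return neural_obs @ self.W

    def observe(self, neural_obs: np.ndarray) -> None:
        # Receive a timestep without being asked for a prediction.
        self.history.append(neural_obs)

    def on_done(self, dones: np.ndarray) -> None:
        # Trial-boundary signal; e.g., clear within-trial state here.
        if np.any(dones):
            self.history.clear()

decoder = LinearSketchDecoder(n_channels=176, n_outputs=2)
decoder.reset("example_session")
pred = decoder.predict(np.ones(176))
print(pred.shape)  # (2,)
```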

For example, you can prepare and evaluate a linear decoder by running:

python decoder_demos/sklearn_decoder.py --training_dir data/000954/sub-HumanPitt-held-in-calib/ --calibration_dir data/000954/sub-HumanPitt-held-out-calib/ --mode all --task h1
# Should report: CV fit score, 0.26

python decoder_demos/sklearn_sample.py --evaluation local --phase minival --split h1
# Should report: Held In Mean of 0.195

Note: During evaluation, data file names are hashed into unique tags. In the call to its reset function, a submitted solution receives the data to decode along with a tag indicating the file the data originates from. These tags are the keys of the DATASET_HELDINOUT_MAP dictionary in falcon_challenge/evaluator.py. Submissions that intend to condition decoding on the source data file should make use of these tags. For an example, see fit_many_decoders and reset in decoder_demos/sklearn_decoder.py.
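A minimal sketch of tag-conditioned decoding: keep one set of weights per session and swap them in at reset. The tag strings and the average-weights fallback here are made up for illustration; the real tags are the keys of DATASET_HELDINOUT_MAP, and the fitted-per-session pattern is what fit_many_decoders demonstrates.

```python
import numpy as np

class PerSessionDecoder:
    """Select session-specific weights based on the tag passed to reset."""

    def __init__(self):
        self.weights_by_tag = {}  # tag -> weight matrix fit on that session
        self.active = None

    def fit_session(self, tag, W):
        # In practice: fit W on that session's calibration data.
        self.weights_by_tag[tag] = W

    def reset(self, dataset_tag):
        # Known session: use its weights. Unseen tag: fall back to the
        # average of known sessions (one simple choice among many).
        if dataset_tag in self.weights_by_tag:
            self.active = self.weights_by_tag[dataset_tag]
        else:
            self.active = np.mean(list(self.weights_by_tag.values()), axis=0)

    def predict(self, neural_obs):
        return neural_obs @ self.active

dec = PerSessionDecoder()
dec.fit_session("sess-a", np.eye(3))
dec.fit_session("sess-b", 2 * np.eye(3))
dec.reset("sess-a")
print(dec.predict(np.array([1.0, 2.0, 3.0])))  # [1. 2. 3.]
dec.reset("unseen-tag")
print(dec.predict(np.array([1.0, 2.0, 3.0])))  # averaged weights
```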

Docker Submission

To interface with our challenge, your code must be packaged in a Docker container and submitted to EvalAI. Try this process by building and running the provided sklearn_sample.Dockerfile to confirm your setup works. Once Docker is installed, do this with the following commands:

# Build
docker build -t sk_smoke -f ./decoder_demos/sklearn_sample.Dockerfile .
# Run the container against the local minival split
bash test_docker_local.sh --docker-name sk_smoke

For an example Dockerfile with annotations regarding the necessity and function of each line, see decoder_demos/template.Dockerfile.

EvalAI Submission

Please ensure that your submission runs locally before attempting remote evaluation. You can run the previously listed commands with your own Dockerfile (in place of sk_smoke). This should produce a log of nontrivial metrics (local evaluation runs on the locally available minival split).

To submit to the FALCON benchmark once your decoder Docker container is ready, follow the instructions on the EvalAI submission tab. This will instruct you to first install EvalAI, then add your token, and finally push the submission. It should look something like:

evalai push mysubmission:latest --phase few-shot-<test/minival>-2319 --private

(Note that you will not see these instructions unless you have first created a team to submit. The phase should contain a specific challenge identifier. You may need to refresh the page before the instructions appear.)

Please note that all submissions are subject to a 6-hour time limit.

Troubleshooting

Docker:

  • If this is your first time using Docker, note that either sudo access is needed or your user must be in the docker group. docker info should run without error.
  • While sudo is sufficient for local development, the EvalAI submission step ultimately requires your user to be able to run docker commands without sudo.
  • To do this, add yourself to the docker group (e.g. sudo usermod -aG docker $USER). Note you may need vigr to add your own user.

