
FALCON Benchmark and Challenge

This package contains the core code for submitting decoders to the FALCON challenge. The full GitHub repository contains additional examples and documentation.

Installation

Install falcon_challenge with:

pip install falcon-challenge

To create Docker containers for submission, you must have Docker installed. See, e.g., https://docs.docker.com/desktop/install/linux-install/.
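
After installation, a quick smoke test confirms the package is importable (the module name falcon_challenge is used throughout this README):

python -c "import falcon_challenge; print('falcon_challenge OK')"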

Getting started

Data downloading

The FALCON datasets are available on DANDI (or through private correspondence, if beta-testing).

NOTE FOR BETA TESTERS:

  • Some of the sample code expects your data directory to be set up in ./data. Specifically, the following hierarchy is expected:

data

  • h1
    • held_in_calib
    • held_out_calib
    • minival
    • eval (Note this is private data)
  • m1
    • sub-MonkeyL-held-in-calib
    • sub-MonkeyL-held-out-calib
    • minival (Copy dandiset minival folder into this folder)
    • eval (Copy the ground truth held in and held out data into this folder)

H1 should unfold correctly just from unzipping the provided directory. For M1, rename the provided dandiset folder to m1, copy the dandiset's minival folder inside it, and then copy the provided eval data into its eval folder. Each of the lowest-level directories holds the NWB files.
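
As a quick sanity check of this layout, here is a short sketch; the ./data root and folder names simply mirror the hierarchy above, so adjust them if yours differs:

from pathlib import Path

# Verify the expected ./data hierarchy described above and count NWB files.
expected = {
    "h1": ["held_in_calib", "held_out_calib", "minival", "eval"],
    "m1": ["sub-MonkeyL-held-in-calib", "sub-MonkeyL-held-out-calib", "minival", "eval"],
}
for task, subdirs in expected.items():
    for sub in subdirs:
        d = Path("data") / task / sub
        if d.is_dir():
            print(f"{d}: {len(list(d.rglob('*.nwb')))} NWB file(s)")
        else:
            print(f"{d}: MISSING")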

Code

This codebase contains starter code for implementing your own method for the FALCON challenge.

  • The falcon_challenge folder contains the logic for the evaluator. Submitted solutions must conform to the interface specified in falcon_challenge.interface (see the sketch after this list).
  • In data_demos, we provide notebooks that survey each dataset released as part of this challenge.
  • In decoder_demos, we provide sample decoders and baselines that are formatted to be ready for submission to the challenge. To use them, see the comments in the header of each file ending in _sample.py. Your solutions should look similar once implemented!
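
For orientation, here is a minimal sketch of the decoder shape the samples follow. The method names (reset, predict) are assumptions drawn from the sample decoders; falcon_challenge.interface defines the actual base class and signatures, so defer to it when implementing:

import numpy as np

# Hypothetical illustration only: consult falcon_challenge.interface for the
# real base class and exact method signatures.
class ZeroDecoder:
    """Stateless decoder that predicts zeros at every timestep."""

    def reset(self, dataset=None):
        # Called before each recording session; clear any accumulated state.
        pass

    def predict(self, neural_observations: np.ndarray) -> np.ndarray:
        # Map one timestep of neural features to one behavioral prediction.
        return np.zeros(2)  # placeholder output dimensionality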

For example, you can prepare and evaluate a linear decoder by running:

python decoder_demos/sklearn_decoder.py --training_dir data/h1/held_in_calib/ --calibration_dir data/h1/held_out_calib/ --mode all --task h1
python decoder_demos/sklearn_sample.py --evaluation local --phase minival --split h1
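
Internally, the sample scripts hand a decoder to the package evaluator. A minimal sketch of that pattern follows; the FalconEvaluator import path and constructor arguments are assumptions based on the samples, so check the _sample.py headers for the exact usage:

from falcon_challenge.evaluator import FalconEvaluator  # import path assumed

# Hypothetical wiring mirroring sklearn_sample.py: build an evaluator for a
# split, then hand it a conforming decoder (e.g., the ZeroDecoder sketch above).
evaluator = FalconEvaluator(eval_remote=False, split="h1")
evaluator.evaluate(ZeroDecoder(), phase="minival")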

Docker Submission

To interface with our challenge, your code will need to be packaged in a Docker container that is submitted to EvalAI. Try this process by building and running the provided sklearn_sample.Dockerfile to confirm your setup works. Once Docker is installed, do this with the following commands:

# Build
sudo docker build -t sk_smoke -f ./decoder_demos/sklearn_sample.Dockerfile .
sudo docker run -v PATH_TO_YOUR_DATA_DIR:/dataset/evaluation_data -it sk_smoke
## If your solution needs GPUs, append a --gpus all flag to `docker run`
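
If you prefer to script this smoke test, the same two commands can be driven from Python; the data directory below is a placeholder, and sudo may still be required depending on your Docker setup:

import subprocess
from pathlib import Path

# Drive the docker build/run smoke test shown above from Python.
data_dir = Path("data").resolve()  # placeholder: your local data directory
subprocess.run(
    ["docker", "build", "-t", "sk_smoke",
     "-f", "decoder_demos/sklearn_sample.Dockerfile", "."],
    check=True,
)
subprocess.run(
    ["docker", "run", "-v", f"{data_dir}:/dataset/evaluation_data",
     "-it", "sk_smoke"],
    check=True,
)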

EvalAI Submission (under construction)

To submit to the FALCON benchmark once your decoder Docker container is ready, follow the instructions on the EvalAI submission tab. It should look something like: evalai push decoder_container:latest --phase <phase-name>
