Holistic Evaluation of Audio Representations (HEAR) 2021 -- Evaluation Kit
hear-eval-kit
Evaluation kit for the HEAR 2021 NeurIPS competition, using tasks from hear-preprocess.
Downstream evaluation on each task involves two steps:
- computing audio embeddings
- learning a shallow fully-connected predictor
The speed of the first step depends heavily on the embedding model. The speed of the second step is roughly the same across models.
If you have any questions or comments:
- File an issue.
- Post on the discussion board.
- [Email us](mailto:deep at neuralaudio dot ai).
Requirements
Tested with Python 3.7 and 3.8. Python 3.9 is not officially supported because pip3 installs are very finicky, but it might work.
We officially support Torch 1.9 and Tensorflow 2.6.0, as well as Tensorflow 2.4.2 using the hack described in the Dockerfile README. We use CUDA 11.2. Other versions may be possible; please contact us.
We test on 16GB GCP GPUs.
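To sanity-check your environment against these versions (a minimal sketch, assuming torch and tensorflow are already installed):
python3 -c "import torch, tensorflow as tf; print(torch.__version__, tf.__version__)"
nvcc --version   # should report CUDA 11.2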
Quickstart
Here is a simple quickstart to evaluate hearbaseline using random projections and a tiny subset of the open tasks. More detailed instructions are below.
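For example, a minimal end-to-end sketch (the hearbaseline weights file is an assumption; substitute whatever WEIGHTS_FILE your checkpoint lives in):
pip3 install heareval hearbaseline
# Download and untar one small open task, e.g. nsynth_pitch-v2.2.3-5h
# (see "Download Open Tasks" below), then compute embeddings and
# train the shallow downstream predictors:
python3 -m heareval.embeddings.runner hearbaseline --model WEIGHTS_FILE --tasks-dir hear-2021.0.3/tasks/
python3 -m heareval.predictions.runner embeddings/hearbaseline/*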
Installation
There are three ways to run heareval:
- Locally, through pip3 install (or conda)
- Using Docker
- On the cloud
You are welcome to contact us if you have any questions or issues.
Local installation
pip3 install heareval
Docker
We have Docker images containing the heareval environment. turian/heareval:stable is the latest stable image, with all dependencies bundled in.
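For example, to pull the image and open a GPU-enabled shell (a sketch; assumes Docker 19.03+ with the NVIDIA Container Toolkit installed):
docker pull turian/heareval:stable
docker run -it --gpus all turian/heareval:stable bash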
Cloud GPUs
The easiest way to do evaluation is to launch a Spotty GCP instance. Spotty can also easily be adapted for AWS GPU instances.
Prepare a spotty.yaml file from the provided template:
cp spotty.yaml.tmpl spotty.yaml
Change the instance name in the copied file. Specifically, replace the "USERNAME" suffix in instances: name with your own username, so that multiple users in the same project get separate GCP instances and volumes and avoid conflicts (see the sketch below).
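For example (assuming the template literally contains the placeholder USERNAME):
sed -i 's/USERNAME/alice/' spotty.yaml   # "alice" is a hypothetical username; use your own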
Run spotty:
spotty start
spotty sh
This requires the heareval Docker image, which is pre-built and published on Dockerhub for your convenience.
Please refer to README.spotty for more details.
Download Open Tasks
If you are on GCP cloud, you can freely download open tasks as follows:
gsutil -m cp gs://hear2021/open-tasks/hear-2021.0.3-*-{SAMPLE_RATE}.tar.gz . && for f in hear-*.gz; do tar zxf "$f"; done
where SAMPLE_RATE in {16000, 22050, 32000, 44100, 48000} is the sample rate your model desires.
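For example, for a 16000 Hz model:
gsutil -m cp gs://hear2021/open-tasks/hear-2021.0.3-*-16000.tar.gz . && for f in hear-*.gz; do tar zxf "$f"; done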
If you are downloading over HTTPS, please download the open tasks only once and mirror them internally, because cloud downloads are expensive for us. We are looking for longer-term hosting options.
Download:
https://storage.googleapis.com/hear2021/open-tasks/hear-2021.0.3-{TASK}-{SAMPLE_RATE}.tar.gz
for the following tasks:
dcase2016_task2-hear2021-full
nsynth_pitch-v2.2.3-5h
nsynth_pitch-v2.2.3-50h
speech_commands-v0.0.2-5h
speech_commands-v0.0.2-full
where SAMPLE_RATE in {16000, 22050, 32000, 44100, 48000} is the sample rate your model desires.
Untar all the files.
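For example, to fetch and untar one task at 16000 Hz over HTTPS (wget is just one choice of HTTP client):
wget https://storage.googleapis.com/hear2021/open-tasks/hear-2021.0.3-speech_commands-v0.0.2-5h-16000.tar.gz
tar zxf hear-2021.0.3-speech_commands-v0.0.2-5h-16000.tar.gz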
Compute embeddings
time python3 -m heareval.embeddings.runner MODULE_NAME --model WEIGHTS_FILE --tasks-dir hear-2021.0.3/tasks/
where MODULE_NAME is your embedding model name.
This will create directories embeddings/MODULE_NAME/TASK/ containing your embeddings. If you run the above command multiple times, it will skip tasks whose embeddings have already been computed. You can delete directories if you want to recompute embeddings.
There is an advanced option --model-options, whereby you can pass a JSON string of parameters to the model. This is useful for experimenting with model hyperparameters. These options appear in the embeddings output directory name, so you can run several different model variations at once.
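For example (a sketch; the JSON keys shown are hypothetical and depend entirely on what your module accepts):
python3 -m heareval.embeddings.runner MODULE_NAME --model WEIGHTS_FILE --tasks-dir hear-2021.0.3/tasks/ --model-options '{"hop_size": 0.1}'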
Evaluation over embeddings
You can then run final downstream evaluation on these embeddings as follows:
python3 -m heareval.predictions.runner embeddings/{MODULE_NAME}/*
This will run on a particular module, over all tasks, with determinism and the default number of grid points. Embeddings will be loaded into CPU memory, to speed up training.
Logs will be sent to stdout, and concise logs will be written to logs/.
If you run this multiple times, it should be deterministic, but will
always start from scratch.
Ignore warnings about "Leaking Caffe2 thread-pool after fork"; this is a known torch bug.
More advanced flags allow for different downstream training regimes.
Final test scores are logged to stdout and also to {EMBEDDINGS_DIR}/{MODULE_NAME}/{TASK_NAME}/test.predicted-scores.json.
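For example, to pretty-print one task's scores (the module and task directory names here are placeholders):
python3 -m json.tool embeddings/hearbaseline/speech_commands-v0.0.2-5h/test.predicted-scores.json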
Note on Speed
Downstream training time scales sub-linearly with embedding size (because of GPU optimizations) and linearly with hop size (for event-based prediction tasks). The main hyperparameters controlling downstream training time are the maximum number of epochs and the number of grid points for grid search.
Development
If you are developing this repo, clone it:
git clone https://github.com/neuralaudio/hear-eval-kit
cd hear-eval-kit
Install in development mode:
pip3 install -e ".[dev]"
Make sure you have pre-commit hooks installed:
pre-commit install
Running tests:
python3 -m pytest