Evaluation Framework for DAVIS Interactive Segmentation
DAVIS Interactive Evaluation Framework
This is a framework to evaluate interactive segmentation models on the DAVIS 2017 dataset. The code aims to provide an easy-to-use interface to test and validate interactive segmentation models.
This tool is also used to evaluate the Interactive Track of the DAVIS Challenges on Video Object Segmentation. More information about the latest challenge edition is available on the DAVIS website.
You can find an example of how to use the package in the following repository:
DAVIS Scribbles
In the classical DAVIS Semi-supervised Challenge track, the task is to segment an object in a semi-supervised manner, i.e. the given input is the ground truth mask of the first frame. In the DAVIS Interactive Challenge, in contrast, the user input is scribbles, which can be drawn much faster by humans and thus are a more realistic type of input.
The interactive annotation and segmentation consist of an iterative loop, which is evaluated as follows:
- In the first interaction, a human-annotated scribble for each object in the video sequence is provided to the segmentation model. As a result, the model has to predict a segmentation mask containing all the objects for all the frames.
Note: all the scribbles are annotated in a single frame, but this does not have to be the first frame of the sequence, as the annotators were instructed to annotate the most relevant and meaningful frame. This is in contrast to the semi-supervised track, where only and strictly the first frame is annotated.
- Then, the predicted masks are submitted to a server that returns human-simulated scribbles. These scribbles are always annotated in a single frame, selected as the one with the worst evaluation result among a list of frames specified by the user. By default, this list contains all the frames in the sequence.
- During the following steps, the segmentation model keeps iterating between predicting the masks using the new scribbles and submitting the masks to obtain new scribbles. A minimal sketch of this loop is shown below the list.
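The snippet below is a minimal sketch of this loop using the `DavisInteractiveSession` class shipped with the package. The `predict_masks` function is a hypothetical placeholder for your own segmentation model, and the constructor arguments shown are only indicative; please refer to the Usage guide for the authoritative interface.

```python
from davisinteractive.session import DavisInteractiveSession


def predict_masks(sequence, scribbles):
    """Hypothetical placeholder for your interactive segmentation model.

    It should return an integer array of shape (num_frames, height, width)
    with one object id per pixel (0 for background).
    """
    raise NotImplementedError('Plug in your own model here.')


# 'localhost' runs the human-simulated scribble annotator locally, which is
# how the train and val subsets are evaluated offline. davis_root must point
# to a local copy of the DAVIS 2017 dataset.
with DavisInteractiveSession(host='localhost',
                             davis_root='path/to/DAVIS') as session:
    while session.next():
        # Scribbles for every object, annotated on a single frame.
        sequence, scribbles, new_sequence = session.get_scribbles()

        # Predict masks for all objects in all frames of the sequence.
        pred_masks = predict_masks(sequence, scribbles)

        # Submit the masks; on the next iteration the session returns new
        # scribbles drawn on the frame with the worst evaluation result.
        session.submit_masks(pred_masks)

    # Per-interaction evaluation summary (offline evaluation only).
    report = session.get_report()
```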
Evaluation: The evaluation metric is the mean of the Region Similarity $\mathcal{J}$ and the Contour Accuracy $\mathcal{F}$. More information about the metrics is available here. The evaluation for the `train` and `val` subsets can be done offline at any time, whereas the evaluation for the `test-dev` subset has to be done against a server that is only available during the challenge periods.
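That is, for a given set of predicted masks the reported score is simply the arithmetic mean of the two measures:

$$
\mathcal{J}\&\mathcal{F} = \frac{\mathcal{J} + \mathcal{F}}{2}
$$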
More information: Please check the Installation guide to install the package (available on PyPI as `davisinteractive`) and to download the scribbles. Moreover, refer to the Usage guide to learn how to interface your code with the server.
Contributions: If you would like to add new features to the package, please do not hesitate to send a pull request.
Citation
Please cite both papers in your publications if DAVIS or this code helps your research.
@article{Caelles_arXiv_2018,
author = {Sergi Caelles and Alberto Montes and Kevis-Kokitsi Maninis and Yuhua Chen and Luc {Van Gool} and Federico Perazzi and Jordi Pont-Tuset},
title = {The 2018 DAVIS Challenge on Video Object Segmentation},
journal = {arXiv:1803.00557},
year = {2018}
}
@article{Pont-Tuset_arXiv_2017,
author = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
title = {The 2017 DAVIS Challenge on Video Object Segmentation},
journal = {arXiv:1704.00675},
year = {2017}
}
Download files
Source Distribution
File details
Details for the file `davisinteractive-1.0.4.tar.gz`.
File metadata
- Download URL: davisinteractive-1.0.4.tar.gz
- Upload date:
- Size: 151.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.45.0 CPython/3.7.1
File hashes
Algorithm | Hash digest
---|---
SHA256 | e6c71675bf3db12d5304198cb22c0ea56dcdbca1d99961b2273ce89ed12b81c9
MD5 | 78462b698ba21dedfd0bedf1d89ab0b7
BLAKE2b-256 | 2f7517623295368f8bbb5cf9a55888f41dca9c0280304bf4bf96289410d874e9