Evaluation Framework for DAVIS Interactive Segmentation

This is a framework to evaluate interactive segmentation models on the DAVIS dataset. The code aims to provide an easy-to-use interface for testing and validating such models.

This is the tool that will be used to evaluate the interactive track of the DAVIS Challenge on Video Object Segmentation 2018. More information about the challenge is available on its website.

Note: code still under development.

DAVIS Scribbles

In previous editions of the DAVIS Challenge, the task consisted of object segmentation in a semi-supervised manner: the input given to the model was the ground-truth mask of the first frame. For the DAVIS interactive challenge, the annotation changes to scribbles, which humans can provide much faster.

The interactive annotation and segmentation consist of an iterative loop, which is evaluated as follows:

  • On the first iteration, a human-annotated scribble is provided to the segmentation model. All the scribbles are annotated over the DAVIS dataset, and the annotated objects are the same as in the ground-truth masks. Note: the annotated frame can be any frame of the sequence, as the human annotators were asked to annotate the frames they found most relevant and meaningful.

  • During the rest of the iterations, once the predicted masks have been submitted, an automated scribble is generated to simulate human annotation. The new annotation is performed on a single frame, chosen as the frame with the worst evaluation metric. A minimal sketch of this loop is given after the list.
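
The loop above can be pictured with a short Python sketch. The function below and all of its arguments (segment, first_scribble, simulate_scribble, metric) are hypothetical placeholders that only illustrate the protocol; they are not the API of this package, which is still under development.

import numpy as np

def interactive_loop(frames, ground_truth, segment, first_scribble,
                     simulate_scribble, metric, n_iterations=8):
    """Sketch of the evaluation loop: segment from scribbles, score every
    frame, then request a new scribble on the worst-scoring frame.

    segment: callable taking (frames, scribbles) and returning one mask per frame.
    metric: callable scoring a predicted mask against its ground-truth mask.
    """
    scribbles = first_scribble                 # iteration 1: human-annotated scribble
    masks, scores = None, None
    for _ in range(n_iterations):
        masks = segment(frames, scribbles)     # predicted mask for every frame
        scores = [metric(m, g) for m, g in zip(masks, ground_truth)]
        worst = int(np.argmin(scores))         # frame with the worst metric
        # later iterations: simulated scribble on the worst frame only
        scribbles = simulate_scribble(masks[worst], ground_truth[worst], worst)
    return masks, scores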

Evaluation: For now, the evaluation metric will be the Jaccard similarity \(\mathcal{J}\).
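
The Jaccard similarity between a predicted mask \(M\) and the corresponding ground-truth mask \(G\) is the intersection over union, \(\mathcal{J} = \frac{|M \cap G|}{|M \cup G|}\). A minimal NumPy sketch of this metric (illustrative only, not the official evaluation code of this framework) could look like:

import numpy as np

def jaccard(mask, gt):
    """Intersection over union between two boolean masks."""
    mask, gt = np.asarray(mask, bool), np.asarray(gt, bool)
    union = np.logical_or(mask, gt).sum()
    if union == 0:
        return 1.0                             # convention: two empty masks match perfectly
    return np.logical_and(mask, gt).sum() / union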

Citation

Please cite both papers in your publications if DAVIS or this code helps your research.

@article{Caelles_arXiv_2018,
  author = {Sergi Caelles and Alberto Montes and Kevis-Kokitsi Maninis and Yuhua Chen and Luc {Van Gool} and Federico Perazzi and Jordi Pont-Tuset},
  title = {The 2018 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1803.00557},
  year = {2018}
}
@inproceedings{Perazzi2016,
  author = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
  title = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
  booktitle = {Computer Vision and Pattern Recognition},
  year = {2016}
}

