Evaluation code for vision tasks.
Introduction
This repo contains the evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as classification, object detection, and image captioning.
Functionalities
This repo currently offers evaluation metrics for three vision tasks:
- Image classification:
  - `TopKAccuracyEvaluator`: computes the top-k accuracy for the multi-class classification problem. A prediction is considered correct if the ground-truth label is among the k labels with the highest confidences (see the sketch after this list).
  - `ThresholdAccuracyEvaluator`: computes threshold-based accuracy (mainly for the multi-label classification problem), i.e., the accuracy of predictions with confidence above a certain threshold.
  - `AveragePrecisionEvaluator`: computes the average precision, i.e., precision averaged across different confidence thresholds.
  - `PrecisionEvaluator`: computes precision.
  - `RecallEvaluator`: computes recall.
  - `F1ScoreEvaluator`: computes the F1 score (precision and recall are reported as well).
  - `EceLossEvaluator`: computes the ECE loss, i.e., the expected calibration error, given the model confidences and true labels for a set of data points.
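For intuition, here is a minimal NumPy sketch of the top-k accuracy that `TopKAccuracyEvaluator` reports; the function name and sample data below are illustrative, not the package's API:

```python
import numpy as np

def top_k_accuracy(scores, targets, k=5):
    # Fraction of samples whose ground-truth label is among the
    # k highest-confidence predictions.
    top_k = np.argsort(scores, axis=1)[:, -k:]        # indices of the k largest scores
    hits = np.any(top_k == targets[:, None], axis=1)  # is the true label among them?
    return float(hits.mean())

# Three samples over four classes; true labels 0, 2, 0.
scores = np.array([[0.6, 0.2, 0.1, 0.1],
                   [0.1, 0.5, 0.3, 0.1],
                   [0.2, 0.2, 0.2, 0.4]])
targets = np.array([0, 2, 0])
print(top_k_accuracy(scores, targets, k=2))  # 2 of 3 hits -> ~0.667
```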
- Object detection:
  - `CocoMeanAveragePrecisionEvaluator`: computes COCO mean average precision (mAP) across different classes, under multiple IoU thresholds (see the IoU sketch below).
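mAP aggregates per-class precision-recall curves; the criterion that decides whether a detection counts as a true positive is box IoU. Below is a minimal sketch of that criterion; the helper is illustrative, not the evaluator's API:

```python
def box_iou(box_a, box_b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Under mAP@IoU=50, a detection must overlap a ground-truth box
# with IoU >= 0.5 to count as a true positive; this pair does not.
print(box_iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.143
```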
- Image captioning:
  - `BleuScoreEvaluator`: computes the BLEU score (see the sketch after this list). For more details, refer to *BLEU: a Method for Automatic Evaluation of Machine Translation*.
  - `METEORScoreEvaluator`: computes the METEOR score. For more details, refer to the METEOR project page; we use the latest version (1.5) of the code.
  - `ROUGELScoreEvaluator`: computes the ROUGE-L score. Refer to *ROUGE: A Package for Automatic Evaluation of Summaries* for more details.
  - `CIDErScoreEvaluator`: computes the CIDEr score. Refer to *CIDEr: Consensus-based Image Description Evaluation* for more details.
  - `SPICEScoreEvaluator`: computes the SPICE score. Refer to *SPICE: Semantic Propositional Image Caption Evaluation* for more details.
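To give a flavor of what these metrics measure, here is a sketch of BLEU's central ingredient, clipped (modified) n-gram precision, shown for unigrams. This is for illustration only and is not how the evaluator is invoked:

```python
from collections import Counter

def clipped_unigram_precision(candidate, references):
    # Each candidate word is credited at most as many times as it
    # appears in any single reference, penalising degenerate repetition.
    cand_counts = Counter(candidate)
    max_ref_counts = Counter()
    for ref in references:
        for word, count in Counter(ref).items():
            max_ref_counts[word] = max(max_ref_counts[word], count)
    clipped = sum(min(count, max_ref_counts[word])
                  for word, count in cand_counts.items())
    return clipped / sum(cand_counts.values())

# A repetitive caption gets credit for "the" only twice.
candidate = "the the the the".split()
references = ["the cat is on the mat".split()]
print(clipped_unigram_precision(candidate, references))  # 2 / 4 = 0.5
```

Full BLEU geometrically averages the clipped precisions for n-grams up to length 4 and applies a brevity penalty for overly short candidates.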
While different machine learning problems and applications call for different metrics, below are some general recommendations:
- Multiclass classification: Top-1 accuracy and Top-5 accuracy
- Multilabel classification: Average precision, and precision/recall at top k or above a confidence threshold, where k and the threshold can be very problem-specific (see the sketch after this list)
- Object detection: mAP@IoU=30 and mAP@IoU=50
- Image caption: Bleu, METEOR, ROUGE-L, CIDEr, SPICE
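To illustrate why k and the threshold are problem-specific in the multilabel case, here is a small sketch of thresholding confidences and computing precision/recall against a binary ground-truth matrix; the names and data are illustrative, not the package's API:

```python
import numpy as np

# Per-sample confidences for three labels, and binary ground truth.
scores = np.array([[0.9, 0.4, 0.7],
                   [0.2, 0.8, 0.3]])
truth = np.array([[1, 0, 1],
                  [0, 1, 1]])

threshold = 0.5                   # problem-specific choice
pred = (scores >= threshold).astype(int)

tp = int((pred & truth).sum())    # correctly predicted positive labels
precision = tp / pred.sum()       # 3 / 3 = 1.0
recall = tp / truth.sum()         # 3 / 4 = 0.75
print(precision, recall)
```

Raising the threshold typically trades recall for precision; the right operating point depends on the application.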