
Evaluation code for vision tasks.


Vision Evaluation

Introduction

This repo contains the evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as classification, object detection, and image captioning.

If you only need the image classification or object detection evaluation pipeline, the Java Runtime Environment (JRE) is not required (see Additional Requirements). This repo

  • contains evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as classification and object detection.
  • defines the contract for metric calculation code in the Evaluator class, so that custom evaluators can be brought under the same interface (a minimal sketch follows this list).
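For illustration, here is a minimal sketch of what a custom evaluator following that contract might look like. It assumes the Evaluator base class lives in vision_evaluation.evaluators and exposes add_predictions(predictions, targets), get_report(), and reset(); the actual names and signatures should be checked against the Evaluator source before relying on them.

    import numpy as np

    from vision_evaluation.evaluators import Evaluator  # assumed import path


    class ExactMatchEvaluator(Evaluator):
        """Hypothetical custom evaluator: fraction of samples whose argmax prediction equals the target."""

        def __init__(self):
            super().__init__()
            self._num_correct = 0
            self._num_total = 0

        def add_predictions(self, predictions, targets):
            # predictions: (N, num_classes) confidence scores; targets: (N,) integer labels.
            predicted_labels = np.argmax(predictions, axis=1)
            self._num_correct += int(np.sum(predicted_labels == targets))
            self._num_total += len(targets)

        def get_report(self, **kwargs):
            accuracy = self._num_correct / self._num_total if self._num_total else 0.0
            return {'exact_match_accuracy': accuracy}

        def reset(self):
            self._num_correct = 0
            self._num_total = 0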

This repo isn't trying to reinvent the wheel, but to provide centralized defaults for most metrics across different vision tasks so dev/research teams can compare model performance on the same footing. As expected, many implementations are backed by the well-known sklearn and pycocotools packages.

Functionalities

This repo currently offers evaluation metrics for three vision tasks:

  • Image classification:
    • TopKAccuracyEvaluator: computes the top-k accuracy for multi-class classification problems. A prediction is considered correct if the ground-truth label is among the k labels with the highest confidences (a usage sketch follows this list).
    • ThresholdAccuracyEvaluator: computes threshold-based accuracy (mainly for multi-label classification problems), i.e., the accuracy of predictions with confidence above a certain threshold.
    • AveragePrecisionEvaluator: computes the average precision, i.e., precision averaged across different confidence thresholds.
    • PrecisionEvaluator: computes precision.
    • RecallEvaluator: computes recall.
    • RocAucEvaluator: computes Area under the Receiver Operating Characteristic Curve.
    • F1ScoreEvaluator: computes the F1 score (precision and recall are reported as well).
    • EceLossEvaluator: computes the ECE loss, i.e., the expected calibration error, given the model confidences and true labels for a set of data points (a standalone computation sketch follows this list).
  • Object detection:
    • CocoMeanAveragePrecisionEvaluator: COCO mean average precision (mAP) computed across classes, at one or more IoU thresholds.
  • Image caption:
    • Evaluators for the standard captioning metrics such as Bleu, METEOR, ROUGE-L, CIDEr, and SPICE (see Additional Requirements for the JRE dependency).
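As referenced in the classification list above, here is a hedged usage sketch for TopKAccuracyEvaluator. The import path, the constructor taking k as its first argument, the add_predictions(predictions, targets) argument order, and the report keys are assumptions inferred from the class descriptions above, not verified signatures.

    import numpy as np

    from vision_evaluation.evaluators import TopKAccuracyEvaluator  # assumed import path

    # Toy multi-class problem: 3 samples, 4 classes.
    predictions = np.array([[0.1, 0.6, 0.2, 0.1],   # target 1 is the argmax          -> correct
                            [0.7, 0.1, 0.1, 0.1],   # target 0 is the argmax          -> correct
                            [0.2, 0.2, 0.3, 0.3]])  # target 1 is outside the top 2   -> incorrect
    targets = np.array([1, 0, 1])

    evaluator = TopKAccuracyEvaluator(2)             # k=2 assumed to be the first argument
    evaluator.add_predictions(predictions, targets)  # argument order assumed
    print(evaluator.get_report())                    # expected top-2 accuracy: 2/3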
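For the EceLossEvaluator entry above, it may help to see the metric spelled out. The standalone NumPy sketch below computes a standard equal-width-bin ECE: predictions are grouped by their top confidence, and each bin contributes |accuracy - mean confidence| weighted by the fraction of samples it holds. It is independent of the package, and its binning scheme may differ from EceLossEvaluator's defaults.

    import numpy as np

    def expected_calibration_error(probs, labels, n_bins=10):
        # probs: (N, num_classes) confidence scores; labels: (N,) integer ground-truth labels.
        confidences = probs.max(axis=1)
        predictions = probs.argmax(axis=1)
        accuracies = (predictions == labels).astype(float)

        ece = 0.0
        bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
                ece += in_bin.mean() * gap  # weight by the fraction of samples in this bin
        return ece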

While different machine learning problems/applications call for different metrics, below are some general recommendations:

  • Multiclass classification: Top-1 accuracy and Top-5 accuracy
  • Multilabel classification: average precision; precision/recall at a confidence threshold or precision@k, where k and the threshold can be very problem-specific
  • Object detection: mAP@IoU=30 and mAP@IoU=50
  • Image caption: Bleu, METEOR, ROUGE-L, CIDEr, SPICE

Additional Requirements

The image caption evaluators require the Java Runtime Environment (JRE) (Java 1.8.0). It is not required for the other evaluators.

