
Vision Evaluation

Introduction

This repo contains evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as classification, object detection, image captioning, and image matting. It also defines the contract for metric calculation code in the Evaluator class, so that custom evaluators can be brought under the same interface (a sketch follows below). If you only need the image classification or object detection evaluation pipelines, the Java Runtime Environment (JRE) is not required; see Additional Requirements.
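
As an illustration of that contract, here is a minimal sketch of a custom evaluator. The base-class shape and the method names (add_predictions, get_report) are assumptions made for this example, not the package's documented API; the point is only that a metric accumulates batches of (prediction, target) pairs and then emits a report.

```python
import numpy as np


class Evaluator:
    """Hypothetical base class sketching the common contract:
    accumulate prediction/target batches, then report metrics."""

    def add_predictions(self, predictions, targets):
        raise NotImplementedError

    def get_report(self):
        raise NotImplementedError


class MeanL1ErrorEvaluator(Evaluator):
    """Example custom evaluator: mean L1 error for image regression."""

    def __init__(self):
        self._abs_error_sum = 0.0
        self._count = 0

    def add_predictions(self, predictions, targets):
        predictions = np.asarray(predictions, dtype=float)
        targets = np.asarray(targets, dtype=float)
        self._abs_error_sum += float(np.abs(predictions - targets).sum())
        self._count += predictions.size

    def get_report(self):
        return {"mean_l1_error": self._abs_error_sum / max(self._count, 1)}


evaluator = MeanL1ErrorEvaluator()
evaluator.add_predictions([0.2, 0.9], [0.25, 0.7])
print(evaluator.get_report())  # {'mean_l1_error': 0.125}
```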

This repo isn't trying to reinvent the wheel; rather, it provides centralized defaults for the most common metrics across different vision tasks, so that dev and research teams can compare model performance on the same footing. As you'd expect, many of the implementations are backed by the well-known sklearn and pycocotools packages.

Functionalities

This repo currently offers evaluation metrics for six vision tasks:

  • Image classification:
    • TopKAccuracyEvaluator: computes the top-k accuracy for multi-class classification problems. A prediction is considered correct if the ground-truth label is among the labels with the top k confidences (see the reference sketch after this list).
    • ThresholdAccuracyEvaluator: computes threshold-based accuracy (mainly for multi-label classification problems), i.e., the accuracy of predictions with confidence above a certain threshold.
    • AveragePrecisionEvaluator: computes the average precision, i.e., precision averaged across different confidence thresholds.
    • PrecisionEvaluator: computes precision.
    • RecallEvaluator: computes recall.
    • BalancedAccuracyScoreEvaluator: computes balanced accuracy, i.e., average recall across classes, for multiclass classification.
    • RocAucEvaluator: computes the area under the receiver operating characteristic curve (ROC AUC).
    • F1ScoreEvaluator: computes the F1 score (precision and recall are reported as well).
    • EceLossEvaluator: computes the ECE loss, i.e., the expected calibration error, given the model confidences and true labels for a set of data points.
    • ConfusionMatrixEvaluator: computes the confusion matrix of a classification. By definition, a confusion matrix C is such that C_ij equals the number of observations known to be in group i and predicted to be in group j (https://en.wikipedia.org/wiki/Confusion_matrix).
  • Object detection:
    • CocoMeanAveragePrecisionEvaluator: computes COCO mean average precision (mAP) across classes, at multiple IoU thresholds.
  • Image caption: evaluators for the standard captioning metrics (Bleu, METEOR, ROUGE-L, CIDEr, SPICE); these require the JRE (see Additional Requirements below).
  • Image matting:
    • MeanIOUEvaluator: computes the mean intersection-over-union score.
    • ForegroundIOUEvaluator: computes the foreground intersection-over-union score.
    • BoundaryMeanIOUEvaluator: computes the boundary mean intersection-over-union score.
    • BoundaryForegroundIOUEvaluator: computes the boundary foreground intersection-over-union score.
    • L1ErrorEvaluator: computes the L1 error.
  • Image regression:
    • MeanLpErrorEvaluator: computes the mean Lp error (e.g. L1 error for p=1, L2 error for p=2, etc.).
  • Image retrieval:
    • RecallAtKEvaluator(k): computes Recall@k, i.e., the percentage of all relevant items that appear in the top k.
    • PrecisionAtKEvaluator(k): computes Precision@k, i.e., the percentage of true positives among the items predicted positive within the top k.
    • MeanAveragePrecisionAtK(k): computes Mean Average Precision@k, an information retrieval metric.
    • PrecisionRecallCurveNPointsEvaluator(k): computes a Precision-Recall Curve, interpolated at k points and averaged over all samples.
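
As a concrete reference for the top-k accuracy definition above, here is a small numpy sketch that computes it from raw confidence scores. It is independent of this package's Evaluator interface and exists only to pin down the definition.

```python
import numpy as np


def top_k_accuracy(confidences, targets, k=5):
    """Fraction of samples whose ground-truth label is among the k
    highest-confidence predictions.

    confidences: (n_samples, n_classes) scores; targets: (n_samples,) int labels.
    """
    confidences = np.asarray(confidences)
    targets = np.asarray(targets)
    # Column indices of the k largest scores per row; order within the top k is irrelevant.
    top_k = np.argsort(-confidences, axis=1)[:, :k]
    return (top_k == targets[:, None]).any(axis=1).mean()


scores = np.array([[0.1, 0.6, 0.2, 0.1],
                   [0.4, 0.1, 0.3, 0.2],
                   [0.2, 0.2, 0.5, 0.1]])
labels = np.array([1, 2, 0])
print(top_k_accuracy(scores, labels, k=1))  # 0.333... (only the first sample is correct)
print(top_k_accuracy(scores, labels, k=2))  # 1.0
```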

While different machine learning problems and applications call for different metrics, below are some general recommendations (with a scikit-learn sketch after the list):

  • Multiclass classification: Top-1 Accuracy and Top-5 Accuracy
  • Multilabel classification: Average Precision, and Precision/Recall at top k or at a confidence threshold, where k and the threshold can be very problem-specific
  • Object detection: mAP@IoU=30 and mAP@IoU=50
  • Image caption: Bleu, METEOR, ROUGE-L, CIDEr, SPICE
  • Image matting: Mean IOU, Foreground IOU, Boundary mean IOU, Boundary Foreground IOU, L1 Error
  • Image regression: Mean L1 Error, Mean L2 Error
  • Image retrieval: Recall@k, Precision@k, Mean Average Precision@k, Precision-Recall Curve
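
Since many implementations here are backed by sklearn (as noted above), the recommended classification metrics map almost one-to-one onto scikit-learn calls. A quick sketch, assuming multiclass scores of shape (n_samples, n_classes) and binary multilabel ground truth:

```python
import numpy as np
from sklearn.metrics import (average_precision_score,
                             balanced_accuracy_score,
                             top_k_accuracy_score)

rng = np.random.default_rng(0)

# Multiclass: top-1 / top-5 accuracy from raw class scores.
y_true = np.array([0, 3, 2, 5])
scores = rng.random((4, 6))
top1 = top_k_accuracy_score(y_true, scores, k=1, labels=np.arange(6))
top5 = top_k_accuracy_score(y_true, scores, k=5, labels=np.arange(6))
balanced = balanced_accuracy_score(y_true, scores.argmax(axis=1))

# Multilabel: macro-averaged precision from per-class confidences.
y_true_ml = np.array([[1, 0, 1], [0, 1, 0]])
conf_ml = np.array([[0.8, 0.1, 0.6], [0.3, 0.9, 0.2]])
ap = average_precision_score(y_true_ml, conf_ml, average="macro")

print(top1, top5, balanced, ap)
```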

Additional Requirements

The image caption evaluators require a Java Runtime Environment (JRE) (Java 1.8.0) and some extra dependencies, which can be installed with pip install vision-evaluation[caption]. Neither is required for the other evaluators.
