Evaluation code for vision tasks.

Project description

Introduction

This repo contains the evaluation metric code used in Microsoft Cognitive Services Computer Vision for tasks such as image classification and object detection.

Functionalities

This repo currently offers evaluation metrics for two vision tasks (a usage sketch follows the list below):

  • Image classification:
    • evaluators.TopKAccuracyEvaluator: computes the top-k accuracy, i.e., the fraction of samples whose ground-truth label appears among the k predictions with the highest confidence.
    • evaluators.AveragePrecisionEvaluator: computes the average precision, i.e., precision averaged across different confidence thresholds.
    • evaluators.ThresholdAccuracyEvaluator: computes the threshold-based accuracy, i.e., accuracy of the predictions whose confidence exceeds a given threshold.
    • evaluators.EceLossEvaluator: computes the ECE loss, i.e., the expected calibration error, given the model confidence and true labels for a set of data points.
  • Object detection:
    • evaluators.MeanAveragePrecisionEvaluatorForSingleIOU, evaluators.MeanAveragePrecisionEvaluatorForMultipleIOUs: compute the mean average precision (mAP), i.e., average precision averaged across classes, at a single IoU threshold or at multiple IoU thresholds.
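
Usage sketch

To make the evaluator interface concrete, here is a minimal sketch for the image classification case. The method names (add_predictions, get_report) and the expected input shapes are assumptions rather than confirmed API, so check the repository for the exact signatures; the object detection evaluators are assumed to follow the same add-then-report pattern.

    import numpy as np
    from vision_evaluation import evaluators

    # Hypothetical usage sketch: the method names (add_predictions, get_report)
    # and input formats below are assumptions; consult the repo for the exact API.

    # Top-5 accuracy on a toy 10-class problem.
    evaluator = evaluators.TopKAccuracyEvaluator(k=5)

    predictions = np.random.rand(100, 10)          # per-class confidence scores, shape (N, num_classes)
    targets = np.random.randint(0, 10, size=100)   # ground-truth class indices, shape (N,)

    evaluator.add_predictions(predictions, targets)
    print(evaluator.get_report())

    # Reference definition of top-k accuracy, independent of the package:
    # a sample counts as correct if its true label is among the k highest-scoring classes.
    top5 = np.argsort(predictions, axis=1)[:, -5:]
    print(np.mean([t in row for t, row in zip(targets, top5)]))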
