
Evaluation metrics for various vision tasks.


visionmetrics

This repo contains evaluation metrics for vision tasks such as classification, object detection, image captioning, and image matting. It uses torchmetrics as a base library and extends it to support custom vision tasks as needed.

Available Metrics

Image Classification:

  • Accuracy: computes the top-k accuracy for a classification problem. A prediction is considered correct if the ground-truth label is among the k labels with the highest confidence.
  • PrecisionEvaluator: computes precision.
  • RecallEvaluator: computes recall.
  • AveragePrecisionEvaluator: computes the average precision, i.e., precision averaged across different confidence thresholds.
  • AUCROC: computes the area under the receiver operating characteristic (ROC) curve.
  • F1Score: computes the F1 score.
  • CalibrationLoss**: computes the expected calibration error (ECE) given the model confidences and true labels for a set of data points.
  • ConfusionMatrix: computes the confusion matrix of a classification task. By definition, a confusion matrix C is such that C[i][j] equals the number of observations known to be in group i and predicted to be in group j (https://en.wikipedia.org/wiki/Confusion_matrix).
  • ExactMatch: computes the exact match score, i.e., the percentage of samples where the predicted label is exactly the same as the ground truth label.

The above metrics are available for Binary, Multiclass, and Multilabel classification tasks. For example, BinaryAccuracy is the binary version of Accuracy and MultilabelAccuracy is the multilabel version of Accuracy. Please refer to the example usage below for more details.

** The CalibrationLoss metric is only for binary and multiclass classification tasks.
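
The binary and multilabel variants follow the same update/compute pattern as the multiclass example further below. A minimal multilabel sketch; the MultilabelAccuracy class is named above, but the num_labels argument follows the torchmetrics convention and is assumed here:

import torch
from visionmetrics.classification import MultilabelAccuracy

preds = torch.rand(8, 5)              # per-label confidence scores for 8 samples, 5 labels
target = torch.randint(0, 2, (8, 5))  # binary ground-truth matrix

metric = MultilabelAccuracy(num_labels=5)  # num_labels assumed, per torchmetrics convention
metric.update(preds, target)
result = metric.compute()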

Object Detection:

  • MeanAveragePrecision: COCO-style mean average precision (mAP) computed across classes at multiple IoU thresholds.
  • ClassAgnosticAveragePrecision: COCO-style mean average precision (mAP) computed in a class-agnostic manner, i.e., all classes are treated as a single class.
  • DetectionConfusionMatrix: Similar to classification confusion matrix, but for object detection tasks.
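
A minimal mAP sketch. The dict-per-image input format (boxes, scores, labels) and the import shown below are the torchmetrics.detection convention; visionmetrics is assumed to expose a comparable wrapper, so check the package for the exact signature:

import torch
from torchmetrics.detection import MeanAveragePrecision

# One predicted box and one ground-truth box for a single image (xyxy format).
preds = [dict(boxes=torch.tensor([[10.0, 10.0, 50.0, 50.0]]),
              scores=torch.tensor([0.9]),
              labels=torch.tensor([0]))]
target = [dict(boxes=torch.tensor([[12.0, 10.0, 48.0, 52.0]]),
               labels=torch.tensor([0]))]

metric = MeanAveragePrecision()
metric.update(preds, target)
result = metric.compute()  # result["map"] holds the overall mAP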

Image Caption:

Image Matting:

  • MeanIOU: computes the mean intersection-over-union score.
  • ForegroundIOU: computes the foreground intersection-over-union score.
  • BoundaryMeanIOU: computes the boundary mean intersection-over-union score.
  • BoundaryForegroundIOU: computes the boundary foreground intersection-over-union score.
  • L1Error: computes the L1 error.
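
For reference, intersection-over-union on binary foreground masks reduces to a one-liner. This hand-rolled sketch illustrates the quantity being computed, not the package API:

import torch

pred = torch.tensor([[1, 1], [0, 0]], dtype=torch.bool)  # predicted foreground mask
gt   = torch.tensor([[1, 0], [0, 0]], dtype=torch.bool)  # ground-truth foreground mask
iou = (pred & gt).sum() / (pred | gt).sum()              # intersection / union = 1/2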

Regression:

  • MeanSquaredError: computes the mean squared error.
  • MeanAbsoluteError: computes the mean absolute error.
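
Both follow the same update/compute pattern. A minimal sketch using the underlying torchmetrics class; the visionmetrics import path is assumed to mirror it:

import torch
from torchmetrics.regression import MeanSquaredError

metric = MeanSquaredError()
metric.update(torch.tensor([2.5, 0.0, 2.0]),   # predictions
              torch.tensor([3.0, -0.5, 2.0]))  # targets
result = metric.compute()                      # tensor(0.1667)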

Retrieval:

  • RetrievalRecall: computes Recall@k, the fraction of all relevant items that appear in the top-k results.
  • RetrievalPrecision: computes Precision@k, the fraction of the top-k retrieved items that are relevant.
  • RetrievalMAP: computes Mean Average Precision@k, an information retrieval metric.
  • RetrievalPrecisionRecallCurveNPoints: computes a Precision-Recall Curve, interpolated at k points and averaged over all samples.
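
The retrieval metrics group items by query. A minimal Recall@k sketch using the underlying torchmetrics class, which visionmetrics is assumed to mirror; the indexes tensor assigns each prediction to a query:

import torch
from torchmetrics.retrieval import RetrievalRecall

# Two queries (index 0 and 1), three candidate items each.
preds = torch.tensor([0.9, 0.3, 0.2, 0.8, 0.6, 0.1])           # relevance scores
target = torch.tensor([True, False, True, False, True, False])  # true relevance
indexes = torch.tensor([0, 0, 0, 1, 1, 1])                      # query id per item

metric = RetrievalRecall(top_k=2)
metric.update(preds, target, indexes=indexes)
result = metric.compute()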

Grounding:

  • Recall: computes Recall@k, the percentage of correct groundings in the top-k among all relevant items.

Example Usage

import torch
from visionmetrics.classification import MulticlassAccuracy

preds = torch.rand(10, 10)            # confidence scores for 10 samples across 10 classes
target = torch.randint(0, 10, (10,))  # ground-truth class indices

# Initialize metric
metric = MulticlassAccuracy(num_classes=10, top_k=1, average='macro')

# Add batch of predictions and targets
metric.update(preds, target)

# Compute metric
result = metric.compute()

Implementing Custom Metrics

Please refer to the torchmetrics documentation for details on how to implement custom metrics; the general pattern is sketched below.
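
Custom metrics subclass torchmetrics.Metric, register synchronizable state with add_state, and implement update and compute. A minimal sketch of a hypothetical counting metric:

import torch
from torchmetrics import Metric

class ExampleAccuracy(Metric):
    """Hypothetical custom metric following the torchmetrics pattern."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # States are reduced across processes with the given reduction function.
        self.add_state("correct", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Accumulate counts over batches.
        self.correct += (preds.argmax(dim=-1) == target).sum()
        self.total += target.numel()

    def compute(self) -> torch.Tensor:
        # Final metric value from the accumulated state.
        return self.correct.float() / self.total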

Additional Requirements

The image caption metric calculation requires the Java Runtime Environment (JRE, Java 1.8.0) and some extra dependencies, which can be installed with pip install visionmetrics[caption]. Neither is required for the other evaluators.
