visionmetrics

Evaluation metric codes for various vision tasks.
This repo contains evaluation metrics for vision tasks such as classification, object detection, image caption, and image matting. It uses torchmetrics as a base library and extends it to support custom vision tasks as necessary.
Available Metrics
Image Classification:
- Accuracy: Computes the top-k accuracy for a classification problem. A prediction is considered correct if the ground truth label is among the top-k most confident predicted labels.
- PrecisionEvaluator: Computes precision.
- RecallEvaluator: Computes recall.
- AveragePrecisionEvaluator: Computes the average precision, i.e., precision averaged across different confidence thresholds.
- AUCROC: Computes the area under the receiver operating characteristic curve.
- F1Score: Computes the F1 score.
- CalibrationLoss**: Computes the ECE loss, i.e., the expected calibration error, given the model confidences and true labels for a set of data points.
- ConfusionMatrix: Computes the confusion matrix of a classification. By definition, a confusion matrix C is such that C_ij is the number of observations known to be in group i and predicted to be in group j (https://en.wikipedia.org/wiki/Confusion_matrix).
- ExactMatch: Computes the exact match score, i.e., the percentage of samples whose predicted labels exactly match the ground truth labels.
- MultilabelF1ScoreWithDuplicates: A variant of MultilabelF1Score that evaluates lists of predictions which may contain duplicates; the count of each value is factored into the score and contributes to true positives, false positives, and false negatives. Returns micro precision, recall, and F1 in a dictionary.
The above metrics are available for Binary, Multiclass, and Multilabel classification tasks. For example, BinaryAccuracy is the binary version of Accuracy and MultilabelAccuracy is the multilabel version of Accuracy. Please refer to the example usage below for more details.

** The CalibrationLoss metric is only available for binary and multiclass classification tasks.
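For instance, the multilabel variants can be driven in the same way. The sketch below assumes MultilabelAccuracy is importable from visionmetrics.classification alongside MulticlassAccuracy and takes a torchmetrics-style num_labels argument; check the package for the exact constructor.

```python
import torch
from visionmetrics.classification import MultilabelAccuracy  # assumed import path

# 10 samples, 5 labels; per-label scores in [0, 1] and binary targets
preds = torch.rand(10, 5)
target = torch.randint(0, 2, (10, 5))

# num_labels and average follow the torchmetrics convention (an assumption here)
metric = MultilabelAccuracy(num_labels=5, average='macro')
metric.update(preds, target)
result = metric.compute()
```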
Object Detection:
- MeanAveragePrecision: COCO mean average precision (mAP) computed across different classes, under multiple IoU thresholds.
- ClassAgnosticAveragePrecision: COCO mean average precision (mAP) calculated in a class-agnostic manner, i.e., all classes are treated as a single class.
- DetectionConfusionMatrix: Similar to the classification confusion matrix, but for object detection tasks.
- DetectionMicroPrecisionRecallF1: Computes the micro precision, recall, and F1 scores based on the true positive, false positive, and false negative values computed by DetectionConfusionMatrix. Returns the three values in a dictionary.
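A minimal sketch of driving MeanAveragePrecision follows. The visionmetrics.detection import path and, in particular, the dict-based box/score layout (borrowed from torchmetrics) are assumptions, so check the package for the exact expected input format.

```python
import torch
from visionmetrics.detection import MeanAveragePrecision  # assumed import path

# One image: predicted and ground-truth boxes as [left, top, right, bottom].
# The dict layout below mirrors torchmetrics' MeanAveragePrecision; the exact
# format visionmetrics expects is an assumption and should be verified.
preds = [{
    'boxes': torch.tensor([[10., 20., 50., 80.]]),
    'scores': torch.tensor([0.9]),
    'labels': torch.tensor([0]),
}]
target = [{
    'boxes': torch.tensor([[12., 22., 48., 78.]]),
    'labels': torch.tensor([0]),
}]

metric = MeanAveragePrecision()
metric.update(preds, target)
result = metric.compute()
```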
Image Caption:
- BleuScore: Computes the BLEU score. For more details, refer to BLEU: a Method for Automatic Evaluation of Machine Translation.
- METEORScore: Computes the METEOR score. For more details, refer to the METEOR project page. We use the latest version (1.5) of the code.
- ROUGELScore: Computes the ROUGE-L score. Refer to ROUGE: A Package for Automatic Evaluation of Summaries for more details.
- CIDErScore: Computes the CIDEr score. Refer to CIDEr: Consensus-based Image Description Evaluation for more details.
- SPICEScore: Computes the SPICE score. Refer to SPICE: Semantic Propositional Image Caption Evaluation for more details.
- AzureOpenAITextModelCategoricalScore: Computes micro precision, recall, F1, and accuracy scores, and an average model score, based on scores generated from a specified prompt to an Azure OpenAI model. Returns the results in a dictionary.
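A rough sketch of how a caption metric could be fed, assuming it follows the same update/compute pattern. Both the visionmetrics.caption import path and the input format (one predicted caption plus a list of reference captions per image) are assumptions, not the package's confirmed API.

```python
from visionmetrics.caption import BleuScore  # assumed import path

# Assumed format: predicted captions and reference-caption lists, aligned by index
predictions = ['a dog runs across the grass']
references = [['a dog is running on the grass', 'a brown dog runs outside']]

metric = BleuScore()
metric.update(predictions, references)
result = metric.compute()
```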
Image Matting:
- MeanIOU: Computes the mean intersection-over-union score.
- ForegroundIOU: Computes the foreground intersection-over-union score.
- BoundaryMeanIOU: Computes the boundary mean intersection-over-union score.
- BoundaryForegroundIOU: Computes the boundary foreground intersection-over-union score.
- L1Error: Computes the L1 error.
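A sketch for the matting metrics, assuming visionmetrics.matting as the import path and batched mask tensors as inputs; the actual expected mask format should be verified against the package.

```python
import torch
from visionmetrics.matting import MeanIOU  # assumed import path

# Assumed format: batched binary masks of shape (N, H, W)
pred_masks = (torch.rand(2, 64, 64) > 0.5).int()
gt_masks = (torch.rand(2, 64, 64) > 0.5).int()

metric = MeanIOU()
metric.update(pred_masks, gt_masks)
result = metric.compute()
```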
Regression:
- MeanSquaredError: Computes the mean squared error.
- MeanAbsoluteError: Computes the mean absolute error.
- MeanAbsoluteErrorF1Score: Computes the micro precision, recall, and F1 scores based on the true positive, false positive, and false negative values determined by a provided error threshold. Returns the three values in a dictionary.
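These follow the usual torchmetrics regression interface; a sketch assuming a visionmetrics.regression import path:

```python
import torch
from visionmetrics.regression import MeanSquaredError  # assumed import path

preds = torch.tensor([2.5, 0.0, 2.0, 8.0])
target = torch.tensor([3.0, -0.5, 2.0, 7.0])

metric = MeanSquaredError()
metric.update(preds, target)
result = metric.compute()  # mean of squared differences
```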
Retrieval:
- RetrievalRecall: Computes Recall@k, the percentage of all relevant items that appear in the top-k results.
- RetrievalPrecision: Computes Precision@k, the percentage of true positives among all items classified as positive in the top-k results.
- RetrievalMAP: Computes Mean Average Precision@k, an information retrieval metric.
- RetrievalPrecisionRecallCurveNPoints: Computes a precision-recall curve, interpolated at k points and averaged over all samples.
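A sketch of Recall@k, assuming the torchmetrics retrieval interface (relevance scores, binary relevance targets, and per-query indexes) and a visionmetrics.retrieval import path:

```python
import torch
from visionmetrics.retrieval import RetrievalRecall  # assumed import path

# Assumed torchmetrics-style inputs: scores, binary relevance targets, and
# query indexes grouping items that belong to the same query.
preds = torch.tensor([0.9, 0.3, 0.8, 0.1, 0.7, 0.4])
target = torch.tensor([True, False, True, False, False, True])
indexes = torch.tensor([0, 0, 0, 1, 1, 1])

metric = RetrievalRecall(top_k=2)  # Recall@2
metric.update(preds, target, indexes=indexes)
result = metric.compute()
```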
Grounding:
- Recall: Computes Recall@k, which is the percentage of correct groundings in the top-k among all relevant items.
Key-Value Pair Extraction:
- KeyValuePairExtractionScore: Evaluates methods that perform arbitrary schema-based structured field extraction. Each schema is an adapted version of a JSON Schema-formatted dictionary containing keys, each of which specifies the standard JSON Schema type of the key, a string description, whether to perform grounding on the key, classes for closed-vocabulary values, and additional information describing list items and object properties (sub-keys). Based on the properties defined in the schema, the metric infers the best evaluation metric for each key's data type and defaults to text-based evaluation for cases with no clear definition. For each key, the definitions of true positive, false positive, and false negative are inherited from the corresponding metric; in addition, missing keys in predictions are counted as false negatives, and invalid keys in predictions are counted as false positives. The metric computes key-wise scores for each key in the schema and returns the overall micro F1, macro F1, and raw key-wise scores.
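For orientation only, a hypothetical schema for two keys might look like the sketch below; the field names (classes, includeGrounding) and overall layout are illustrative assumptions, not the package's confirmed format.

```python
# Hypothetical schema sketch: key names, field names, and structure are
# illustrative assumptions in the spirit of the description above.
schema = {
    'vehicle_type': {
        'type': 'string',
        'description': 'Type of vehicle in the image.',
        'classes': {'car': {}, 'truck': {}, 'motorcycle': {}},  # closed-vocabulary values
    },
    'license_plate': {
        'type': 'string',
        'description': 'License plate text, if visible.',
        'includeGrounding': True,  # whether to ground the value to image regions
    },
}
```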
Example Usage
```python
import torch
from visionmetrics.classification import MulticlassAccuracy

preds = torch.rand(10, 10)
target = torch.randint(0, 10, (10,))

# Initialize metric
metric = MulticlassAccuracy(num_classes=10, top_k=1, average='macro')

# Add batch of predictions and targets
metric.update(preds, target)

# Compute metric
result = metric.compute()
```
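Following the torchmetrics pattern, update accumulates state across batches and compute aggregates it, so a dataloader loop can feed batches incrementally before a single compute at the end. For example:

```python
import torch
from visionmetrics.classification import MulticlassAccuracy

metric = MulticlassAccuracy(num_classes=10, top_k=1, average='macro')
for _ in range(5):  # stand-in for iterating over a dataloader
    preds = torch.rand(10, 10)
    target = torch.randint(0, 10, (10,))
    metric.update(preds, target)

result = metric.compute()
metric.reset()  # clear accumulated state before the next evaluation run
```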
Implementing Custom Metrics
Please refer to torchmetrics for more details on how to implement custom metrics.
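As an illustration (not part of visionmetrics), a minimal custom metric in the standard torchmetrics style subclasses Metric, registers states with add_state, and implements update and compute:

```python
import torch
from torchmetrics import Metric


class MeanPixelError(Metric):
    """Toy example: mean absolute per-pixel error, accumulated across batches."""

    def __init__(self):
        super().__init__()
        # States are reset between evaluation runs and reduced across processes.
        self.add_state('total_error', default=torch.tensor(0.0), dist_reduce_fx='sum')
        self.add_state('total_pixels', default=torch.tensor(0), dist_reduce_fx='sum')

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        self.total_error += torch.abs(preds - target).sum()
        self.total_pixels += target.numel()

    def compute(self) -> torch.Tensor:
        return self.total_error / self.total_pixels
```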
Additional Requirements
Computing the image caption metrics requires the Java Runtime Environment (JRE) (Java 1.8.0) and some extra dependencies, which can be installed with `pip install visionmetrics[caption]`. This is not required for the other metrics; if you do not need image caption metrics, the JRE is not needed.