Evaluation of deep learning models
Introduction
MMEval is a machine learning evaluation library that supports efficient and accurate distributed evaluation on a variety of machine learning frameworks.
Major features:
- Comprehensive metrics for various computer vision tasks (NLP will be covered soon!)
- Efficient and accurate distributed evaluation, backed by multiple distributed communication backends
- Support for multiple machine learning frameworks via a dynamic input dispatching mechanism (see the sketch below)
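As a minimal illustration of the dispatching mechanism (a sketch assuming PyTorch is installed), the same Accuracy metric can be fed torch.Tensor inputs directly; the implementation is selected based on the input type:

```python
# Minimal sketch: assumes PyTorch is installed.
from mmeval import Accuracy
import torch

accuracy = Accuracy()

# torch.Tensor inputs are dispatched to a PyTorch implementation.
labels = torch.tensor([0, 1, 2, 3])
preds = torch.tensor([0, 2, 1, 3])
print(accuracy(preds, labels))  # {'top1': 0.5}
```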
Supported distributed communication backends
MPI4Py | torch.distributed | Horovod | paddle.distributed | oneflow.comm |
---|---|---|---|---|
MPI4PyDist | TorchCPUDist, TorchCUDADist | TFHorovodDist | PaddleDist | OneFlowDist |
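For illustration, the snippet below sketches how a metric might be bound to one of these backends at construction time; the `dist_backend` keyword and the backend name `'torch_cuda'` are assumptions here and should be verified against the MMEval documentation.

```python
# Hedged sketch: the `dist_backend` argument and the 'torch_cuda' backend
# name are assumptions; check the MMEval docs for the exact names.
from mmeval import Accuracy

# Collect per-process results via torch.distributed with CUDA tensors.
accuracy = Accuracy(dist_backend='torch_cuda')
```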
Supported metrics and ML frameworks
NOTE: MMEval is tested with PyTorch 1.6+, TensorFlow 2.4+, Paddle 2.2+ and OneFlow 0.8+.
Metric | numpy.ndarray | torch.Tensor | tensorflow.Tensor | paddle.Tensor | oneflow.Tensor |
---|---|---|---|---|---|
Accuracy | ✔ | ✔ | ✔ | ✔ | ✔ |
SingleLabelMetric | ✔ | ✔ | | | ✔ |
MultiLabelMetric | ✔ | ✔ | | | ✔ |
AveragePrecision | ✔ | ✔ | | | ✔ |
MeanIoU | ✔ | ✔ | ✔ | ✔ | ✔ |
VOCMeanAP | ✔ | | | | |
OIDMeanAP | ✔ | | | | |
COCODetection | ✔ | | | | |
ProposalRecall | ✔ | | | | |
F1Score | ✔ | ✔ | | | ✔ |
HmeanIoU | ✔ | | | | |
PCKAccuracy | ✔ | | | | |
MpiiPCKAccuracy | ✔ | | | | |
JhmdbPCKAccuracy | ✔ | | | | |
EndPointError | ✔ | ✔ | | | ✔ |
AVAMeanAP | ✔ | | | | |
StructuralSimilarity | ✔ | | | | |
SignalNoiseRatio | ✔ | | | | |
PeakSignalNoiseRatio | ✔ | | | | |
MeanAbsoluteError | ✔ | | | | |
MeanSquaredError | ✔ | | | | |
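As a quick illustration of one of the metrics above (a sketch, not an excerpt from the official docs: the `num_classes` argument and the exact result keys are assumptions worth checking against the MeanIoU API reference), MeanIoU can consume framework tensors such as torch.Tensor directly:

```python
# Hedged sketch: `num_classes` and the returned result keys are assumptions;
# see the MeanIoU API reference for the exact signature and outputs.
from mmeval import MeanIoU
import torch

num_classes = 4
miou = MeanIoU(num_classes=num_classes)

# Integer class maps for a batch of two 10x10 segmentation predictions.
labels = torch.randint(0, num_classes, size=(2, 10, 10))
preds = torch.randint(0, num_classes, size=(2, 10, 10))
print(miou(preds, labels))
# e.g. {'mIoU': ..., 'aAcc': ..., ...}
```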
Installation
MMEval requires Python 3.6+ and can be installed via pip:

```bash
pip install mmeval
```
To install the dependencies required by all the metrics provided in MMEval, run the following command:

```bash
pip install 'mmeval[all]'
```
Get Started
There are two ways to use MMEval's metrics, using Accuracy as an example:
```python
from mmeval import Accuracy
import numpy as np

accuracy = Accuracy()
```
The first way is to directly call the instantiated Accuracy object to calculate the metric:
```python
labels = np.asarray([0, 1, 2, 3])
preds = np.asarray([0, 2, 1, 3])
accuracy(preds, labels)
# {'top1': 0.5}
```
The second way is to calculate the metric after accumulating data from multiple batches.
```python
for i in range(10):
    labels = np.random.randint(0, 4, size=(100, ))
    predicts = np.random.randint(0, 4, size=(100, ))
    accuracy.add(predicts, labels)

accuracy.compute()
# {'top1': ...}
```
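Accuracy can also report top-k results when the predictions are per-class scores; this usage is an assumption based on the Accuracy API rather than something shown above, so double-check the exact signature against the API documentation:

```python
# Hedged sketch: the `topk` argument and score-shaped predictions are
# assumptions; consult the Accuracy API documentation for details.
from mmeval import Accuracy
import numpy as np

accuracy = Accuracy(topk=(1, 3))
labels = np.random.randint(0, 10, size=(1000, ))
scores = np.random.rand(1000, 10)  # per-class prediction scores
print(accuracy(scores, labels))
# {'top1': ..., 'top3': ...}
```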
Learn More
Examples
In the works
- Continue to add more metrics and expand to more tasks (e.g. NLP, audio).
- Support more ML frameworks and explore multiple ML framework support paradigms.
Contributing
We appreciate all contributions to improve MMEval. Please refer to CONTRIBUTING.md for the contributing guideline.
License
This project is released under the Apache 2.0 license.
Projects in OpenMMLab
- MMEngine: OpenMMLab foundational library for training deep learning models.
- MIM: MIM installs OpenMMLab packages.
- MMCV: OpenMMLab foundational library for computer vision.
- MMClassification: OpenMMLab image classification toolbox and benchmark.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
- MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMOCR: OpenMMLab text detection, recognition, and understanding toolbox.
- MMPose: OpenMMLab pose estimation toolbox and benchmark.
- MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- MMRazor: OpenMMLab model compression toolbox and benchmark.
- MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- MMFlow: OpenMMLab optical flow toolbox and benchmark.
- MMEditing: OpenMMLab image and video editing toolbox.
- MMGeneration: OpenMMLab image and video generative models toolbox.
- MMDeploy: OpenMMLab model deployment framework.
File details
Details for the file mmeval-0.2.1.tar.gz.
File metadata
- Download URL: mmeval-0.2.1.tar.gz
- Upload date:
- Size: 137.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.7.16
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 5fa8933b7a06a4507928cfe9e171232ccbc88768025609da5c462a9c982b44e2 |
MD5 | 7cac59aef28ce43940ec8c13470b881c |
BLAKE2b-256 | 735d3703e5eeae80f0a007aefe95f58cacce5fd50658fcbbad9af5258e3cb49c |
File details
Details for the file mmeval-0.2.1-py3-none-any.whl.
File metadata
- Download URL: mmeval-0.2.1-py3-none-any.whl
- Upload date:
- Size: 189.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.7.16
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | d9b4bc08438ea91dc1859eed624697e362b12e9e8f0fb4a752f53d94b51be955 |
MD5 | 7940e911d756b74913b58f7010d12760 |
BLAKE2b-256 | 202a89546ea52c77efa5f7464d5e6e93fec8be2f64b8253a52c607d180e1c845 |