simple-cocotools
A simple, modern alternative to `pycocotools`.
About
Why not just use Pycocotools?
- Code is more readable and hackable.
- Metrics are more transparent and understandable.
- Evaluation is fast.
- Only dependencies are `numpy` and `scipy`. No `cython` extensions.
- Code is more modern (type annotations, linting, etc.).
Install
From PyPI
```bash
pip install simple-cocotools
```
From Repo
```bash
pip install "simple-cocotools @ git+ssh://git@github.com/fkodom/simple-cocotools.git"
```
For Contributors
```bash
# Clone this repository
gh repo clone fkodom/simple-cocotools
cd simple-cocotools
# Install all dev dependencies (tests etc.)
pip install -e .[all]
# Setup pre-commit hooks
pre-commit install
```
Usage
Expects target annotations to have the same format as model predictions. (This is the format used by all `torchvision` detection models.) You may already have code to convert annotations into this format, since it's required to train many detection models. If not, use `AnnotationsToDetectionFormat` from this repo as an example of how to do that.
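For reference, the `torchvision` detection format represents each image as a dictionary of tensors. A minimal sketch of one target dictionary (key names follow the `torchvision` convention; the values here are made up):

```python
import torch

# One dictionary per image, following the torchvision detection format.
target = {
    # (N, 4) float tensor of boxes in (xmin, ymin, xmax, ymax) order
    "boxes": torch.tensor([[10.0, 20.0, 50.0, 60.0]]),
    # (N,) integer tensor of class labels
    "labels": torch.tensor([1]),
    # (N, H, W) binary masks, only needed for mask metrics
    "masks": torch.zeros((1, 480, 640), dtype=torch.uint8),
}
# Model predictions use the same keys, plus an (N,) "scores" tensor.
```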
A minimal example:
```python
from torchvision.models.detection import maskrcnn_resnet50_fpn

from simple_cocotools import CocoEvaluator

evaluator = CocoEvaluator()
model = maskrcnn_resnet50_fpn(pretrained=True).eval()

for images, targets in data_loader:
    predictions = model(images)
    evaluator.update(predictions, targets)

metrics = evaluator.summarize()
```
`metrics` will be a dictionary with the format:
```python
{
    "box": {
        "mAP": 0.40,
        "mAR": 0.41,
        "class_AP": {
            "cat": 0.39,
            "dog": 0.42,
            ...
        },
        "class_AR": {
            # Same as 'class_AP' above.
        }
    },
    "mask": {
        # Same as 'box' above.
    }
}
```
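Individual values can then be read out of the nested dictionary as usual; for example (key names as in the sketch above):

```python
print(f"box mAP: {metrics['box']['mAP']:.3f}")
print(f"cat AP:  {metrics['box']['class_AP']['cat']:.3f}")
```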
For a more complete example, see `scripts/mask_rcnn_example.py`.
Benchmarks
I benchmarked against several `torchvision` detection models, which have mAP scores reported on the PyTorch website.
Using a default score threshold of 0.5:

Model | Backbone | box mAP (official) | box mAP | box mAR | mask mAP (official) | mask mAP | mask mAR
---|---|---|---|---|---|---|---
Mask R-CNN | ResNet50 | 37.9 | 36.9 | 43.2 | 34.6 | 34.1 | 40.0
Faster R-CNN | ResNet50 | 37.0 | 36.3 | 42.0 | - | - | -
Faster R-CNN | MobileNetV3-Large | 32.8 | 39.9 | 35.0 | - | - | -
Notice that the mAP for MobileNetV3-Large is artificially high, since it has a much lower mAR at that score threshold. After tuning the score threshold so that mAP and mAR are more balanced:
Model | Backbone | Threshold | box mAP | box mAR | mask mAP | mask mAR
---|---|---|---|---|---|---
Mask R-CNN | ResNet50 | 0.6 | 41.1 | 41.3 | 38.2 | 38.5
Faster R-CNN | ResNet50 | 0.6 | 40.8 | 40.4 | - | -
Faster R-CNN | MobileNetV3-Large | 0.425 | 36.2 | 36.2 | - | -
These scores are more reflective of model performance, in my opinion. Mask R-CNN slightly outperforms Faster R-CNN, and there is a noticeable (but not horrible) gap between ResNet50 and MobileNetV3 backbones. The PyTorch docs don't mention what score thresholds were used for each model benchmark. ¯\\\_(ツ)\_/¯
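In case it's useful, threshold tuning here just means filtering low-confidence detections before they reach the evaluator. A minimal sketch, reusing the evaluation loop from the usage example above (the `filter_by_score` helper is hypothetical, not part of the library):

```python
def filter_by_score(prediction: dict, threshold: float) -> dict:
    # Keep only detections whose confidence meets the threshold.
    keep = prediction["scores"] >= threshold
    return {key: value[keep] for key, value in prediction.items()}

for images, targets in data_loader:
    predictions = model(images)
    predictions = [filter_by_score(p, threshold=0.6) for p in predictions]
    evaluator.update(predictions, targets)
```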
Ignoring the time spent getting predictions from the model, evaluation is very fast.
- Bbox: ~400 samples/second
- Bbox + mask: ~100 samples/second
- Using a Google Cloud `n1-standard-4` VM (4 vCPUs, 16 GB RAM).

Note: Speeds depend on the number of detections per image, and therefore on the model and score threshold.
Keypoints Usage
Keypoint mAP and mAR normally use pre-computed "sigmas" to determine the "correctness" of each keypoint prediction. Unfortunately, those sigmas are tailored specifically to human pose (as in the COCO dataset), and are not applicable to other keypoint datasets.
NOTE: Sigmas are actually computed from the predictions of a specific model trained on COCO. To make this applicable to other datasets, you would need to train a model on that dataset, and then use the sigmas from that model. The logic is somewhat circular -- you need to train a model to get the sigmas, but you need the sigmas to compute mAP / mAR.
There's no way around this, unless a large body of pretrained models is already available for the dataset you're using. For most real-world problems, that is not the case. So the open-source mAP / mAR keypoint metrics are not generally extensible to other datasets.
`simple-cocotools` does not use sigmas, and instead computes the average distance between each keypoint prediction and its ground truth. This is a much simpler approach, and is more applicable to other datasets. It's roughly how the sigmas for COCO were originally computed. The downside is that it's not directly comparable to the official COCO keypoints mAP / mAR.
Some keypoints are more ambiguous than others. For example, "left hip" is much more ambiguous than "left eye" -- the exact location of "left eye" should be obvious, while "left hip" is hidden by the torso and clothing. The average distance for "left hip" will be much larger than for "left eye", even when the predictions are correct. (This is how sigmas were used in the official COCO keypoints mAP / mAR.) For that reason, keypoint distances should be interpreted with some knowledge about the specific dataset at hand.
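A minimal sketch of evaluating keypoints, assuming the prediction and target dictionaries carry a "keypoints" entry in the `torchvision` Keypoint R-CNN format (an `(N, K, 3)` tensor of `(x, y, visibility)` per instance); this mirrors the box/mask example above and is an assumption about the API, not a verbatim recipe:

```python
from torchvision.models.detection import keypointrcnn_resnet50_fpn

from simple_cocotools import CocoEvaluator

evaluator = CocoEvaluator()
model = keypointrcnn_resnet50_fpn(pretrained=True).eval()

for images, targets in data_loader:
    # Each prediction dict includes a "keypoints" tensor of shape (N, K, 3).
    predictions = model(images)
    evaluator.update(predictions, targets)

metrics = evaluator.summarize()
```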
`metrics` will be a dictionary with the format:
```python
{
    "box": {
        "mAP": 0.40,
        "mAR": 0.41,
        "class_AP": {
            "cat": 0.39,
            "dog": 0.42,
            ...
        },
        "class_AR": {
            # Same as 'class_AP' above.
        }
    },
    "keypoints": {
        "distance": 0.10,
        "class_distance": {
            "cat": {
                "distance": 0.11,
                "keypoint_distance": {
                    "left_eye": 0.12,
                    "right_eye": 0.13,
                    ...
                }
            },
            ...
        }
    }
}
```
How It Works
TODO: Blog post on how `simple-cocotools` works.
- Match the predictions and labels together, maximizing the IoU between pairs with the same object class. SciPy's `linear_sum_assignment` method does most of the heavy lifting here. (A sketch of this step is shown after this list.)
- For each IoU threshold, determine the number of "correct" predictions from the assignments above. Pairs with IoU below the threshold are incorrect.
- For each image, count the number of total predictions, correct predictions, and ground truth labels for each object class and IoU threshold.
- Compute AP/AR for each class from the prediction counts above. Then compute mAP and mAR by averaging over all object classes.
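A rough sketch of the matching step, assuming axis-aligned boxes in `(xmin, ymin, xmax, ymax)` format (the `pairwise_iou` and `match_boxes` helpers are illustrative, not the library's actual implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def pairwise_iou(pred_boxes: np.ndarray, true_boxes: np.ndarray) -> np.ndarray:
    """IoU between every (pred, true) pair of (xmin, ymin, xmax, ymax) boxes."""
    x1 = np.maximum(pred_boxes[:, None, 0], true_boxes[None, :, 0])
    y1 = np.maximum(pred_boxes[:, None, 1], true_boxes[None, :, 1])
    x2 = np.minimum(pred_boxes[:, None, 2], true_boxes[None, :, 2])
    y2 = np.minimum(pred_boxes[:, None, 3], true_boxes[None, :, 3])
    intersection = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    pred_area = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (pred_boxes[:, 3] - pred_boxes[:, 1])
    true_area = (true_boxes[:, 2] - true_boxes[:, 0]) * (true_boxes[:, 3] - true_boxes[:, 1])
    union = pred_area[:, None] + true_area[None, :] - intersection
    return intersection / union


def match_boxes(pred_boxes: np.ndarray, true_boxes: np.ndarray):
    """Assign predictions to ground truth labels, maximizing total IoU."""
    iou = pairwise_iou(pred_boxes, true_boxes)
    # linear_sum_assignment minimizes cost, so negate IoU to maximize it.
    pred_idx, true_idx = linear_sum_assignment(-iou)
    return pred_idx, true_idx, iou[pred_idx, true_idx]
```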