
A small package that evaluates COCO detection results from OpenMMLab and Detectron(2).


COCO FROC analysis

FROC analysis for COCO annotations and Detectron(2) detection results. The COCO annotation format is defined in the official COCO dataset documentation.


pip install coco-froc-analysis


A single annotation record in the ground-truth file might look like this:

  "area": 2120,
  "iscrowd": 0,
  "bbox": [111, 24, 53, 40],
  "category_id": 3,
  "ignore": 0,
  "segmentation": [],
  "image_id": 407,
  "id": 945

While a prediction record (here for a bounding box) produced by the detection framework looks like this:

  "image_id": 407,
  "category_id": 3,
  "score": 0.9990422129631042,
  "bbox": [

The FROC analysis counts the number of images and the number of lesions in the ground-truth file for all categories, then counts the lesion-localization and non-lesion-localization predictions. By default, a lesion counts as localized if the center of a predicted box lies inside any ground-truth box and the categories match. If you prefer to use IoU instead, you provide the threshold that defines the 'close enough' relation.
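The two matching rules described above can be sketched as follows. This is a plain re-implementation for illustration (not the package's internal code); boxes are COCO-style `[x, y, w, h]`:

```python
def center_inside(pred_box, gt_box):
    """True if the predicted box's center falls inside the GT box."""
    px, py, pw, ph = pred_box
    gx, gy, gw, gh = gt_box
    cx, cy = px + pw / 2, py + ph / 2
    return gx <= cx <= gx + gw and gy <= cy <= gy + gh

def iou(box_a, box_b):
    """Intersection over union for COCO-style [x, y, w, h] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def is_localized(pred, gt, use_iou=False, iou_thres=0.5):
    """A prediction localizes a GT lesion if the categories match and
    either its center lies inside the GT box (default) or the IoU
    reaches the threshold."""
    if pred["category_id"] != gt["category_id"]:
        return False
    if use_iou:
        return iou(pred["bbox"], gt["bbox"]) >= iou_thres
    return center_inside(pred["bbox"], gt["bbox"])
```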


from froc_analysis import generate_froc_curve, generate_bootstrap_curves

# For a single FROC curve ('gt.json' / 'pred.json' are placeholder paths)
generate_froc_curve(gt_ann='gt.json', pred_ann='pred.json',
                    use_iou=False, iou_thres=.5, n_sample_points=75,
                    plot_title='FROC', plot_output_path='froc.png')

# For bootstrapped FROC curves
generate_bootstrap_curves(gt_ann='gt.json', pred_ann='pred.json',
                          n_bootstrap_samples=5,
                          use_iou=False, iou_thres=.5, n_sample_points=25,
                          plot_title='FROC', plot_output_path='froc.png')

CLI Usage

usage: [-h] [--bootstrap] --gt_ann GT_ANN --pred_ann PRED_ANN [--use_iou] [--iou_thres IOU_THRES] [--n_sample_points N_SAMPLE_POINTS] [--n_bootstrap_samples N_BOOTSTRAP_SAMPLES]
                        [--plot_title PLOT_TITLE] [--plot_output_path PLOT_OUTPUT_PATH]

optional arguments:
  -h, --help            show this help message and exit
  --bootstrap           Whether to do a single run or bootstrap runs.
  --gt_ann GT_ANN
  --pred_ann PRED_ANN
  --use_iou             Use the IoU score to decide on `proximity` rather than checking whether the center pixel is inside the GT box.
  --iou_thres IOU_THRES
                        If the IoU score is used, the default threshold is .5.
  --n_sample_points N_SAMPLE_POINTS
                        Number of points to evaluate the FROC curve at.
  --n_bootstrap_samples N_BOOTSTRAP_SAMPLES
                        Number of bootstrap samples.
  --plot_title PLOT_TITLE
  --plot_output_path PLOT_OUTPUT_PATH

By default, centroid closeness is used. If the --use_iou flag is set, --iou_thres defaults to .75, while the score threshold defaults to .5. The code outputs the FROC curve for the given detection results and ground-truth dataset.

Running tests

python -m coverage run -m unittest -v
python -m coverage report -m

Regards, Alex



Files for coco-froc-analysis, version 0.0.29:

- coco_froc_analysis-0.0.29.tar.gz (10.5 kB, source)
- coco_froc_analysis-0.0.29-py3-none-any.whl (16.1 kB, py3 wheel)
