Gesund.ai package for running validation metrics for classification, semantic segmentation, instance segmentation, and object detection models.

Validation Metrics Library

Overview

This library provides tools for calculating validation metrics for predictions and annotations in machine learning workflows. It includes a command-line tool for computing and displaying validation metrics.

Installation

Install the package and its dependencies from the repository root via pip:

pip install .

Usage

Command-Line Tool

The primary script for running validation metrics is run_metrics.py, invoked as a module (python -m scripts.run_metrics). It calculates validation metrics from JSON files containing predictions and annotations.

Arguments

  • --annotations (required): Path to the JSON file containing annotation data.
  • --predictions (required): Path to the JSON file containing prediction data.
  • --class_mappings (required): Path to the JSON file containing class-mapping data.
  • --output (optional): Path to the file where the results will be saved. If omitted, the results are printed to the console.

Example

  1. Basic Usage: Print metrics to the console

    python -m scripts.run_metrics --annotations path/to/annotations.json --predictions path/to/predictions.json --class_mappings path/to/class_mappings.json
    
  2. Save Metrics to File: Save metrics to a specified file

    python -m scripts.run_metrics --annotations test_data/test_annotations_classification.json --predictions test_data/test_predictions_classification.json --class_mappings test_data/test_class_mappings.json --output ./testing.json
    

This command runs the metrics calculation and saves the results to ./testing.json. If the --output flag is not provided, the results are displayed directly in the console.

Example JSON Inputs

  • Annotations (test_annotations_classification.json):

    {
      "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "annotation": [
          {
            "id": "664dfb2085d8059c72b7b24a",
            "label": 0
          }
        ]
      },
      "664df1bf782d9eb107789015": {
        "image_id": "664df1bf782d9eb107789015",
        "annotation": [
          {
            "id": "664dfb2085d8059c72b7b24d",
            "label": 1
          }
        ]
      },
      ...
    }
    
  • Predictions (test_predictions_classification.json):

    {
      "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "prediction_class": 1,
        "confidence": 0.731047693767988,
        "logits": [
          0.2689523062320121,
          0.731047693767988
        ],
        "loss": 16.11764907836914
      },
      "664df1bf782d9eb107789015": {
        "image_id": "664df1bf782d9eb107789015",
        "prediction_class": 1,
        "confidence": 0.7308736572776326,
        "logits": [
          0.26912634272236735,
          0.7308736572776326
        ],
        "loss": 0.007578411139547825
      },
      ...
    }
    
  • Class Mappings (test_class_mappings.json):

    {"0": "normal", "1": "pneumonia"}
    

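Annotations and predictions share the same top-level image_id keys, so records pair up directly. A minimal sketch of that pairing with toy data mirroring the shapes above (the ids are hypothetical; this is plain-accuracy bookkeeping, not the library's internals):

```python
# Toy records in the same shape as the annotation/prediction JSON above.
annotations = {
    "img_a": {"image_id": "img_a", "annotation": [{"id": "ann_1", "label": 0}]},
    "img_b": {"image_id": "img_b", "annotation": [{"id": "ann_2", "label": 1}]},
}
predictions = {
    "img_a": {"image_id": "img_a", "prediction_class": 1, "confidence": 0.73},
    "img_b": {"image_id": "img_b", "prediction_class": 1, "confidence": 0.73},
}
class_mappings = {"0": "normal", "1": "pneumonia"}

# Pair records by the shared image_id key and count agreements.
correct = sum(
    predictions[image_id]["prediction_class"] == record["annotation"][0]["label"]
    for image_id, record in annotations.items()
)
accuracy = correct / len(annotations)
print(accuracy)  # 0.5
```

The class_mappings file only translates integer labels (0, 1) into human-readable names for reporting; the metric computation itself runs on the integer labels.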
Example Outputs

Console Output

When results are printed to the console, they will be in the following format:

Validation Metrics:
----------------------------------------
Accuracy:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Micro F1:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Macro F1:
    Validation: 0.4000
    Confidence_Interval: 0.2303 to 0.5697
----------------------------------------
AUC:
    Validation: 0.3996
    Confidence_Interval: 0.2299 to 0.5693
----------------------------------------
Precision:
    Validation: 0.4343
    Confidence_Interval: 0.2625 to 0.6060
----------------------------------------
Sensitivity:
    Validation: 0.4549
    Confidence_Interval: 0.2824 to 0.6274
----------------------------------------
Specificity:
    Validation: 0.4549
    Confidence_Interval: 0.2824 to 0.6274
----------------------------------------
Matthews C C:
    Validation: -0.1089
    Confidence_Interval: 0.0010 to 0.2168
----------------------------------------

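For the proportion-style metrics in this sample, the reported intervals match a 95% normal-approximation (Wald) interval with a sample size of n = 32. Both the interval method and the sample size are inferences from the numbers above, not documented behavior; a sketch:

```python
import math

def normal_approx_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation (Wald) interval for a proportion p over n samples."""
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Reproduce the Accuracy interval from the sample output (n = 32 is inferred).
lower, upper = normal_approx_ci(0.4375, 32)
print(f"{lower:.4f} to {upper:.4f}")  # 0.2656 to 0.6094
```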
Output File

If the --output flag is used, the metrics will be saved in the specified file path. The format of the saved file will be the same as the console output.
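Because the saved file uses the same plain-text layout as the console output (the .json extension in the example notwithstanding), downstream tooling needs a small parser. A sketch assuming exactly the layout shown above; parse_report is a hypothetical helper, not part of the library:

```python
REPORT = """Validation Metrics:
----------------------------------------
Accuracy:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Macro F1:
    Validation: 0.4000
    Confidence_Interval: 0.2303 to 0.5697
----------------------------------------"""

def parse_report(text: str) -> dict:
    """Parse the plain-text report into {name: {"validation": x, "ci": (lo, hi)}}."""
    metrics = {}
    for block in text.split("-" * 40):
        lines = [line.strip() for line in block.splitlines() if line.strip()]
        if len(lines) < 3:
            continue  # skip the "Validation Metrics:" header and empty fragments
        name = lines[0].rstrip(":")
        validation = float(lines[1].split(":", 1)[1])
        low, high = lines[2].split(":", 1)[1].split("to")
        metrics[name] = {"validation": validation, "ci": (float(low), float(high))}
    return metrics

print(parse_report(REPORT)["Accuracy"])
```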
