
Gesund.ai package for running validation metrics for classification, semantic segmentation, instance segmentation, and object detection models.


Validation Metrics Library

Overview

This library provides tools for calculating validation metrics for predictions and annotations in machine learning workflows. It includes a command-line tool for computing and displaying validation metrics.

Installation

To use this library, install the package and its dependencies from the repository root via pip:

pip install .

Usage

Command-Line Tool

The primary script for running validation metrics is scripts/run_metrics.py, invoked as a module with python -m scripts.run_metrics. It calculates validation metrics from JSON files containing predictions, annotations, and class mappings.

Arguments

  • --annotations (required): Path to the JSON file containing annotation data.
  • --predictions (required): Path to the JSON file containing prediction data.
  • --class_mappings (required): Path to the JSON file containing class-mapping data.
  • --output (optional): Path to the file where the results will be saved. If not provided, the results are printed to the console.
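
The documented interface can be sketched with argparse as follows. This is an illustrative reconstruction, not the library's actual source; the flag names are taken from the usage examples below, and build_parser is a hypothetical helper name:

```python
import argparse

def build_parser():
    # Mirrors the command-line interface documented in this README.
    parser = argparse.ArgumentParser(
        description="Compute validation metrics from prediction and annotation JSON files."
    )
    parser.add_argument("--annotations", required=True,
                        help="Path to the JSON file containing annotation data.")
    parser.add_argument("--predictions", required=True,
                        help="Path to the JSON file containing prediction data.")
    parser.add_argument("--class_mappings", required=True,
                        help="Path to the JSON file containing class-mapping data.")
    parser.add_argument("--output", default=None,
                        help="Optional path to save results; prints to console if omitted.")
    return parser

# Example invocation with placeholder paths:
args = build_parser().parse_args(
    ["--annotations", "a.json", "--predictions", "p.json",
     "--class_mappings", "c.json", "--output", "out.json"]
)
```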

Example

  1. Basic Usage: Print metrics to the console

    python -m scripts.run_metrics --annotations path/to/annotations.json --predictions path/to/predictions.json --class_mappings path/to/class_mappings.json
    
  2. Save Metrics to File: Save metrics to a specified file

    python -m scripts.run_metrics --annotations test_data/test_annotations_classification.json --predictions test_data/test_predictions_classification.json --class_mappings test_data/test_class_mappings.json --output ./testing.json
    

This command will execute the metrics calculation and save the results to the path given by --output (./testing.json in the example above). If the --output flag is not provided, the results will be displayed directly in the console.

Example JSON Inputs

  • Annotations (test_annotations_classification.json):

    {
      "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "annotation": [
          {
            "id": "664dfb2085d8059c72b7b24a",
            "label": 0
          }
        ]
      },
      "664df1bf782d9eb107789015": {
        "image_id": "664df1bf782d9eb107789015",
        "annotation": [
          {
            "id": "664dfb2085d8059c72b7b24d",
            "label": 1
          }
        ]
      },
      ...
    }
    
  • Predictions (test_predictions_classification.json):

    {
      "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "prediction_class": 1,
        "confidence": 0.731047693767988,
        "logits": [
          0.2689523062320121,
          0.731047693767988
        ],
        "loss": 16.11764907836914
      },
      "664df1bf782d9eb107789015": {
        "image_id": "664df1bf782d9eb107789015",
        "prediction_class": 1,
        "confidence": 0.7308736572776326,
        "logits": [
          0.26912634272236735,
          0.7308736572776326
        ],
        "loss": 0.007578411139547825
      },
      ...
    }
    
  • Class Mappings (test_class_mappings.json):

    {"0": "normal", "1": "pneumonia"}
    
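Before running the script, it can help to sanity-check that the three files line up. The snippet below is an illustrative helper, not part of the library: check_inputs is a hypothetical function, and the data is an inlined subset of the examples above. It verifies that every prediction has a matching annotation and that all labels appear in the class mapping:

```python
def check_inputs(annotations, predictions, class_mappings):
    """Return a list of human-readable problems; empty means the inputs line up."""
    problems = []
    for image_id, pred in predictions.items():
        if image_id not in annotations:
            problems.append(f"prediction {image_id} has no annotation")
        elif str(pred["prediction_class"]) not in class_mappings:
            problems.append(
                f"prediction {image_id} uses unmapped class {pred['prediction_class']}"
            )
    for image_id, ann in annotations.items():
        for entry in ann["annotation"]:
            if str(entry["label"]) not in class_mappings:
                problems.append(
                    f"annotation {image_id} uses unmapped label {entry['label']}"
                )
    return problems

# Inlined subset of the example JSON files above.
annotations = {
    "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "annotation": [{"id": "664dfb2085d8059c72b7b24a", "label": 0}],
    },
}
predictions = {
    "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "prediction_class": 1,
        "confidence": 0.731047693767988,
    },
}
class_mappings = {"0": "normal", "1": "pneumonia"}
```

In practice you would load each dict with json.load from the three file paths passed on the command line.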

Example Outputs

Console Output

When results are printed to the console, they will be in the following format:

Validation Metrics:
----------------------------------------
Accuracy:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Micro F1:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Macro F1:
    Validation: 0.4000
    Confidence_Interval: 0.2303 to 0.5697
----------------------------------------
AUC:
    Validation: 0.3996
    Confidence_Interval: 0.2299 to 0.5693
----------------------------------------
Precision:
    Validation: 0.4343
    Confidence_Interval: 0.2625 to 0.6060
----------------------------------------
Sensitivity:
    Validation: 0.4549
    Confidence_Interval: 0.2824 to 0.6274
----------------------------------------
Specificity:
    Validation: 0.4549
    Confidence_Interval: 0.2824 to 0.6274
----------------------------------------
Matthews C C:
    Validation: -0.1089
    Confidence_Interval: 0.0010 to 0.2168
----------------------------------------
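
For intuition, an accuracy value with a confidence interval of the kind shown above can be computed as follows. This is a sketch using the normal-approximation (Wald) interval; the library's own estimator may differ (for example, it may use bootstrapping), and accuracy_with_ci is a hypothetical helper name:

```python
import math

def accuracy_with_ci(y_true, y_pred, z=1.96):
    """Accuracy plus a 95% normal-approximation (Wald) confidence interval."""
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    half_width = z * math.sqrt(acc * (1 - acc) / n)
    return acc, max(0.0, acc - half_width), min(1.0, acc + half_width)
```

Here y_true would be built from each annotation's label and y_pred from the prediction_class for the same image_id.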

Output File

If the --output flag is used, the metrics will be saved in the specified file path. The format of the saved file will be the same as the console output.
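
Because the saved file uses the same plain-text layout as the console output, it can be read back with a small parser. The sketch below assumes only the format shown above; parse_metrics is a hypothetical helper, not a library function:

```python
def parse_metrics(text):
    """Parse the metric / Validation / Confidence_Interval layout into a dict."""
    metrics, current = {}, None
    for line in text.splitlines():
        stripped = line.strip()
        # Skip blanks, separator rules, and the report title.
        if not stripped or set(stripped) == {"-"} or stripped == "Validation Metrics:":
            continue
        if stripped.startswith("Validation:"):
            metrics[current]["validation"] = float(stripped.split(":")[1])
        elif stripped.startswith("Confidence_Interval:"):
            lo, hi = stripped.split(":")[1].split("to")
            metrics[current]["ci"] = (float(lo), float(hi))
        elif stripped.endswith(":"):
            current = stripped[:-1]          # metric name, e.g. "Accuracy"
            metrics[current] = {}
    return metrics
```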
