Gesund.ai package for running validation metrics for classification, semantic segmentation, instance segmentation, and object detection models.

Validation Metrics Library

Overview

This library provides tools for calculating validation metrics from model predictions and ground-truth annotations in machine learning workflows, and includes a command-line tool for computing and displaying those metrics.

Installation

To use this library, install the package and its dependencies from the project root with pip:

pip install .

Usage

Command-Line Tool

The primary script for running validation metrics is run_metrics.py. This script calculates validation metrics based on JSON files containing predictions and annotations.

Arguments

  • annotations (required): Path to the JSON file containing annotation data.
  • predictions (required): Path to the JSON file containing prediction data.
  • class_mappings (required): Path to the JSON file containing class-to-label mappings.
  • --output (optional): Path to the file where the results will be saved. If not provided, the results will be printed to the console.

Example

  1. Basic Usage: Print metrics to the console

    python run_metrics.py path/to/annotations.json path/to/predictions.json path/to/class_mappings.json
    
  2. Save Metrics to File: Save metrics to a specified file

    python -m scripts.run_metrics --annotations test_data/test_annotations_classification.json --predictions test_data/test_predictions_classification.json --class_mappings test_data/class_mappings.json --output path/to/output.json
    

This command will execute the metrics calculation and save the results to path/to/output.json. If the --output flag is not provided, the results will be displayed directly in the console.
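
The argument handling documented above can be sketched with `argparse`. This is a hypothetical reconstruction for illustration, not the script's actual source; the real flag names and defaults may differ:

```python
import argparse


def parse_args(argv=None):
    # Mirrors the documented CLI: three required JSON paths plus an
    # optional --output destination (console output when omitted).
    parser = argparse.ArgumentParser(description="Compute validation metrics.")
    parser.add_argument("--annotations", required=True,
                        help="Path to the annotations JSON file")
    parser.add_argument("--predictions", required=True,
                        help="Path to the predictions JSON file")
    parser.add_argument("--class_mappings", required=True,
                        help="Path to the class mappings JSON file")
    parser.add_argument("--output", default=None,
                        help="Optional output path; results print to the console if omitted")
    return parser.parse_args(argv)


args = parse_args([
    "--annotations", "test_data/test_annotations_classification.json",
    "--predictions", "test_data/test_predictions_classification.json",
    "--class_mappings", "test_data/class_mappings.json",
])
print(args.output)  # None, so results would go to the console
```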

Example JSON Inputs

  • Annotations (test_annotations_classification.json):

    {
      "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "annotation": [
          {
            "id": "664dfb2085d8059c72b7b24a",
            "label": 0
          }
        ]
      },
      "664df1bf782d9eb107789015": {
        "image_id": "664df1bf782d9eb107789015",
        "annotation": [
          {
            "id": "664dfb2085d8059c72b7b24d",
            "label": 1
          }
        ]
      },
      ...
    }
    
  • Predictions (test_predictions_classification.json):

    {
      "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "prediction_class": 1,
        "confidence": 0.731047693767988,
        "logits": [
          0.2689523062320121,
          0.731047693767988
        ],
        "loss": 16.11764907836914
      },
      "664df1bf782d9eb107789015": {
        "image_id": "664df1bf782d9eb107789015",
        "prediction_class": 1,
        "confidence": 0.7308736572776326,
        "logits": [
          0.26912634272236735,
          0.7308736572776326
        ],
        "loss": 0.007578411139547825
      },
      ...
    }
    
  • Class Mappings (test_class_mappings.json):

    {"0": "normal", "1": "pneumonia"}
    
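
To make the record layout concrete, the snippet below pairs the example annotation and prediction records by their shared `image_id` keys and scores accuracy by hand. This is a minimal sketch of how the inputs relate, not the library's implementation:

```python
# Records copied from the example inputs above.
annotations = {
    "664df1bf782d9eb107789013": {
        "image_id": "664df1bf782d9eb107789013",
        "annotation": [{"id": "664dfb2085d8059c72b7b24a", "label": 0}],
    },
    "664df1bf782d9eb107789015": {
        "image_id": "664df1bf782d9eb107789015",
        "annotation": [{"id": "664dfb2085d8059c72b7b24d", "label": 1}],
    },
}
predictions = {
    "664df1bf782d9eb107789013": {"image_id": "664df1bf782d9eb107789013", "prediction_class": 1},
    "664df1bf782d9eb107789015": {"image_id": "664df1bf782d9eb107789015", "prediction_class": 1},
}
class_mappings = {"0": "normal", "1": "pneumonia"}

# Each prediction is matched to its annotation via the image_id key.
correct = sum(
    predictions[image_id]["prediction_class"] == record["annotation"][0]["label"]
    for image_id, record in annotations.items()
)
accuracy = correct / len(annotations)
print(f"Accuracy: {accuracy:.4f}")  # one of the two predictions matches -> 0.5000
```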

Example Outputs

Console Output

When results are printed to the console, they will be in the following format:

Validation Metrics:
----------------------------------------
Accuracy:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Micro F1:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Macro F1:
    Validation: 0.4000
    Confidence_Interval: 0.2303 to 0.5697
----------------------------------------
AUC:
    Validation: 0.3996
    Confidence_Interval: 0.2299 to 0.5693
----------------------------------------
Precision:
    Validation: 0.4343
    Confidence_Interval: 0.2625 to 0.6060
----------------------------------------
Sensitivity:
    Validation: 0.4549
    Confidence_Interval: 0.2824 to 0.6274
----------------------------------------
Specificity:
    Validation: 0.4549
    Confidence_Interval: 0.2824 to 0.6274
----------------------------------------
Matthews C C:
    Validation: -0.1089
    Confidence_Interval: 0.0010 to 0.2168
----------------------------------------

Output File

If the --output flag is used, the metrics will be saved in the specified file path. The format of the saved file will be the same as the console output.
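
The intervals in the sample output are consistent with a normal-approximation (Wald) confidence interval for a proportion; the Accuracy row above is reproduced exactly if the test set held 32 samples. Both the formula and the sample size are inferences from the printed numbers, not something the library documents:

```python
import math


def accuracy_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion p over n samples."""
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half


lo, hi = accuracy_ci(0.4375, 32)
print(f"Confidence_Interval: {lo:.4f} to {hi:.4f}")  # 0.2656 to 0.6094, matching the Accuracy row
```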
