A module for evaluating the predictions of the models trained on MEDS datasets.

Project description

MEDS Evaluation

Evaluation API for the MEDS Decentralized, Extensible Validation (MEDS-DEV) benchmark.

[!NOTE] This is a work-in-progress package and currently only supports evaluation of binary classification tasks.

Intended usage

The MEDS Evaluation pipeline is intended to be used together with MEDS-DEV, but it can also be adapted for standalone use.

Please refer to the MEDS-DEV tutorial to learn how to extract and prepare the data in the MEDS format and obtain model predictions ready to be evaluated.

Prediction schema

Inputs to MEDS Evaluation must follow the prediction schema, which by default has five fields:

  1. subject_id: ID of the subject (patient) associated with the event
  2. prediction_time: time at which the prediction is being made
  3. boolean_value: ground truth boolean label for the prediction task
  4. predicted_boolean_value (optional): predicted boolean label generated by the model
  5. predicted_boolean_probability (optional): predicted probability generated by the model

This is equivalent to the following polars schema:

```python
Schema(
    [
        ("subject_id", Int64),
        ("prediction_time", Datetime(time_unit="us")),
        ("boolean_value", Boolean),
        ("predicted_boolean_value", Boolean),
        ("predicted_boolean_probability", Float64),
    ]
)
```
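For illustration, here is a minimal sketch of prediction rows conforming to this schema. It uses plain Python dicts and the standard library rather than a polars DataFrame, purely to keep the example dependency-free; the field names mirror the schema above.

```python
from datetime import datetime

# Illustrative prediction rows mirroring the prediction schema
# (plain dicts instead of a polars DataFrame, for brevity).
predictions = [
    {
        "subject_id": 1,
        "prediction_time": datetime(2024, 1, 15, 9, 30),
        "boolean_value": True,                  # ground-truth label
        "predicted_boolean_value": True,        # optional hard prediction
        "predicted_boolean_probability": 0.87,  # optional probability
    },
    {
        "subject_id": 2,
        "prediction_time": datetime(2024, 1, 16, 14, 0),
        "boolean_value": False,
        "predicted_boolean_value": False,
        "predicted_boolean_probability": 0.12,
    },
]

# At least one of the two optional prediction columns must contain
# non-null values for evaluation to be possible.
has_values = any(r["predicted_boolean_value"] is not None for r in predictions)
has_probs = any(r["predicted_boolean_probability"] is not None for r in predictions)
assert has_values or has_probs
```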

Note that while predicted_boolean_value and predicted_boolean_probability are optional, at least one of them must be present and contain non-null values for results to be generated. The schema may also contain additional fields, but these are currently ignored by MEDS Evaluation.
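Once predictions in this form are available, binary-classification metrics can be computed from boolean_value and predicted_boolean_probability. As an illustration, the helper below computes AUROC via pairwise comparisons using only the standard library; it is a sketch, not the package's own implementation, and MEDS Evaluation's actual metric set and method may differ.

```python
def auroc(labels, scores):
    """AUROC via pairwise comparison of positives and negatives.

    O(n^2) sketch: the fraction of (positive, negative) pairs in which
    the positive example receives the higher score (ties count as 0.5).
    """
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos
        for n in neg
    )
    return wins / (len(pos) * len(neg))

# Labels and probabilities in the roles of boolean_value and
# predicted_boolean_probability from the schema above.
labels = [True, False, True, False]
scores = [0.9, 0.1, 0.8, 0.4]
print(auroc(labels, scores))  # perfect separation -> 1.0
```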

Project details


Download files

Download the file for your platform.

Source Distribution

meds_evaluation-0.0.1.tar.gz (15.0 kB view details)

Uploaded Source

Built Distribution

meds_evaluation-0.0.1-py3-none-any.whl (8.0 kB view details)

Uploaded Python 3

File details

Details for the file meds_evaluation-0.0.1.tar.gz.

File metadata

  • Download URL: meds_evaluation-0.0.1.tar.gz
  • Upload date:
  • Size: 15.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for meds_evaluation-0.0.1.tar.gz
Algorithm Hash digest
SHA256 a6b56dc2dfc48f4206c90e6e8c136b88f9c6dae5ce95fa3c96972cf710acdc5e
MD5 e488d71b866e56bfa6308d7f1e0585d3
BLAKE2b-256 c7c34d76615d2b61e8eb0204d0d7279504d11676af2e26c42aa8685cbc3b76dc

See more details on using hashes here.

Provenance

The following attestation bundles were made for meds_evaluation-0.0.1.tar.gz:

Publisher: publish-to-pypi.yaml on kamilest/meds-evaluation

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file meds_evaluation-0.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for meds_evaluation-0.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 fec72beae760b1d2bde0f66ee9e8f83ca55284b68a257c2bdfac164765dcd845
MD5 28d2cb02f3d3d1a2ab926ff9999d6e38
BLAKE2b-256 49b131e3085165e865b5ab8cea5ec87fe65d230f2a293db1c1fa3fa9efd2cb01

See more details on using hashes here.

Provenance

The following attestation bundles were made for meds_evaluation-0.0.1-py3-none-any.whl:

Publisher: publish-to-pypi.yaml on kamilest/meds-evaluation

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
