
Scalable Tabularization of MEDS format Time-Series data


Scalable tabularization and tabular feature usage utilities over generic MEDS datasets


This repository provides utilities and scripts to run limited automatic tabular ML pipelines for generic MEDS datasets.

Installation

To use MEDS-Tab, install the package using one of the commands below:

Pip Install

pip install meds-tab

Local Install

# clone the git repo and cd into the repository root
pip install .

Usage

This repository consists of two key pieces:

  1. Construction and efficient loading of tabular (flat, non-longitudinal) summary features describing patient records in MEDS over arbitrary time windows (e.g. 1 year, 6 months, etc.), either backwards or forwards in time from a given index date. Naturally, only "look-back" windows should be used for future-event prediction tasks, so only those are currently implemented (a toy sketch of this kind of windowed summarization follows this list).
  2. Running a basic XGBoost AutoML pipeline over these tabular features to predict arbitrary binary classification or regression downstream tasks defined over these datasets. The "AutoML" part of this is not particularly advanced -- what is more advanced is the efficient construction, storage, and loading of tabular features for the candidate AutoML models, enabling a far more extensive search over different featurization strategies.
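
To make the first point concrete, below is a minimal, self-contained sketch (pandas on a toy event table) of what summarizing patient history over a look-back window means. It is illustrative only and is not MEDS-Tab's implementation, which operates over sharded MEDS data using sparse matrix formats; the aggregation names in the column labels (e.g. code/count, value/sum) simply mirror those used by the pipeline.

    # Toy illustration of look-back window summarization (NOT MEDS-Tab internals).
    import pandas as pd

    events = pd.DataFrame({
        "patient_id": [1, 1, 1, 2],
        "timestamp": pd.to_datetime(["2020-01-01", "2020-01-10", "2020-02-15", "2020-03-01"]),
        "code": ["HR", "HR", "LAB_NA", "HR"],
        "numeric_value": [88.0, 92.0, 140.0, 75.0],
    })

    def lookback_summary(events: pd.DataFrame, window: pd.Timedelta) -> pd.DataFrame:
        """One row per (patient_id, timestamp); columns summarize the prior `window` of history."""
        rows = []
        for (pid, ts), _ in events.groupby(["patient_id", "timestamp"]):
            hist = events[
                (events["patient_id"] == pid)
                & (events["timestamp"] <= ts)
                & (events["timestamp"] > ts - window)
            ]
            row = {"patient_id": pid, "timestamp": ts}
            for code, grp in hist.groupby("code"):
                row[f"{code}/code/count"] = len(grp)                   # code/count aggregation
                row[f"{code}/value/sum"] = grp["numeric_value"].sum()  # value/sum aggregation
            rows.append(row)
        return pd.DataFrame(rows).fillna(0)

    print(lookback_summary(events, pd.Timedelta(days=30)))

Each unique (patient_id, timestamp) pair becomes one row and each (code, aggregation) pair becomes one column, which is why real-vocabulary feature matrices are extremely sparse and are stored in sparse formats.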

Scripts and Examples

See tests/test_integration.py for an example of the end-to-end pipeline being run on synthetic data. This script is a functional test that is also run with pytest to verify the correctness of the algorithm.

For an end-to-end example over MIMIC-IV, see the MIMIC-IV companion repository. For an end-to-end example over Philips eICU, see the eICU companion repository.

Core CLI Scripts Overview

  1. meds-tab-describe: This command processes MEDS data shards to compute the frequencies of the different code types:

    • time-series codes (codes with timestamps)
    • time-series numerical values (codes with timestamps and numerical values)
    • static codes (codes without timestamps)
    • static numerical codes (codes without timestamps but with numerical values).

    For example, to cache feature names and frequencies for a dataset stored in "path_to_data":

    meds-tab-describe MEDS_cohort_dir="path_to_data"
    
  2. meds-tab-tabularize-static: Filters and processes the dataset based on code frequency, generating a tabular vector for each patient at each timestamp in the shards. Each row corresponds to a unique patient_id and timestamp combination, so a patient with multiple timestamps contributes multiple rows.

    For example, to tabularize static data with a minimum code inclusion frequency of 10, window sizes of [1d, 30d, 365d, full], and value aggregation methods of [static/present, code/count, value/count, value/sum, value/sum_sqd, value/min, value/max]:

    meds-tab-tabularize-static MEDS_cohort_dir="path_to_data" \
                                tabularization.min_code_inclusion_frequency=10 \
                                tabularization.window_sizes=[1d,30d,365d,full] \
                                do_overwrite=False \
                                tabularization.aggs=[static/present,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]
    
  3. meds-tab-tabularize-time-series: Iterates through combinations of a shard, window_size, and aggregation to generate feature vectors that aggregate patient data for each unique patient_id x timestamp. This stage (and the previous stage) use sparse matrix formats to efficiently handle the computational and storage demands of rolling window calculations on large datasets. We support parallelization through Hydra's --multirun flag and the joblib launcher.

    For example, to aggregate time-series features across the different window sizes:

    meds-tab-tabularize-time-series --multirun \
                                    worker="range(0,$N_PARALLEL_WORKERS)" \
                                    hydra/launcher=joblib \
                                    MEDS_cohort_dir="path_to_data" \
                                    tabularization.min_code_inclusion_frequency=10 \
                                    do_overwrite=False \
                                    tabularization.window_sizes=[1d,30d,365d,full] \
                                    tabularization.aggs=[static/present,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]
    
  4. meds-tab-cache-task: Aligns task-specific labels with the nearest prior event in the tabularized data. It requires a labeled dataset directory with three columns (patient_id, timestamp, label), structured similarly to the MEDS_cohort_dir (see the label-file sketch after this list).

    For example, to align the tabularized data with labels for a specific task $TASK (e.g. labels extracted with ACES):

    meds-tab-cache-task MEDS_cohort_dir="path_to_data" \
                        task_name=$TASK \
                        tabularization.min_code_inclusion_frequency=10 \
                        do_overwrite=False \
                        tabularization.window_sizes=[1d,30d,365d,full] \
                        tabularization.aggs=[static/present,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max]
    
  5. meds-tab-xgboost: Trains an XGBoost model using user-specified parameters. Combinations of window_sizes and aggs can be generated using the generate-permutations command (see the section below).

    meds-tab-xgboost --multirun \
                      MEDS_cohort_dir="path_to_data" \
                      task_name=$TASK \
                      output_dir="output_directory" \
                      tabularization.min_code_inclusion_frequency=10 \
                      tabularization.window_sizes=$(generate-permutations [1d,30d,365d,full]) \
                      do_overwrite=False \
                      tabularization.aggs=$(generate-permutations [static/present,code/count,value/count,value/sum,value/sum_sqd,value/min,value/max])
    
  6. meds-tab-xgboost-sweep: Conducts an Optuna hyperparameter sweep over window_sizes, aggregations, and min_code_inclusion_frequency to optimize model performance.
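
For meds-tab-cache-task (step 4 above), you must supply task labels on disk with patient_id, timestamp, and label columns. Below is one way such a label table could be written; the task name, directory layout, and parquet format here are illustrative assumptions, not a statement of MEDS-Tab's exact on-disk contract.

    # Illustrative only: build a task-label table with the three required columns.
    # The task name and output path below are hypothetical.
    from pathlib import Path
    import pandas as pd

    labels = pd.DataFrame({
        "patient_id": [1, 2, 3],
        "timestamp": pd.to_datetime(["2020-02-01", "2020-03-15", "2020-04-02"]),  # prediction times
        "label": [0, 1, 0],  # binary outcome at each prediction time
    })

    # Mirror a sharded layout similar to the MEDS cohort directory (assumed path).
    out_path = Path("path_to_data/task_labels/my_task/0.parquet")
    out_path.parent.mkdir(parents=True, exist_ok=True)
    labels.to_parquet(out_path, index=False)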

Additional CLI Scripts

  1. generate-permutations: Generates and prints a sorted list of all non-empty combinations (subsets) of a comma-separated input list. This is provided for the convenience of sweeping over all possible combinations of window sizes and aggregations (a Python sketch of this expansion follows the examples below).

    For example, you can call generate-permutations directly on the command line:

    generate-permutations [2,3,4]
    [2], [2, 3], [2, 3, 4], [2, 4], [3], [3, 4], [4] 
    

    This can be used on the command line in concert with other calls. For example, the following call:

    meds-tab-xgboost --multirun tabularization.window_sizes=$(generate-permutations [1d,2d,7d,full])
    

    would resolve to:

    meds-tab-xgboost --multirun tabularization.window_sizes=[1d],[1d,2d],[1d,2d,7d],[1d,2d,7d,full],[1d,2d,full],[1d,7d],[1d,7d,full],[1d,full],[2d],[2d,7d],[2d,7d,full],[2d,full],[7d],[7d,full],[full]
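
    Under the hood, this expansion is just an enumeration of every non-empty subset of the input options, in sorted order. The short Python sketch below reproduces the same logic (the function name is illustrative, not the package's internal API):

    # Illustrative re-implementation of the expansion printed by generate-permutations.
    from itertools import combinations

    def all_option_subsets(options: list[str]) -> list[list[str]]:
        """Return every non-empty subset of `options`, in sorted order."""
        return sorted(
            list(combo)
            for r in range(1, len(options) + 1)
            for combo in combinations(options, r)
        )

    print(all_option_subsets(["1d", "2d", "7d", "full"]))
    # [['1d'], ['1d', '2d'], ['1d', '2d', '7d'], ['1d', '2d', '7d', 'full'], ...]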
    

How does MEDS-Tab Work?

What do you mean "tabular pipelines"? Isn't all structured EHR data already tabular?

This is a common misconception. Tabular data refers to data that can be organized in a consistent, logical set of rows and columns such that the entirety of a "sample" or "instance" for modeling or analysis is contained in a single row, and the set of columns possibly observed (there can be missingness) is consistent across all rows. Structured EHR data does not satisfy this definition: different patients have different numbers of observations of medical codes and values at different timestamps, so the raw data cannot simultaneously satisfy the (1) "single row per instance", (2) "consistent set of columns", and (3) "logical" requirements. Thus, when this pipeline produces a "tabular" view of MEDS data, it means a dataset that realizes these constraints, which explicitly involves summarizing the patient data over various historical or future windows in time to produce a single row per patient with a consistent, logical set of columns (though there may still be missingness).

Implementation Improvements

Computational Performance vs. Existing Pipelines

XGBoost Performance

XGBoost Model Performance on MIMIC-IV

XGBoost Model Performance on Philips eICU
