
A Python library for healthcare AI


Development Status: As of 11/09/2020, PyHealth is under active development and in its alpha stage. Please follow, star, and fork the repository to keep up with the latest features!

PyHealth is a comprehensive Python package for healthcare AI, designed for both ML researchers and healthcare/medical practitioners. PyHealth accepts diverse healthcare data such as longitudinal electronic health records (EHRs), continuous signals (ECG, EEG), and clinical notes (to be added), and supports various predictive modeling methods using deep learning and other advanced machine learning algorithms published in the literature.

The library is proudly developed and maintained by researchers from Carnegie Mellon University, IQVIA, and University of Illinois at Urbana-Champaign. PyHealth makes many important healthcare tasks accessible, such as phenotyping prediction, mortality prediction, and ICU length-of-stay forecasting. Running these prediction tasks with deep learning models can take as few as 10 lines of code in PyHealth.

PyHealth comes with three major modules: (i) the data preprocessing module, (ii) the learning module, and (iii) the evaluation module. Typically, one runs the data preprocessing module to prepare the data, feeds it to the learning module for model training and prediction, and finally assesses the results with the evaluation module. Users can run the full pipeline as described or use only selected modules, based on their needs:

  • Deep learning researchers may directly use the processed data along with the proposed new models.

  • Healthcare and medical personnel may leverage our data preprocessing module to convert medical data into a format that machine learning models can digest, and then perform inference tasks to get insights from the data. This package can support them in various health analytics tasks, including disease detection, risk prediction, patient subtyping, health monitoring, etc.

PyHealth is featured for:

  • Unified APIs, detailed documentation, and interactive examples across various types of datasets and algorithms.

  • Advanced models, including the latest deep learning models and classical machine learning models.

  • Wide coverage, supporting sequence data, image data, series data, and text data such as clinical notes.

  • Optimized performance with JIT and parallelization when possible, using numba and joblib.

  • Customizable modules and flexible design: each module may be turned on/off or totally replaced by custom functions. The trained models can be easily exported and reloaded for fast execution and deployment.

API Demo for LSTM on Mortality Prediction:

# load a pre-processed dataset (here: the MIMIC-III mortality task)
from pyhealth.data.expdata_generator import sequencedata as expdata_generator

expdata_id = '2020.0810.data.mortality.mimic'
cur_dataset = expdata_generator(exp_id=expdata_id)
cur_dataset.get_exp_data(sel_task='mortality')
cur_dataset.load_exp_data()

# initialize the model for training
from pyhealth.models.sequence.lstm import LSTM
# set use_gpu=True below to enable GPU training
expmodel_id = 'test.model.lstm.0001'
clf = LSTM(expmodel_id=expmodel_id, n_batchsize=20, use_gpu=True, n_epoch=100)
clf.fit(cur_dataset.train, cur_dataset.valid)

# load the best model for inference
clf.load_model()
clf.inference(cur_dataset.test)
pred_results = clf.get_results()

# evaluate the model
from pyhealth.evaluation.evaluator import func
r = func(pred_results['hat_y'], pred_results['y'])
print(r)

Citing PyHealth:

The PyHealth paper is under review at JMLR (machine learning open-source software track). If you use PyHealth in a scientific publication, we would appreciate citations to the following paper:

@article{zhao2020pyhealth,
  author  = {Zhao, Yue and Qiao, Zhi and Xiao, Cao and Glass, Lucas and Hu, Xiyang and Sun, Jimeng},
  title   = {PyHealth: A Python Library for Healthcare AI},
  year    = {2020},
}

or:

Zhao, Y., Qiao, Z., Xiao, C., Glass, L., Hu, X. and Sun, J., 2020. PyHealth: A Python Library for Healthcare AI.


Installation

It is recommended to use pip for installation. Please make sure the latest version is installed, as PyHealth is updated frequently:

pip install pyhealth            # normal install
pip install --upgrade pyhealth  # or update if needed
pip install --pre pyhealth      # or include pre-release version for new features

Alternatively, you can clone the repository and install from source:

git clone https://github.com/yzhao062/pyhealth.git
cd pyhealth
pip install .

Required Dependencies:

  • Python 3.5, 3.6, or 3.7

  • combo>=0.0.8

  • joblib

  • numpy>=1.13

  • numba>=0.35

  • pandas>=0.25

  • scipy>=0.20

  • scikit_learn>=0.20

  • tqdm

  • torch (this should be installed manually)

  • xgboost (this should be installed manually)

  • xlrd >= 1.0.0

  • zipfile36

  • PyWavelets

  • torchvision

Warning 1: PyHealth includes multiple neural-network-based models, e.g., LSTM, which are implemented in PyTorch. However, PyHealth does NOT install these DL libraries for you, which reduces the risk of interfering with your local copies. If you want to use the neural-network-based models, please make sure PyTorch is installed. Similarly, xgboost is NOT installed by default and is needed only for the xgboost-based models.
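
As a quick sanity check (a minimal sketch, not part of PyHealth's API), you can verify that the optional dependencies are available before using the corresponding models:

# minimal sanity check for the optional dependencies (not part of PyHealth's API)
for lib in ('torch', 'xgboost'):
    try:
        module = __import__(lib)
        print(lib, module.__version__, 'is available')
    except ImportError:
        print(lib, 'is not installed; models depending on it will not run')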


API Cheatsheet & Reference

Full API Reference: https://pyhealth.readthedocs.io/en/latest/pyhealth.html. API cheatsheet for most learning models:

  • fit(X_train, X_valid): Fit a learning model.

  • inference(X): Predict on X using the fitted estimator.

  • evaluator(y, y_hat): Model evaluation.

Model load and reload:

  • load_model(): Load the best model so far.

Preprocessed Datasets & Implemented Algorithms

(i) Preprocessed Datasets (customized data preprocessing functions are provided in the example folders):

Type | Abbr | Description | Processed Function | Link
Sequence: EHR-ICU | MIMIC III | A relational database containing tables of data relating to patients who stayed within the ICU. | \examples\data_generation\dataloader_mimic | https://mimic.physionet.org/gettingstarted/overview/
Sequence: EHR-ICU | MIMIC_demo | The MIMIC-III demo database, limited to 100 patients and excluding the noteevents table. | \examples\data_generation\dataloader_mimic_demo | https://mimic.physionet.org/gettingstarted/demo/
Sequence: EHR-Claim | CMS | DE-SynPUF: CMS 2008-2010 Data Entrepreneurs Synthetic Public Use File | \examples\data_generation\dataloader_cms | https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/SynPUFs
Image: Chest X-ray | Pediatric | Pediatric Chest X-ray Pneumonia (Bacterial vs. Viral vs. Normal) Dataset | N/A | https://academictorrents.com/details/951f829a8eeb4d2839c4a535db95078a9175010b
Series: ECG | PhysioNet | AF Classification from a Short Single-Lead ECG Recording Dataset | N/A | https://archive.physionet.org/challenge/2017/#challenge-data

You may download the above datasets at the provided links. The structure of the generated datasets can be found in the \datasets folder:

  • \datasets\cms\x_data...csv

  • \datasets\cms\y_data\phenotyping.csv

  • \datasets\cms\y_data\mortality.csv

The processed datasets (X, y) should be placed in x_data and y_data, respectively, so that they can be properly digested by the deep learning models. We include some sample datasets under the \datasets folder.
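
For a quick look at the label files, a minimal sketch using pandas (the exact columns, and the path relative to your working directory, may differ):

import pandas as pd

# inspect the sample mortality labels under \datasets\cms\y_data (path assumed relative to the repository root)
y_df = pd.read_csv('datasets/cms/y_data/mortality.csv')
print(y_df.shape)
print(y_df.head())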

(ii) Machine Learning and Deep Learning Models:

Type | Abbr | Class | Algorithm | Year | Ref
Classical Models | RandomForest | pyhealth.models.sequence.rf | Random Forests | 2000 | [2]
Classical Models | XGBoost | pyhealth.models.sequence.xgboost | XGBoost: A scalable tree boosting system | 2016 | [3]
Neural Networks | LSTM | pyhealth.models.sequence.lstm | Long short-term memory | 1997 | [7]
Neural Networks | GRU | pyhealth.models.sequence.gru | Gated recurrent unit | 2014 | [4]
Neural Networks | RETAIN | pyhealth.models.sequence.retain | RETAIN: An Interpretable Predictive Model for Healthcare using Reverse Time Attention Mechanism | 2016 | [5]
Neural Networks | Dipole | pyhealth.models.sequence.dipole | Dipole: Diagnosis Prediction in Healthcare via Attention-based Bidirectional Recurrent Neural Networks | 2017 | [8]
Neural Networks | tLSTM | pyhealth.models.sequence.tlstm | Patient Subtyping via Time-Aware LSTM Networks | 2017 | [1]
Neural Networks | RAIM | pyhealth.models.sequence.raim | RAIM: Recurrent Attentive and Intensive Model of Multimodal Patient Monitoring Data | 2018 | [9]
Neural Networks | StageNet | pyhealth.models.sequence.stagenet | StageNet: Stage-Aware Neural Networks for Health Risk Prediction | 2020 | [6]

Examples of running ML and DL models can be found below, or directly at \examples\learning_examples\

(iii) Evaluation Metrics:

Type | Abbr | Metric | Method
Binary Classification | average_precision_score | Compute micro/macro average precision (AP) from prediction scores | pyhealth.evaluation.xxx.get_avg_results
Binary Classification | roc_auc_score | Compute micro/macro ROC AUC score from prediction scores | pyhealth.evaluation.xxx.get_avg_results
Binary Classification | recall, precision, f1 | Get recall, precision, and f1 values | pyhealth.evaluation.xxx.get_predict_results

Metrics for multi-class classification are to be added.
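
For reference, the binary-classification metrics above correspond to standard scikit-learn routines; a minimal, PyHealth-independent sketch:

import numpy as np
from sklearn.metrics import (average_precision_score, roc_auc_score,
                             precision_recall_fscore_support)

# toy ground truth and prediction scores for a binary task
y_true = np.array([0, 1, 1, 0, 1])
y_score = np.array([0.2, 0.8, 0.6, 0.4, 0.9])

print('AP:', average_precision_score(y_true, y_score))
print('ROC AUC:', roc_auc_score(y_true, y_score))

# threshold the scores to obtain hard labels for precision/recall/f1
y_pred = (y_score >= 0.5).astype(int)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average='binary')
print('precision:', precision, 'recall:', recall, 'f1:', f1)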

(iv) Supported Tasks:

Type | Abbr | Description | Method
Multi-classification | phenotyping | Predict the diagnosis codes of a patient based on other information, e.g., procedures | \examples\data_generation\generate_phenotyping_xxx.py
Binary Classification | mortality prediction | Predict whether a patient may pass away during the hospital stay | \examples\data_generation\generate_mortality_xxx.py
Regression | ICU stay length pred | Forecast the length of an ICU stay | \examples\data_generation\generate_icu_length_xxx.py

Quick Start for Data Processing

We propose the idea of a standard template, a formalized schema for healthcare datasets. Ideally, as long as the data is converted into the template we define, the downstream task processing and the use of ML models become easy and standardized. The dataloaders for different datasets can be found in examples/data_generation. Using “examples/data_generation/dataloader_mimic_demo.py” as an example:

  1. First read in patient, admission, and event tables.

    import os
    from pyhealth.utils.utility import read_csv_to_df

    patient_df = read_csv_to_df(os.path.join('data', 'mimic-iii-clinical-database-demo-1.4', 'PATIENTS.csv'))
    admission_df = read_csv_to_df(os.path.join('data', 'mimic-iii-clinical-database-demo-1.4', 'ADMISSIONS.csv'))
    ...
  2. Then invoke the parallel program to parse the tables on n_jobs cores (via joblib).

    from joblib import Parallel, delayed
    from pyhealth.data.base_mimic import parallel_parse_tables

    all_results = Parallel(n_jobs=n_jobs, max_nbytes=None, verbose=True)(
        delayed(parallel_parse_tables)(
            patient_df=patient_df,
            admission_df=admission_df,
            icu_df=icu_df,
            event_df=event_df,
            event_mapping_df=event_mapping_df,
            duration=duration,
            save_dir=save_dir)
        for i in range(n_jobs))
  3. The processed sequential data will be saved in the prespecified directory.

    import json

    with open(patient_data_loc, 'w') as outfile:
        json.dump(patient_data_list, outfile)

The provided examples in PyHealth mainly focus on scanning the data tables into the schema we use and generating episode datasets. For instance, “examples/data_generation/dataloader_mimic_demo.py” demonstrates the basic procedure of processing the MIMIC-III demo dataset.

  4. The next step is to generate episode/sequence data for mortality prediction. See “examples/data_generation/generate_mortality_prediction_mimic_demo.py”

    with open(patient_data_loc, 'w') as outfile:
        json.dump(patient_data_list, outfile)

By this step, the dataset has been processed and X and y have been generated for the chosen prediction task. Note that the API is similar across most datasets. One may easily replicate this procedure by calling the data generation scripts in \examples\data_generation, as illustrated below. You may also modify the parameters in the scripts to generate customized datasets.
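
For example, for the MIMIC-III demo the procedure above corresponds to running the two scripts named earlier from the repository root (a sketch; adjust the paths to where your raw data lives):

python examples/data_generation/dataloader_mimic_demo.py
python examples/data_generation/generate_mortality_prediction_mimic_demo.py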

Preprocessed datasets are also available at \datasets\cms and \datasets\mimic.


Quick Start for Running Predictive Models

Before running the examples, you need the datasets. Please download them from the “datasets” folder of the GitHub repository. You can either unzip them manually or run our script “00_extract_data_run_before_learning.py”.

“examples/learning_models/example_sequence_gpu_mortality.py” demonstrates the basic API of using GRU for mortality prediction. Note that the API is consistent/similar across all other algorithms.

If you do not have the preprocessed datasets yet, download the \datasets folder (cms.zip and mimic.zip) from PyHealth repository, and run \examples\learning_models\extract_data_run_before_learning.py to prepare/unzip the datasets.

  1. Set up the datasets. X and y should be in x_data and y_data, respectively.

    # load a pre-processed dataset (here: the MIMIC-III mortality task)
    from pyhealth.data.expdata_generator import sequencedata as expdata_generator

    expdata_id = '2020.0810.data.mortality.mimic'
    cur_dataset = expdata_generator(exp_id=expdata_id)
    cur_dataset.get_exp_data(sel_task='mortality')
    cur_dataset.load_exp_data()
  2. Initialize an LSTM model; you may set its parameters, e.g., n_epoch, learning_rate, etc.

    # initialize the model for training
    from pyhealth.models.sequence.lstm import LSTM
    # set use_gpu=True below to enable GPU training
    expmodel_id = 'test.model.lstm.0001'
    clf = LSTM(expmodel_id=expmodel_id, n_batchsize=20, use_gpu=True, n_epoch=100)
    clf.fit(cur_dataset.train, cur_dataset.valid)
  3. Load the best checkpoint from training and predict on the test set.

    # load the best model for inference
    clf.load_model()
    clf.inference(cur_dataset.test)
    pred_results = clf.get_results()
  4. Evaluate the model. Multiple metrics are supported.

    # evaluate the model
    from pyhealth.evaluation.evaluator import func
    r = func(pred_results['hat_y'], pred_results['y'])
    print(r)

Algorithm Benchmark

A comparison of the implemented models will be made available later, together with a benchmark paper. TBA soon :)

Blueprint & Development Plan

The long-term goal of PyHealth is to become a comprehensive healthcare AI toolkit that supports not only EHR data but also medical images and clinical notes.

  • Compatibility with and support for OMOP-format datasets

  • Model persistence (save, load, and portability)

  • The release of a benchmark paper with PyHealth


Reference
