
Framework for Electronic Medical Records. A Python package for building models using EHR data.

Project description

FEMR

Framework for Electronic Medical Records

FEMR is a Python package for manipulating longitudinal EHR data for machine learning, with a focus on supporting the creation of foundation models and verifying their presumed benefits in healthcare. Such a framework is needed given the rapid adoption of large language models in healthcare and the current lack of rigorous ways to evaluate them.

The currently supported foundation model is MOTOR.

FEMR works with data that has been converted to the MEDS schema, a simple schema that supports a wide variety of EHR / claims datasets. Please see the MEDS documentation, and in particular its provided ETLs for help converting your data to MEDS.
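As a rough illustration of the nested form described above (the field names here are simplified assumptions; the MEDS documentation is the authoritative reference), a subject record looks something like this:

```python
# A toy, hand-written subject record in a simplified MEDS-like shape.
# Field names ('subject_id', 'events', 'time', 'code') are illustrative;
# consult the MEDS schema documentation for the real layout.
import datetime

subject = {
    "subject_id": 42,
    "events": [
        {"time": datetime.datetime(2019, 3, 1), "code": "ICD10CM/E11.9"},   # type 2 diabetes
        {"time": datetime.datetime(2020, 7, 15), "code": "RxNorm/860975"},  # metformin
    ],
}

# Every event is timestamped, which is what makes longitudinal modeling
# (labeling, featurization) possible downstream.
print(len(subject["events"]))  # 2
```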

FEMR helps users:

  1. Use ontologies to better understand / featurize medical codes
  2. Algorithmically label subject records based on structured data
  3. Generate tabular features from subject timelines for use with traditional gradient boosted tree models
  4. Train and finetune MOTOR-derived models for binary classification and prediction tasks.

We recommend that users start with our tutorial folder.
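FEMR ships labeler utilities for step 2 above (see the tutorials). As a plain-Python sketch of the underlying idea, using made-up codes and toy records in a simplified MEDS-like shape:

```python
import datetime

# Toy subject records (illustrative field names, not the exact MEDS layout).
subjects = [
    {"subject_id": 1, "events": [
        {"time": datetime.datetime(2020, 1, 1), "code": "ICD10CM/E11.9"},
        {"time": datetime.datetime(2021, 6, 1), "code": "ICD10CM/I21.9"},  # MI after index
    ]},
    {"subject_id": 2, "events": [
        {"time": datetime.datetime(2020, 2, 1), "code": "ICD10CM/J45.909"},
    ]},
]

def label_subject(subject, target_code, index_time):
    """Binary label: does the target code appear after the index time?"""
    return any(
        e["code"] == target_code and e["time"] > index_time
        for e in subject["events"]
    )

index = datetime.datetime(2021, 1, 1)
labels = {s["subject_id"]: label_subject(s, "ICD10CM/I21.9", index) for s in subjects}
print(labels)  # {1: True, 2: False}
```

Real labelers also handle censoring, prediction-time selection, and competing events; this only shows the structured-data-to-binary-label core.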

Installation

pip install femr

# If you are using deep learning, you also need to install xformers
#
# Note that xformers has some known issues with macOS.
# If you are using macOS you might also need to install llvm. See https://stackoverflow.com/questions/60005176/how-to-deal-with-clang-error-unsupported-option-fopenmp-on-travis
pip install xformers

Getting Started

The first step of using FEMR is to convert your subject data into MEDS, the standard input format expected by the FEMR codebase.

Note: FEMR currently only supports MEDS v3, so you will need to install MEDS v3 versions of the supporting packages, e.g. pip install meds-etl==0.3.11

The best way to do this is with the ETLs provided by MEDS.

OMOP Data

If you have OMOP CDM formatted data, follow these instructions:

  1. Download your OMOP dataset to [PATH_TO_SOURCE_OMOP].
  2. Convert OMOP => MEDS using the following:
# Convert OMOP => MEDS data format
meds_etl_omop [PATH_TO_SOURCE_OMOP] [PATH_TO_OUTPUT_MEDS]
  3. Use Hugging Face's Datasets library to load the dataset in Python
import datasets

dataset = datasets.Dataset.from_parquet(PATH_TO_OUTPUT_MEDS + 'data/*')

# Print dataset stats
print(dataset)
>>> Dataset({
>>>   features: ['subject_id', 'events'],
>>>   num_rows: 6732
>>> })

# Print number of events in first subject in dataset
print(len(dataset[0]['events']))
>>> 2287
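Once loaded, the event timelines can be turned into tabular features for gradient boosted trees (step 3 of the capabilities above). FEMR's featurizers handle ontologies, time windows, and normalization; as a minimal count-based sketch over toy timelines (codes and structure are made up for illustration):

```python
from collections import Counter

# Toy subject timelines, reduced to plain code sequences for brevity.
timelines = {
    1: ["ICD10CM/E11.9", "RxNorm/860975", "ICD10CM/E11.9"],
    2: ["ICD10CM/J45.909"],
}

# Fixed vocabulary over all observed codes, so every subject
# maps to a feature vector of the same length.
vocab = sorted({code for codes in timelines.values() for code in codes})

rows = []
for subject_id, codes in sorted(timelines.items()):
    counts = Counter(codes)
    rows.append([counts.get(code, 0) for code in vocab])

print(vocab)  # ['ICD10CM/E11.9', 'ICD10CM/J45.909', 'RxNorm/860975']
print(rows)   # [[2, 0, 1], [0, 1, 0]]
```

Each row is a per-subject count vector that can be fed directly to a tree model such as XGBoost or LightGBM.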

Stanford STARR-OMOP Data

If you are using the STARR-OMOP dataset from Stanford (which uses the OMOP CDM), we add an initial Stanford-specific preprocessing step; otherwise the process is identical to the OMOP Data section. Follow these instructions:

  1. Download your STARR-OMOP dataset to [PATH_TO_SOURCE_OMOP].
  2. Convert STARR-OMOP => MEDS using the following:
# Convert OMOP => MEDS data format
meds_etl_omop [PATH_TO_SOURCE_OMOP] [PATH_TO_OUTPUT_MEDS]_raw

# Apply Stanford fixes
femr_stanford_omop_fixer [PATH_TO_OUTPUT_MEDS]_raw [PATH_TO_OUTPUT_MEDS]
  3. Use Hugging Face's Datasets library to load the dataset in Python
import datasets

dataset = datasets.Dataset.from_parquet(PATH_TO_OUTPUT_MEDS + 'data/*')

# Print dataset stats
print(dataset)
>>> Dataset({
>>>   features: ['subject_id', 'events'],
>>>   num_rows: 6732
>>> })

# Print number of events in first subject in dataset
print(len(dataset[0]['events']))
>>> 2287

Development

The following guides are for developers who want to contribute to FEMR.

Precommit checks

Before committing, please run the following commands to ensure that your code is formatted correctly and passes all tests.

Installation

conda install pre-commit pytest -y
pre-commit install

Running

Test Functions

pytest tests

Formatting Checks

pre-commit run --all-files



Download files

Download the file for your platform.

Source Distribution

femr-0.2.4.tar.gz (1.7 MB)

Uploaded Source

Built Distribution

femr-0.2.4-py3-none-any.whl (60.6 kB)

Uploaded Python 3

File details

Details for the file femr-0.2.4.tar.gz.

File metadata

  • Download URL: femr-0.2.4.tar.gz
  • Upload date:
  • Size: 1.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for femr-0.2.4.tar.gz
Algorithm Hash digest
SHA256 3e8e4588bdf68b6d1a9224c6e90eb23b00e7e312a766a66fd180e13486f1b34e
MD5 522b3349c8288564e9ddb77ee3036b9c
BLAKE2b-256 3f9c0a2d954ff87cfae82592d8dd5a063e0817560a23907db68758e37e4d111d


Provenance

The following attestation bundles were made for femr-0.2.4.tar.gz:

Publisher: build.yaml on som-shahlab/femr

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file femr-0.2.4-py3-none-any.whl.

File metadata

  • Download URL: femr-0.2.4-py3-none-any.whl
  • Upload date:
  • Size: 60.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for femr-0.2.4-py3-none-any.whl
Algorithm Hash digest
SHA256 525d56c06122167b1d67ccfdb6607b76e01093ea1e8842d6047003a48f81247d
MD5 b78ea9ddefdfaf85817fade25a6b6550
BLAKE2b-256 8a31ee09ff958784046800cd82885e42c1238263fab369ddfc11576210621dfc


Provenance

The following attestation bundles were made for femr-0.2.4-py3-none-any.whl:

Publisher: build.yaml on som-shahlab/femr

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
