AoUPRS
Overview
AoUPRS is a Python module designed for calculating Polygenic Risk Scores (PRS) specific to the All of Us study. This tool leverages Hail, a scalable framework for exploring and analyzing genomic data, to provide efficient PRS calculations.
AoUPRS provides two different approaches for PRS calculation (see the publication, currently under review, for more details):
Approach 1: Using Hail Dense MatrixTable (MT)
Approach 2: Using Hail Sparse Variant Dataset (VDS)
Installation
To install AoUPRS from GitHub, run the following command:
pip install AoUPRS
Dependencies
AoUPRS requires the following Python packages:
- hail
- gcsfs
- pandas
These dependencies will be installed automatically when you install AoUPRS.
Usage
- Set up your AoU cloud analysis environment by selecting the "Hail Genomic Analysis" environment and allocating the required resources.
How to set up a Dataproc cluster:
- Hail MT: Requires more resources. From our experience, you need to allocate 300 workers. It is expensive, but you end up saving time and money because the kernel crashes with fewer resources.
Cost when running: $72.91 per hour
Main node: 4 CPUs, 15 GB RAM, 150 GB disk
Workers (300): 4 CPUs, 15 GB RAM, 150 GB disk
- Hail VDS: The default resources will mostly suffice, but if you have a large score and want to run it faster, use preemptible workers, which are much cheaper.
Cost when running: $0.73 per hour
Main node: 4 CPUs, 15 GB RAM, 150 GB disk
Workers (2): 4 CPUs, 15 GB RAM, 150 GB disk
- AoUPRS gives you the option to save the output files locally or to the cloud. We recommend always saving to the cloud, because local files are deleted when the Hail environment is deleted.
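If you do write an output file locally first, one way to copy it into the workspace bucket before the environment goes away is sketched below. The file name 'PGS######_prs_scores.csv' and the destination folder are only illustrative, not AoUPRS's actual output names.
# Illustrative only: copy a locally saved output file to the workspace bucket
import os
bucket = os.getenv("WORKSPACE_BUCKET")
!gsutil cp PGS######_prs_scores.csv {bucket}/AoUPRS_outputs/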
- If you wish to query the Variant Annotation Table before calculating a PRS from the Hail VDS, so that only variants present in the callset are included, follow this notebook.
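A rough sketch of that idea is shown below, using pandas. The file 'vat_variants.csv' (a per-variant export from the Variant Annotation Table query) and its column names are assumptions for illustration; the linked notebook is the authoritative reference.
# Illustrative only: keep weight-table variants that appear in a list of
# callset variants exported from the Variant Annotation Table query
import pandas as pd
weights = pd.read_csv('PGS######_weight_table.csv')
callset_variants = pd.read_csv('vat_variants.csv')  # assumed columns: chr, bp
# Make sure 'chr' and 'bp' have matching dtypes before merging
weights['chr'] = weights['chr'].astype(str)
callset_variants['chr'] = callset_variants['chr'].astype(str)
filtered = weights.merge(callset_variants[['chr', 'bp']].drop_duplicates(),
                         on=['chr', 'bp'], how='inner')
filtered.to_csv('PGS######_weight_table_in_callset.csv', index=False)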
- Importing the Packages
To use AoUPRS, first import the required packages:
import AoUPRS
import os
import pandas as pd
import numpy as np
from datetime import datetime
import gcsfs
import glob
import hail as hl
- Initiate Hail
hl.init(tmp_dir='hail_temp/', default_reference='GRCh38')
- Define Bucket
bucket = os.getenv("WORKSPACE_BUCKET")
- Read Hail MT / VDS
# Hail MT
mt_wgs_path = os.getenv("WGS_ACAF_THRESHOLD_MULTI_HAIL_PATH")
mt = hl.read_matrix_table(mt_wgs_path)
# Hail VDS
vds_srwgs_path = os.getenv("WGS_VDS_PATH")
vds = hl.vds.read_vds(vds_srwgs_path)
- Drop Flagged srWGS samples
AoU provides a table listing samples that are flagged as part of the sample outlier QC for the srWGS SNP and Indel joint callset. Read more: How the All of Us Genomic data are organized.
# Read flagged samples
flagged_samples_path = "gs://fc-aou-datasets-controlled/v7/wgs/short_read/snpindel/aux/relatedness/relatedness_flagged_samples.tsv"
# Save flagged samples locally
!gsutil -u $$GOOGLE_PROJECT cat $flagged_samples_path > flagged_samples.csv
# Import flagged samples into a hail table
flagged_samples = hl.import_table(flagged_samples_path, key='sample_id')
# Drop flagged sample from main Hail
## If Hail MT
mt = mt.anti_join_cols(flagged_samples)
## If Hail VDS:
vds_no_flag = hl.vds.filter_samples(vds, flagged_samples, keep=False)
- Define the sample
# For MT:
## 'sample_ids' is a pandas DataFrame with a 'person_id' column (your study cohort)
## Convert the person IDs to a Python set of strings
subset_sample_ids_set = set(map(str, sample_ids['person_id'].tolist()))
## Filter samples
mt = mt.filter_cols(hl.literal(subset_sample_ids_set).contains(mt.s))
# For VDS:
## Import the sample IDs (a CSV with a 'person_id' column) as a Hail table
sample_needed_ht = hl.import_table('sample_ids.csv', delimiter=',', key='person_id')
## Filter samples
vds_subset = hl.vds.filter_samples(vds_no_flag, sample_needed_ht, keep=True)
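As an optional sanity check (not part of AoUPRS itself), you can count how many samples remain after subsetting; the expected count is simply the size of your cohort.
# Optional check: confirm the subset contains the expected number of samples
## If Hail MT
print(mt.count_cols())
## If Hail VDS
print(vds_subset.n_samples())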
- Prepare PRS Weight Table
The weight table must have these columns:
["chr", "bp", "effect_allele", "noneffect_allele", "weight"]
The table below shows an example of a PRS weight table:

chr | bp | effect_allele | noneffect_allele | weight
---|---|---|---|---
2 | 202881162 | C | T | 1.57E-01
14 | 996676 | C | T | 6.77E-02
2 | 202881162 | C | T | 1.57E-01
14 | 99667605 | C | T | 6.77E-02
6 | 12903725 | G | A | 1.13E-01
13 | 110308365 | G | A | 6.77E-02
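If your starting score file does not already use these column names, you can rename them with pandas before running the steps below. This sketch assumes a PGS Catalog harmonized scoring file with hm_chr, hm_pos, effect_allele, other_allele, and effect_weight columns; the input file name and the column mapping are illustrative only, so adapt them to your own file.
# Illustrative only: map a PGS Catalog harmonized scoring file to the columns
# AoUPRS expects (file names and column mapping are assumptions)
import pandas as pd
raw = pd.read_csv('PGS######_hmPOS_GRCh38.txt', sep='\t', comment='#')
weights = raw.rename(columns={'hm_chr': 'chr',
                              'hm_pos': 'bp',
                              'other_allele': 'noneffect_allele',
                              'effect_weight': 'weight'})
weights = weights[['chr', 'bp', 'effect_allele', 'noneffect_allele', 'weight']]
weights.to_csv('PGS######_table.csv', index=False)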
# Prepare the PRS weight table using the function 'prepare_prs_table'
# (replace 'PGS######' throughout with your actual PGS score identifier)
AoUPRS.prepare_prs_table('PGS######_table.csv',
'PGS######_weight_table.csv', bucket=bucket)
# Read PRS weight table
with gcsfs.GCSFileSystem().open('PGS######_weight_table.csv', 'rb') as gcs_file:
    PGS######_weights_table = pd.read_csv(gcs_file)
- Calculate PRS
# Define paths
prs_identifier = 'PGS######'
pgs_weight_path = 'PGS######_weight_table.csv'
output_path = 'PGS######'
# Calculate PRS
## MT:
AoUPRS.calculate_prs_mt(mt, prs_identifier, pgs_weight_path, output_path, bucket=None, save_found_variants=False)
## VDS:
AoUPRS.calculate_prs_vds(vds_subset, prs_identifier, pgs_weight_path, output_path, bucket=bucket, save_found_variants=True)
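Once the calculation finishes, you will usually want to pull the per-person scores back into pandas. The output file name ('PGS######_prs_scores.csv') and the score column name ('prs') below are assumptions for illustration; check your output directory for the actual names AoUPRS wrote.
# Illustrative only: read the PRS output back from the bucket and standardize it
# (the file name and 'prs' column are assumptions, not guaranteed AoUPRS names)
import gcsfs
import pandas as pd
fs = gcsfs.GCSFileSystem()
with fs.open(f'{bucket}/PGS######/PGS######_prs_scores.csv', 'rb') as f:
    scores = pd.read_csv(f)
scores['prs_z'] = (scores['prs'] - scores['prs'].mean()) / scores['prs'].std()
print(scores.head())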
Example Notebooks
For detailed examples, refer to the provided Jupyter notebooks in the notebooks directory. These notebooks demonstrate how to use the AoUPRS package to calculate PRS step by step.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Author
Ahmed Khattab