
Package for working with GWAS summary statistics

Project description


Patch notes

13-05-2020 (v0.3.1)
  • Fixed an issue where reading data would fail when values in the n, bp, or chr columns were NA. An attempt is now made to impute these values; if too many are missing, a ValueError is raised.
12-05-2020 (v0.3)
  • Added fig and ax arguments to pysumstats.plot.qqplot and pysumstats.plot.manhattan to enable plotting to existing figure and axis.
  • Added pysumstats.plot.pzplot, to visually compare Z-values from B/SE to Z-values calculated from the P-value.
  • Added pysumstats.plot.afplot, to plot allele frequency differences between summary statistics.
  • Added pysumstats.plot.zzplot, to plot differences in Z-values between summary statistics.
  • Added qqplot, manhattan, pzplot, afplot, zzplot functions to MergedSumStats object.
  • Added pzplot function to SumStats object.
  • Added plot_all functions to SumStats and MergedSumStats objects to automatically generate all possible plots for the object.
11-05-2020 (v0.2.3)
  • Added return statement to MergedSumStats.merge() when inplace=False and merging with another MergedSumStats.
  • Added docstrings to base, mergedsumstats, sumstats and utils.
  • Added docs
  • Fixed import errors and added manhattan and qq functions to the SumStats class.
08-05-2020 (v0.2)
08-05-2020 (v0.1)
  • Adapted to be a package rather than a module.
  • Added low_ram argument to SumStats to read/write data to disk rather than RAM, in case of memory issues.

Description

A Python package for working with GWAS summary statistics data.
This package is designed to make it easy to read summary statistics, perform QC, merge summary statistics, and perform meta-analyses.
Meta-analysis can be performed with .meta_analyze(), using inverse-variance-weighted or sample-size-weighted methods.
GWAMA as described in Baselmans et al. (2019) can be performed using the .gwama() function on merged summary statistics.
The plotting module uses matplotlib.pyplot for generating figures, so the functions are generally compatible with matplotlib.pyplot colors, and with Figure and Axes objects.
Warning: merging with low_ram enabled is still highly experimental.
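
As a rough illustration of what these weighting schemes do (a generic sketch with made-up numbers, not pysumstats internals):

```python
import numpy as np

# Generic inverse-variance weighted (IVW) meta-analysis of one SNP across
# two hypothetical GWASs: each study gets weight w_i = 1 / se_i^2.
b = np.array([0.12, 0.08])   # per-study effect sizes (made up)
se = np.array([0.03, 0.05])  # per-study standard errors (made up)

w = 1 / se ** 2
b_meta = np.sum(w * b) / np.sum(w)  # pooled effect estimate
se_meta = np.sqrt(1 / np.sum(w))    # pooled standard error
```

Sample-size weighting instead combines per-study z-scores with weights proportional to the square root of each study's sample size.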

Reference

Using the pysumstats package for a publication, or something similar? That is awesome!
There is no publication attached to this package, and I am not going to force anyone to cite me or make me a co-author; I want this to remain easily accessible. I would, however, greatly appreciate a link to this GitHub repository, or a mention in the acknowledgements.
If you have any questions, want to help add methods, or want to let me know you are planning a publication with this, you can get in touch via the PyPI page of this project.

Installation

This package was made for Python 3.7. Clone it directly from GitHub, or install it with:

pip3 install pysumstats

Usage

import pysumstats as sumstats

Reading files

s1 = sumstats.SumStats("sumstats1.csv.gz", phenotype='GWASsummary1')

Reading data without a sample size column (you will have to specify the GWAS sample size manually):

s2 = sumstats.SumStats("sumstats2.txt.gz", phenotype='GWASsummary2', gwas_n=350492)

Reading data with column names that are not automatically recognized:

s3 = sumstats.SumStats("sumstats3.csv", phenotype='GWASsummary3',
                              column_names={
                                    'rsid': 'weird_name_for_rsid',
                                    'chr': 'weird_name_for_chr',
                                    'bp': 'weird_name_for_bp',
                                    'ea': 'weird_name_for_ea',
                                    'oa': 'weird_name_for_oa',
                                    'maf': 'weird_name_for_maf',
                                    'b': 'weird_name_for_b',
                                    'se': 'weird_name_for_se',
                                    'p': 'weird_name_for_p',
                                    'hwe': 'weird_name_for_p_hwe',
                                    'info': 'weird_name_for_info',
                                    'n': 'weird_name_for_n',
                                    'eaf': 'weird_name_for_eaf',
                                    'oaf': 'weird_name_for_oaf'})
Performing QC:

s1.qc(maf=.01)
s2.qc(maf=.01, hwe=1e-6, info=.9)
s3.qc()  # MAF .01 is the default

Merging sumstats (the low_ram option is still experimental, so be careful with that):

merge1 = s1.merge(s2)

Meta-analysis:

n_weighted_meta = merge1.meta_analyze(name='meta1', method='samplesize')  # N-weighted meta-analysis
ivw_meta = merge1.meta_analyze(name='meta1', method='ivw')  # Standard inverse-variance weighted meta-analysis
gwama = merge1.gwama(name='meta1', method='ivw')  # GWAMA as described in Baselmans et al. (2019)

Additionally supports adding SNP heritabilities as weights:

exc_meta = exc.gwama(h2_snp={'ntr_exc': .01, 'ukb_ssoe': .02}, name='exc', method='ivw')

And your own covariance matrix (called cov_Z in most R scripts):

# Either read it from a file:
import pandas as pd
cov_z = pd.read_csv('my_cov_z.csv', index_col=0)  # Note: should be a pandas DataFrame with column and index names equal to your phenotypes

# Or generate it from a phenotype file yourself:
phenotypes = pd.read_csv('my_phenotype_file.csv')
cov_z = sumstats.cov_matrix_from_phenotype_file(phenotypes, phenotypes=['GWASsummary1', 'GWASsummary2'])

gwama = exc.gwama(cov_matrix=cov_z, h2_snp={'GWASsummary1': .01, 'GWASsummary2': .02}, name='meta1', method='ivw')
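
For reference, cov_z is expected to be a symmetric pandas DataFrame whose index and columns match your phenotype names; a minimal hand-built sketch (the off-diagonal value here is made up):

```python
import pandas as pd

# Hypothetical layout for cov_z: symmetric, ones on the diagonal,
# phenotype names as both index and columns.
phenos = ['GWASsummary1', 'GWASsummary2']
cov_z = pd.DataFrame([[1.00, 0.15],
                      [0.15, 1.00]],
                     index=phenos, columns=phenos)
```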
See a summary of the result

gwama.describe()

See head of the data

gwama.head()

See head of all chromosomes

gwama.head(n_chromosomes=23)

QQ and Manhattan plots of the result:

gwama.manhattan(filename='meta_manhattan.png')
gwama.qqplot(filename='meta_qq.png')

Save the result as csv

exc.save('exc_sumstats.csv')

Save the result as a pickle file (way faster to save and load back into Python)

exc.save('exc_sumstats.pickle')

Merge gwama results with another file:

merged = gwama.merge(s3)

Save prepped files for MR analysis in R:
merged.prep_for_mr(exposure='GWASsummary3', outcome='meta1',
                   filename=['GWAS3-Meta.csv', 'Meta-GWAS3.csv'],
                   p_cutoff=5e-8, bidirectional=True, index=False)

The resulting files will have the following column names, per specification of the MendelianRandomization package in R:

rsid chr bp exposure.A1 exposure.A2 outcome.A1 outcome.A2 exposure.se exposure.b outcome.se outcome.b
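
As a toy illustration of that layout (made-up numbers; the Wald-ratio calculation below is generic MR arithmetic, not a pysumstats function):

```python
import pandas as pd

# One hypothetical SNP in the exposure/outcome column layout described above.
mr = pd.DataFrame({
    'rsid': ['rs78948828'],
    'exposure.b': [0.10], 'exposure.se': [0.02],
    'outcome.b': [0.05], 'outcome.se': [0.01],
})
# Per-SNP Wald ratio: causal effect estimate = outcome effect / exposure effect.
wald_ratio = mr['outcome.b'] / mr['exposure.b']
```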

Some other stuff:

# See column names of the file
exc.columns

# SumStats support for standard indexing is growing:
exc[0]  # Get the full output of the first SNP
exc[:10]  # Get the full output of the first 10 SNPs
exc[:10, 'p']  # Get the p value of the first 10 SNPs
exc['p']  # Get the p values of all SNPs
exc['rs78948828']  # Get the full output of 1 specific rsid
exc[['rs78948828', 'rs6057089', 'rs55957973']]  # Get the full output of multiple specific rsids
exc[['rs78948828', 'rs6057089', 'rs55957973'], 'p']  # Get the p-value for specific rsids

# If for whatever reason you want to do stuff with each SNP individually you can also loop over the entire file
for snp_output in exc:
    if snp_output['p'] < 5e-8:
        print('Yay significant SNP!')
    # do something


# If you only want to loop over some specific columns, you can
for rsid, b, se, p in exc[['rsid', 'b', 'se', 'p']].values:
    if p < 5e-8:
        print('Yay significant SNP!')

