
ProteinNPT: Improving Protein Property Prediction and Design with Non-Parametric Transformers


ProteinNPT

This is the official code repository for the paper "ProteinNPT: Improving Protein Property Prediction and Design with Non-Parametric Transformers"

Overview

ProteinNPT is a semi-supervised, conditional pseudo-generative model for protein property prediction and design. It is a variant of Non-Parametric Transformers that learns a joint representation of full input batches of protein sequences and their associated property labels. It can be used to predict single or multiple protein properties, generate novel sequences via conditional sampling, and support iterative protein redesign cycles via Bayesian optimization.

Setup

Step 1: Download the ProteinNPT data files

curl -o ProteinNPT_data.zip https://marks.hms.harvard.edu/ProteinNPT/ProteinNPT_data.zip
unzip ProteinNPT_data.zip && rm ProteinNPT_data.zip

We recommend saving this file in a location where disk space is not a concern. The above file will use 8.2GB when unzipped, but much more space will be needed as you download pre-trained model checkpoints (~10GB) and save sequence embeddings in subsequent steps (~1TB for all MSA Transformer embeddings).

Step 2: Configure environment

Edit lines 2 & 3 of the setup.sh bash script under the scripts folder with the locations of 1) the unzipped archive downloaded in Step 1 (the data_path) and 2) your local copy of this GitHub repository (the repo_path); a minimal example of these edits is shown after the list below. Then run the setup.sh script, which will sequentially:

  • Create the conda environment and set up the repository locally
  • Download the model checkpoints to the relevant location in the data_path
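
For reference, once edited, the top of setup.sh could look like the following minimal sketch (the paths are placeholders, and the exact variable syntax may differ slightly in your copy of the script):

# setup.sh (lines 2 & 3): point these at your local paths
data_path="/path/to/ProteinNPT_data"   # unzipped archive downloaded in Step 1
repo_path="/path/to/ProteinNPT"        # local clone of this repository

Once both lines are set, run the script from the repository root, e.g. bash scripts/setup.sh.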

Step 3: Edit config file

Edit lines 2 & 3 of the config.sh bash script with the data_path and repo_path (identical to lines 2 & 3 from setup.sh).
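
config.sh should then mirror those same two values, for example (placeholder paths, matching the sketch above):

# config.sh (lines 2 & 3): must match the values used in setup.sh
data_path="/path/to/ProteinNPT_data"
repo_path="/path/to/ProteinNPT"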

Usage

Step 1: Extract sequence embeddings (optional)

Run embeddings_subs.sh (or embeddings_indels.sh) to create sequence embeddings with the pretrained protein language model of interest, for the desired DMS assays. This step is optional (embeddings are otherwise computed on the fly) and requires sufficient disk space to store the pre-computed embeddings, but it significantly reduces run time and memory requirements during training (especially for ProteinNPT).
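
For example, after selecting the desired assay and embedding model at the top of the script (the exact variable names inside embeddings_subs.sh may differ from this sketch), you would launch it from the repository root:

# Hypothetical invocation; assumes the script lives in the same scripts folder as setup.sh
bash scripts/embeddings_subs.sh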

Step 2: Compute zero-shot fitness predictions (optional)

Run zero_shot_fitness_subs.sh (or zero_shot_fitness_indels.sh) to compute zero-shot fitness predictions with the relevant pretrained protein models.

Adjust the following variables as needed:

  • assay_index (index of desired DMS assay in the ProteinGym reference_file under utils/proteingym)
  • model_type (name of the pre-trained model used to compute the zero-shot predictions)
  • model_location (location of the pre-trained model -- you may use the relevant variables defined in config.sh for convenience)

Note that:

  1. We provide all zero-shot predictions for the ProteinGym DMS assays in ProteinNPT_data.zip, so you do not need to recompute them if you are working with these same assays.
  2. We have found that leveraging zero-shot fitness predictions as an additional covariate or auxiliary label generally helps performance, especially when extrapolating to positions not seen during training. However, these zero-shot predictions are not strictly required to run ProteinNPT or the various baselines, and they may be less relevant when predicting properties that differ from fitness.
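
As a sketch of the edits described above (variable names taken from the list of adjustable variables; values are placeholders rather than the script's actual defaults):

# Hypothetical edit of the variables at the top of zero_shot_fitness_subs.sh
assay_index=0                                 # row index of the DMS assay in the ProteinGym reference file (utils/proteingym)
model_type="MSA_Transformer"                  # placeholder name of the pre-trained model to score with
model_location="$MSA_Transformer_location"    # placeholder for a path variable defined in config.sh

bash scripts/zero_shot_fitness_subs.sh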

Step 3: Train ProteinNPT models (or baselines)

Run train_subs.sh (or train_indels.sh) to train the desired ProteinNPT or baseline models.

Adjust the following variables as needed:

  • assay_index (index of desired DMS assay in the ProteinGym reference_file under utils/proteingym)
  • model_config_location (config file for ProteinNPT or baseline model -- you may use the relevant variables defined in config.sh for convenience)
  • sequence_embeddings_folder (location of saved sequence embeddings on disk -- you may use the relevant variables defined in config.sh for convenience)
  • fold_variable_name (type of cross-validation scheme used for training -- to be chosen among fold_random_5, fold_contiguous_5, or fold_modulo_5)
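
For example (placeholder values; the exact way train_subs.sh reads these variables may differ in your copy), a single-assay ProteinNPT training run could be configured as follows:

# Hypothetical edit of the variables at the top of train_subs.sh
assay_index=0                                        # row index in the ProteinGym reference file (utils/proteingym)
model_config_location="$ProteinNPT_config_location"  # placeholder for a model config path defined in config.sh
sequence_embeddings_folder="$embeddings_folder"      # placeholder; folder with embeddings saved in Usage Step 1 (if pre-computed)
fold_variable_name="fold_random_5"                   # or fold_contiguous_5 / fold_modulo_5

bash scripts/train_subs.sh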

We also provide an example script, train_multi_objectives.sh, to train a ProteinNPT or baseline model to predict several properties simultaneously.

License

This project is available under the MIT license found in the LICENSE file in this GitHub repository.

Acknowledgements

The utils in this codebase leverage code from:

References

If you use this codebase, please cite the following paper:

@article {Notin2023.12.06.570473,
	author = {Pascal Notin and Ruben Weitzman and Debora S Marks and Yarin Gal},
	title = {ProteinNPT: Improving Protein Property Prediction and Design with Non-Parametric Transformers},
	elocation-id = {2023.12.06.570473},
	year = {2023},
	doi = {10.1101/2023.12.06.570473},
	publisher = {Cold Spring Harbor Laboratory},
	URL = {https://www.biorxiv.org/content/early/2023/12/07/2023.12.06.570473},
	eprint = {https://www.biorxiv.org/content/early/2023/12/07/2023.12.06.570473.full.pdf},
	journal = {bioRxiv}
}

