
Evolutionary Scale Modeling (esm): Pretrained language models for proteins. From Facebook AI Research.

Project description

Evolutionary Scale Modeling

This repository contains code and pre-trained weights for Transformer protein language models from Facebook AI Research, including our state-of-the-art ESM-1b and MSA Transformer, as well as ESM-1v for predicting variant effects and ESM-IF1 for inverse folding. Transformer protein language models were introduced in our paper, "Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences" (Rives et al., 2019).

ESM-1b outperforms all tested single-sequence protein language models across a range of structure prediction tasks. The MSA Transformer (ESM-MSA-1) can improve performance further by leveraging MSA information.

Citation
@article{rives2019biological,
  author={Rives, Alexander and Meier, Joshua and Sercu, Tom and Goyal, Siddharth and Lin, Zeming and Liu, Jason and Guo, Demi and Ott, Myle and Zitnick, C. Lawrence and Ma, Jerry and Fergus, Rob},
  title={Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences},
  year={2019},
  doi={10.1101/622803},
  url={https://www.biorxiv.org/content/10.1101/622803v4},
  journal={bioRxiv}
}
What's New
  • April 2022: New inverse folding model ESM-IF1 released, trained on CATH and UniRef50 predicted structures.
  • August 2021: Added flexibility to tokenizer to allow for spaces and special tokens (like <mask>) in sequence.
  • July 2021: New pre-trained model ESM-1v released, trained on UniRef90 (see Meier et al. 2021).
  • July 2021: New MSA Transformer released, with a minor fix in the row positional embeddings (ESM-MSA-1b).
  • Feb 2021: MSA Transformer added (see Rao et al. 2021). Example usage in notebook.
  • Dec 2020: Self-Attention Contacts for all pre-trained models (see Rao et al. 2020).
  • Dec 2020: Added new pre-trained model ESM-1b (see Rives et al. 2019 Appendix B).
  • Dec 2020: ESM Structural Split Dataset (see Rives et al. 2019 Appendix A.10).

Main models you should use

Shorthand | esm.pretrained. | Dataset | Description
ESM-1b | esm1b_t33_650M_UR50S() | UR50 | SOTA general-purpose protein language model. Can be used to predict structure, function and other protein properties directly from individual sequences. Released with Rives et al. 2019 (Dec 2020 update).
ESM-MSA-1b | esm_msa1b_t12_100M_UR50S() | UR50 + MSA | MSA Transformer language model. Can be used to extract embeddings from an MSA. Enables SOTA inference of structure. Released with Rao et al. 2021 (ICML'21 version, June 2021).
ESM-1v | esm1v_t33_650M_UR90S_1() ... esm1v_t33_650M_UR90S_5() | UR90 | Language model specialized for prediction of variant effects. Enables SOTA zero-shot prediction of the functional effects of sequence variations. Same architecture as ESM-1b, but trained on UniRef90. Released with Meier et al. 2021.
ESM-IF1 | esm_if1_gvp4_t16_142M_UR50() | CATH + UR50 | Inverse folding model. Can be used to design sequences for given structures, or to predict functional effects of sequence variation for given structures. Enables SOTA fixed backbone sequence design. Released with Hsu et al. 2022.

For a complete list of available models, with details and release notes, see Pre-trained Models.

Comparison to related works

Model | Unsupervised contacts (Large valid) | Unsupervised contacts (CASP13-FM) | Unsupervised contacts (CAMEO) | Supervised contacts (CASP13-FM) | Supervised contacts (CAMEO) | SSP (CB513)
Gremlin (Potts) | 39.3 | 16.9 | 24.0 | 40.1 | 47.3 |
UniRep | | | | 11.2 | 17.8 | 58.4
SeqVec | | | | 13.8 | 22.5 | 62.1
TAPE | 11.2 | 5.5 | 6.8 | 12.3 | 15.9 | 58.0
ProtBert-BFD | 34.1 | 13.5 | 23.9 | 24.7 | 37.0 | 70.0
Prot-T5-XL-BFD | 35.6 | 16.5 | 25.9 | 25.0 | 40.8 | 71.4 ± 0.3
ESM-1 | 33.7 | 13.6 | 21.4 | (todo) | (todo) | 69.2
ESM-1b | 41.1 | 17.0 | 30.9 | 28.2 | 44.4 | 71.6 ± 0.1
ESM-1v | 35.3 | 14.2 | 24.4 | | |
ESM-MSA-1b | 57.4 | 44.8 | 43.5 | 54.6 | 55.8 | 73.4 ± 0.3

Comparison to related protein language models on structure prediction tasks.

  • All contact numbers are the top-L long-range (top-L,LR) precision, where long range means a sequence separation of at least 24 residues (a minimal sketch of the metric follows this list).
  • For unsupervised contact prediction, a sparse linear combination of the attention heads is used to directly predict protein contacts, fitted with logistic regression on 20 structures. For more details on the method, see Rao et al. 2020.
  • All supervised contact prediction results use the same ResNet (32 layers) and trRosetta training data; cf. Rao et al. 2021.
  • (SSP) Secondary structure Q8 accuracy on CB513, with the transformer fine-tuned with a convolution + LSTM head.
  • Direct coupling analysis methods (Gremlin, mfDCA, Psicov) and ESM-MSA-1 use the trRosetta MSAs, while the other methods predict from a single sequence.
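
For concreteness, here is a minimal sketch of the top-L long-range precision for a single protein. This is our illustration, not the evaluation code behind the table; the 8 Å Cβ-Cβ contact definition is an assumption (it is the usual convention in these papers).

import torch

def top_l_lr_precision(pred: torch.Tensor, true: torch.Tensor, min_sep: int = 24) -> float:
    """pred: (L, L) predicted contact scores; true: (L, L) boolean contact map
    (assumed contact definition: Cβ-Cβ distance < 8 Å).
    Long range = sequence separation of at least min_sep residues."""
    L = pred.size(0)
    i, j = torch.triu_indices(L, L, offset=min_sep)      # long-range residue pairs
    top = pred[i, j].argsort(descending=True)[:L]        # top-L highest-scoring pairs
    return true[i[top], j[top]].float().mean().item()    # fraction that are true contacts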

Usage

Quick Start

As a prerequisite, you must have PyTorch installed to use this repository.

You can use this one-liner for installation, using the latest release of esm:

$ pip install fair-esm  # latest release, OR:
$ pip install git+https://github.com/facebookresearch/esm.git  # bleeding edge, current repo main branch

We also support PyTorch Hub, which removes the need to clone and/or install this repository yourself:

import torch
model, alphabet = torch.hub.load("facebookresearch/esm:main", "esm1b_t33_650M_UR50S")

After pip install, you can load and use a pretrained model as follows:

import torch
import esm

# Load ESM-1b model
model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
batch_converter = alphabet.get_batch_converter()
model.eval()  # disables dropout for deterministic results

# Prepare data (first 2 sequences from ESMStructuralSplitDataset superfamily / 4)
data = [
    ("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"),
    ("protein2", "KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
    ("protein2 with mask","KALTARQQEVFDLIRD<mask>ISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
    ("protein3",  "K A <mask> I S Q"),
]
batch_labels, batch_strs, batch_tokens = batch_converter(data)

# Extract per-residue representations (on CPU)
with torch.no_grad():
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)
token_representations = results["representations"][33]

# Generate per-sequence representations via averaging
# NOTE: token 0 is always a beginning-of-sequence token, so the first residue is token 1.
sequence_representations = []
for i, (_, seq) in enumerate(data):
    sequence_representations.append(token_representations[i, 1 : len(seq) + 1].mean(0))

# Look at the unsupervised self-attention map contact predictions
import matplotlib.pyplot as plt
for (_, seq), attention_contacts in zip(data, results["contacts"]):
    plt.matshow(attention_contacts[: len(seq), : len(seq)])
    plt.title(seq)
    plt.show()

Compute embeddings in bulk from FASTA

We provide a script that efficiently extracts embeddings in bulk from a FASTA file. A CUDA device is optional and will be auto-detected. The following command extracts embeddings for a FASTA file from the ESM-1b model (layers 0, 32, and the final layer 33; mean and per-token representations):

$ python scripts/extract.py esm1b_t33_650M_UR50S examples/data/some_proteins.fasta examples/data/some_proteins_emb_esm1b/ \
    --repr_layers 0 32 33 --include mean per_tok

Directory some_proteins_emb_esm1b/ now contains one .pt file per FASTA sequence; use torch.load() to load them (a minimal loading example follows this list). scripts/extract.py has flags that determine what's included in the .pt file:

  • --repr_layers (default: final only) selects which layers to include embeddings from.
  • --include specifies what embeddings to save. You can use the following:
    • per_tok includes the full sequence, with an embedding per amino acid (seq_len x hidden_dim).
    • mean includes the embeddings averaged over the full sequence, per layer.
    • bos includes the embeddings from the beginning-of-sequence token. (NOTE: Don't use with the pre-trained models - we trained without bos-token supervision)
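
For example, to read one of the saved files back and pull out the embeddings (a minimal sketch; the file name below is a placeholder for one of the generated .pt files, and the dictionary keys are those written by scripts/extract.py at the time of writing):

import torch

# Placeholder path: substitute one of the .pt files generated above.
entry = torch.load("examples/data/some_proteins_emb_esm1b/SEQUENCE_ID.pt")
print(entry["label"])                               # FASTA record id

mean_embedding = entry["mean_representations"][33]  # (1280,) tensor, from --include mean
per_residue = entry["representations"][33]          # (seq_len, 1280) tensor, from --include per_tok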

Zero-shot variant prediction

See "examples/variant-prediction/" for code and pre-trained weights for the ESM-1v models described in Language models enable zero-shot prediction of the effects of mutations on protein function. (Meier et al. 2021).

Inverse folding

See "examples/inverse_folding/" for detailed user guide. The ESM-IF1 model is described as GVPTransformer in Learning inverse folding from millions of predicted structures. (Hsu et al. 2022).

We also provide a colab notebook for the sequence design and sequence scoring functionalities.

The ESM-IF1 inverse folding model is built for predicting protein sequences from their backbone atom coordinates. We provide scripts here to 1) sample sequence designs for a given structure and 2) score sequences for a given structure.

Trained with 12M protein structures predicted by AlphaFold2, the ESM-IF1 model consists of invariant geometric input processing layers followed by a sequence-to-sequence transformer, and achieves 51% native sequence recovery on structurally held-out backbones with 72% recovery for buried residues. The model is also trained with span masking to tolerate missing backbone coordinates and therefore can predict sequences for partially masked structures.
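
The same functionality is also available from Python. Below is a minimal sketch using the utilities in examples/inverse_folding; treat the exact function names and signatures as assumptions and consult the user guide there. The PDB path and chain match the CLI examples that follow.

import torch
import esm
import esm.inverse_folding

model, alphabet = esm.pretrained.esm_if1_gvp4_t16_142M_UR50()
model = model.eval()

# Load backbone (N, CA, C) coordinates and the native sequence for one chain.
coords, native_seq = esm.inverse_folding.util.load_coords(
    "examples/inverse_folding/data/5YH2.pdb", "C"
)

with torch.no_grad():
    sampled_seq = model.sample(coords, temperature=1.0)   # one sequence design
    ll, _ = esm.inverse_folding.util.score_sequence(      # average log-likelihood of the native sequence
        model, alphabet, coords, native_seq
    )
print(sampled_seq)
print(f"native sequence average log-likelihood: {ll:.3f}")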

Sample sequence designs for a given structure

The environment setup is described in this subsection of examples/inverse_folding.

To sample sequences for a given structure in PDB or mmCIF format, use the sample_sequences.py script. The input file can have either .pdb or .cif as suffix.

For example, to sample 3 sequence designs for the Golgi casein kinase structure (PDB 5YH2; PDB Molecule of the Month from January 2022), we can run the following command from the esm root directory:

python examples/inverse_folding/sample_sequences.py examples/inverse_folding/data/5YH2.pdb \
    --chain C --temperature 1 --num-samples 3 \
    --outpath examples/inverse_folding/output/sampled_sequences.fasta

The sampled sequences will be saved in FASTA format to the specified output file.

The temperature parameter controls the sharpness of the probability distribution for sequence sampling. Higher sampling temperatures yield more diverse sequences, but likely with lower native sequence recovery. The default sampling temperature is 1. To optimize for native sequence recovery, we recommend sampling with a low temperature such as 1e-6.

Scoring sequences

To score the conditional log-likelihoods for sequences conditioned on a given structure, use the score_log_likelihoods.py script.

For example, to score the sequences in examples/inverse_folding/data/5YH2_mutated_seqs.fasta according to the structure in examples/inverse_folding/data/5YH2.pdb, we can run the following command from the esm root directory:

python examples/inverse_folding/score_log_likelihoods.py examples/inverse_folding/data/5YH2.pdb \
    examples/inverse_folding/data/5YH2_mutated_seqs.fasta --chain C \
    --outpath examples/inverse_folding/output/5YH2_mutated_seqs_scores.csv

The conditional log-likelihoods are saved in CSV format at the specified output path. The output values are the log-likelihoods averaged over all amino acids in a sequence.

For more information, see the detailed user guide in "./examples/inverse_folding/".

Notebooks

Inverse folding - predicting or scoring sequences based on backbone structures

The ESM-IF1 inverse folding model predicts protein sequences from their backbone atom coordinates, and was trained with 12M protein structures predicted by AlphaFold2. This notebook guides you through examples of sampling sequences, calculating conditional log-likelihoods, and extracting the encoder output as a structure representation.

Supervised variant prediction - training a classifier on the embeddings

To help you get started with using the embeddings, this jupyter notebook tutorial shows how to train a supervised variant predictor on ESM embeddings (an ESM-1v model in the command below). You can adopt a similar protocol to train a model for any downstream task, even with limited data. First you can obtain the embeddings for examples/data/P62593.fasta either by downloading the precomputed embeddings as instructed in the notebook or by running the following:

# Obtain the embeddings
$ python scripts/extract.py esm1v_t33_650M_UR90S_1 examples/data/P62593.fasta examples/data/P62593_emb_esm1v/ \
    --repr_layers 33 --include mean

Then, follow the remaining instructions in the tutorial. You can also run the tutorial in a colab notebook.
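
If you prefer a script to the notebook, the sketch below shows the general shape of the protocol: load the mean embeddings saved above and fit a simple regressor with scikit-learn. This is our illustration, not the notebook itself; the P62593_targets.csv file is hypothetical and stands in for the target values described in the tutorial.

import csv
from pathlib import Path

import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

emb_dir = Path("examples/data/P62593_emb_esm1v")

# Hypothetical CSV mapping each FASTA record label to its measured target value.
targets = {}
with open("P62593_targets.csv") as f:
    for label, value in csv.reader(f):
        targets[label] = float(value)

X, y = [], []
for pt_file in sorted(emb_dir.glob("*.pt")):
    entry = torch.load(pt_file)
    X.append(entry["mean_representations"][33].numpy())
    y.append(targets[entry["label"]])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
regressor = Ridge().fit(X_train, y_train)
print("held-out R^2:", regressor.score(X_test, y_test))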

Note: alternatively, you can use the newer zero-shot variant prediction instructions, which predict mutational effects without any supervised training.

Unsupervised contact prediction

This jupyter notebook tutorial demonstrates contact prediction with both the ESM-1b and MSA Transformer (ESM-MSA-1) models. Contact prediction is based on a logistic regression over the model's attention maps. This methodology is based on our ICLR 2021 paper, Transformer protein language models are unsupervised structure learners (Rao et al. 2020). The MSA Transformer (ESM-MSA-1) takes a multiple sequence alignment (MSA) as input, and uses the tied row self-attention maps in the same way. See MSA Transformer (Rao et al. 2021).

To get unsupervised attention-based contacts, call model.predict_contacts(tokens) or model(tokens, return_contacts=True).
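
For example, a minimal sketch of predict_contacts for a single sequence with ESM-1b (the sequence is the first one from the Quick Start; we expect the returned map to cover residue pairs only, with special tokens stripped):

import torch
import esm

model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
model.eval()
batch_converter = alphabet.get_batch_converter()

seq = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"
_, _, tokens = batch_converter([("protein1", seq)])

with torch.no_grad():
    contacts = model.predict_contacts(tokens)   # expected shape: (1, len(seq), len(seq))
print(contacts.shape)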

ESMStructuralSplitDataset and self-attention contact prediction

This jupyter notebook tutorial shows how to load and index the ESMStructuralSplitDataset, and how to compute unsupervised self-attention contact predictions with ESM-1b.

Available Models and Datasets

Pre-trained Models

Shorthand | esm.pretrained. | #layers | #params | Dataset | Embedding Dim | Model URL (automatically downloaded to ~/.cache/torch/hub/checkpoints)
ESM-IF1 | esm_if1_gvp4_t16_142M_UR50 | 20 | 124M | CATH 4.3 + predicted structures for UR50 | 512 | https://dl.fbaipublicfiles.com/fair-esm/models/esm_if1_gvp4_t16_142M_UR50.pt
ESM-1v | esm1v_t33_650M_UR90S_[1-5] | 33 | 650M | UR90/S 2020_03 | 1280 | https://dl.fbaipublicfiles.com/fair-esm/models/esm1v_t33_650M_UR90S_1.pt
ESM-MSA-1b | esm_msa1b_t12_100M_UR50S | 12 | 100M | UR50/S + MSA 2018_03 | 768 | https://dl.fbaipublicfiles.com/fair-esm/models/esm_msa1b_t12_100M_UR50S.pt
ESM-MSA-1 | esm_msa1_t12_100M_UR50S | 12 | 100M | UR50/S + MSA 2018_03 | 768 | https://dl.fbaipublicfiles.com/fair-esm/models/esm_msa1_t12_100M_UR50S.pt
ESM-1b | esm1b_t33_650M_UR50S | 33 | 650M | UR50/S 2018_03 | 1280 | https://dl.fbaipublicfiles.com/fair-esm/models/esm1b_t33_650M_UR50S.pt
ESM-1 | esm1_t34_670M_UR50S | 34 | 670M | UR50/S 2018_03 | 1280 | https://dl.fbaipublicfiles.com/fair-esm/models/esm1_t34_670M_UR50S.pt
 | esm1_t34_670M_UR50D | 34 | 670M | UR50/D 2018_03 | 1280 | https://dl.fbaipublicfiles.com/fair-esm/models/esm1_t34_670M_UR50D.pt
 | esm1_t34_670M_UR100 | 34 | 670M | UR100 2018_03 | 1280 | https://dl.fbaipublicfiles.com/fair-esm/models/esm1_t34_670M_UR100.pt
 | esm1_t12_85M_UR50S | 12 | 85M | UR50/S 2018_03 | 768 | https://dl.fbaipublicfiles.com/fair-esm/models/esm1_t12_85M_UR50S.pt
 | esm1_t6_43M_UR50S | 6 | 43M | UR50/S 2018_03 | 768 | https://dl.fbaipublicfiles.com/fair-esm/models/esm1_t6_43M_UR50S.pt

Here is a chronological list of the released models and the paper they were introduced in:

Shorthand | Release Notes
ESM-1 | Released with Rives et al. 2019 (Aug 2020 update).
ESM-1b | Released with Rives et al. 2019 (Dec 2020 update). See Appendix B.
ESM-MSA-1 | Released with Rao et al. 2021 (Preprint v1).
ESM-MSA-1b | Released with Rao et al. 2021 (ICML'21 version, June 2021).
ESM-1v | Released with Meier et al. 2021.
ESM-IF1 | Released with Hsu et al. 2022.

ESM Structural Split Dataset

This is a five-fold cross-validation dataset of protein domain structures that can be used to measure generalization of representations across different levels of structural dissimilarity. The dataset implements structural holdouts at the family, superfamily, and fold level. The SCOPe database is used to classify domains. Independently for each level of structural hold-out, the domains are split into 5 equal sets, i.e. five sets of folds, superfamilies, or families. This ensures that for each of the five partitions, structures having the same classification do not appear in both the train and test sets. For a given classification level, each structure appears in a test set once, so that in the cross-validation experiment each structure is evaluated exactly once.

The dataset provides 3d coordinates, distance maps, and secondary structure labels. For further details on the construction of the dataset see Rives et al. 2019 Appendix A.10.

This jupyter notebook tutorial shows how to load and index the ESMStructuralSplitDataset.

Upon initialization, ESMStructuralSplitDataset will download the splits and pkl files; we also provide MSAs for each of the domains. The data can also be downloaded directly via the links below, and a short loading example follows the table.

Name | Description | URL
splits | train/valid splits | https://dl.fbaipublicfiles.com/fair-esm/structural-data/splits.tar.gz
pkl | pkl objects containing sequence, SSP labels, distance map, and 3d coordinates | https://dl.fbaipublicfiles.com/fair-esm/structural-data/pkl.tar.gz
msas | a3m files containing MSA for each domain | https://dl.fbaipublicfiles.com/fair-esm/structural-data/msas.tar.gz
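
A minimal loading sketch, following the repository README (argument names as we recall them there; the data will be downloaded to root_path on first use):

import os
import esm

dataset = esm.data.ESMStructuralSplitDataset(
    split_level="superfamily",
    cv_partition="4",
    split="train",
    root_path=os.path.expanduser("~/.cache/torch/data/esm"),
    download=True,
)

# Each element is a dict with the domain's sequence, secondary structure
# labels, distance map, and 3d coordinates.
sample = dataset[0]
print(sample.keys())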

Pre-training Dataset Split

The split files establishing which UniRef50 clusters were used as the held-out evaluation set for pre-training in Rives et al. 2019 and Rao et al. 2021 can be found here:

These files contain only the UniRef50 IDs and UniRef100 IDs corresponding to the UniRef database, 2018-03 release, which is released by the UniProt Consortium under a Creative Commons Attribution (CC BY 4.0) License.

Citations

If you find the models useful in your research, we ask that you cite the relevant paper:

@article{rives2019biological,
  author={Rives, Alexander and Meier, Joshua and Sercu, Tom and Goyal, Siddharth and Lin, Zeming and Liu, Jason and Guo, Demi and Ott, Myle and Zitnick, C. Lawrence and Ma, Jerry and Fergus, Rob},
  title={Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences},
  year={2019},
  doi={10.1101/622803},
  url={https://www.biorxiv.org/content/10.1101/622803v4},
  journal={bioRxiv}
}

For the self-attention contact prediction:

@article{rao2020transformer,
  author = {Rao, Roshan M and Meier, Joshua and Sercu, Tom and Ovchinnikov, Sergey and Rives, Alexander},
  title={Transformer protein language models are unsupervised structure learners},
  year={2020},
  doi={10.1101/2020.12.15.422761},
  url={https://www.biorxiv.org/content/10.1101/2020.12.15.422761v1},
  journal={bioRxiv}
}

For the MSA Transformer:

@article{rao2021msa,
  author = {Rao, Roshan and Liu, Jason and Verkuil, Robert and Meier, Joshua and Canny, John F. and Abbeel, Pieter and Sercu, Tom and Rives, Alexander},
  title={MSA Transformer},
  year={2021},
  doi={10.1101/2021.02.12.430858},
  url={https://www.biorxiv.org/content/10.1101/2021.02.12.430858v1},
  journal={bioRxiv}
}

For variant prediction using ESM-1v:

@article{meier2021language,
  author = {Meier, Joshua and Rao, Roshan and Verkuil, Robert and Liu, Jason and Sercu, Tom and Rives, Alexander},
  title = {Language models enable zero-shot prediction of the effects of mutations on protein function},
  year={2021},
  doi={10.1101/2021.07.09.450648},
  url={https://www.biorxiv.org/content/10.1101/2021.07.09.450648v1},
  journal={bioRxiv}
}

For inverse folding using ESM-IF1:

@article{hsu2022learning,
	author = {Hsu, Chloe and Verkuil, Robert and Liu, Jason and Lin, Zeming and Hie, Brian and Sercu, Tom and Lerer, Adam and Rives, Alexander},
	title = {Learning inverse folding from millions of predicted structures},
	year = {2022},
	doi = {10.1101/2022.04.10.487779},
	url = {https://www.biorxiv.org/content/early/2022/04/10/2022.04.10.487779},
	journal = {bioRxiv}
}

Much of this code builds on the fairseq sequence modeling framework. We use fairseq internally for our protein language modeling research. We highly recommend trying it out if you'd like to pre-train protein language models from scratch.

Additionally, if you would like to use the variant prediction benchmark from Meier et al. (2021), we provide a bibtex file with citations for all data in ./examples/variant-prediction/mutation_data.bib. You can cite each paper individually, or add all citations in bulk using the LaTeX command:

\nocite{wrenbeck2017deep,klesmith2015comprehensive,haddox2018mapping,romero2015dissecting,firnberg2014comprehensive,deng2012deep,stiffler2015evolvability,jacquier2013capturing,findlay2018comprehensive,mclaughlin2012spatial,kitzman2015massively,doud2016accurate,pokusaeva2019experimental,mishra2016systematic,kelsic2016rna,melnikov2014comprehensive,brenan2016phenotypic,rockah2015systematic,wu2015functional,aakre2015evolving,qi2014quantitative,matreyek2018multiplex,bandaru2017deconstruction,roscoe2013analyses,roscoe2014systematic,mavor2016determination,chan2017correlation,melamed2013deep,starita2013activity,araya2012fundamental}

License

This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fair-esm-0.5.0.tar.gz (60.6 kB)

Uploaded Source

Built Distribution

fair_esm-0.5.0-py3-none-any.whl (69.6 kB)

Uploaded Python 3

File details

Details for the file fair-esm-0.5.0.tar.gz.

File metadata

  • Download URL: fair-esm-0.5.0.tar.gz
  • Upload date:
  • Size: 60.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/51.1.1 requests-toolbelt/0.9.1 tqdm/4.55.0 CPython/3.8.5

File hashes

Hashes for fair-esm-0.5.0.tar.gz
Algorithm | Hash digest
SHA256 | 74a287303f8f04ea89e5dcb2a4be1f1a3d85a183a6bb7f60a3225bee381102c1
MD5 | aee4e966dc61dd28c4a20eaf0ef9d829
BLAKE2b-256 | 38951ace062224aa8a28b5bd315fb22ad5cb32fbff42439e897d2811d81a3d6d

See more details on using hashes here.

File details

Details for the file fair_esm-0.5.0-py3-none-any.whl.

File metadata

  • Download URL: fair_esm-0.5.0-py3-none-any.whl
  • Upload date:
  • Size: 69.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/51.1.1 requests-toolbelt/0.9.1 tqdm/4.55.0 CPython/3.8.5

File hashes

Hashes for fair_esm-0.5.0-py3-none-any.whl
Algorithm | Hash digest
SHA256 | b016548e1cedbc57494a0db7d91ac2bb568058e0528adebe9e6587697424967a
MD5 | 87ab8d2f488c6f5c771ba22498f77a65
BLAKE2b-256 | b21b2fee28e7d550779bb3bb53b2ebf8ee3dbe39a42ae8d05511f834b24ddd5a

See more details on using hashes here.
