
A pipeline for protein embedding generation and visualization


Bio Embeddings


Project aims:

  • Facilitate the use of language-model-based biological sequence representations for transfer learning by providing a single, consistent, close-to-zero-friction interface
  • Reproducible workflows
  • Depth of representation (different models from different labs, trained on different datasets for different purposes)
  • Extensive examples, complexity handled for users (e.g. CUDA out-of-memory errors are abstracted away), and well-documented warnings and error messages

The project includes:

  • General purpose python embedders based on open models trained on biological sequences (SeqVec, ProtTrans, UniRep, ...)
  • A pipeline (sketched in the configuration example after this list) which:
    • embeds sequences into matrix representations (per amino acid) or vector representations (per sequence) that can be used to train learning models or for analytical purposes
    • projects per-sequence embeddings into lower-dimensional representations using UMAP or t-SNE (for lightweight data handling and visualization)
    • visualizes low-dimensional sets of per-sequence embeddings in 2D and 3D interactive plots (with and without annotations)
    • extracts annotations from per-sequence and per-amino-acid embeddings using supervised (when available) and unsupervised approaches (e.g. by network analysis)
  • A webserver that wraps the pipeline into a distributed API for scalable and consistent workflows
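
As a sketch of how these stages chain together, the following configuration runs an embedding stage, projects the per-sequence embeddings with UMAP, and renders an interactive plot. The stage labels and option names are modeled on the blueprint in the examples directory of this repository; treat the exact keys as assumptions and consult that blueprint for the authoritative options.

# config.yml sketch: embed -> project -> visualize
# Stage names like "seqvec_embeddings" are free-form labels; the keys
# follow the examples blueprint (verify against your installed version)
global:
  sequences_file: sequences.fasta   # input FASTA
  prefix: my_run                    # directory where all outputs land
seqvec_embeddings:
  type: embed
  protocol: seqvec
  reduce: True                      # also produce per-sequence vectors
seqvec_projections:
  type: project
  protocol: umap
  depends_on: seqvec_embeddings
seqvec_visualizations:
  type: visualize
  protocol: plotly
  depends_on: seqvec_projections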

Installation

You can install bio_embeddings via pip or use it via docker. Mind the additional dependencies for align.

Pip

Install the pipeline and all extras like so:

pip install bio-embeddings[all]

To install the unstable version, please install the pipeline like so:

pip install -U "bio-embeddings[all] @ git+https://github.com/sacdallago/bio_embeddings.git"

If you only need to run a specific model (e.g. an ESM or ProtTrans model), you can install the base package and then add the extra for that model, e.g.:

pip install bio-embeddings
pip install bio-embeddings[prottrans]

The extras are:

  • seqvec
  • prottrans
    • prottrans_albert_bfd
    • prottrans_bert_bfd
    • prottrans_t5_bfd
    • prottrans_t5_uniref50
    • prottrans_t5_xl_u50
    • prottrans_xlnet_uniref100
  • esm
    • esm
    • esm1b
    • esm1v
  • unirep
  • cpcprot
  • plus
  • bepler
  • deepblast
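
For example, to install the base package together with only the ESM embedders (depending on your shell, the extra may need quoting):

pip install "bio-embeddings[esm]"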

Docker

We provide a docker image at ghcr.io/bioembeddings/bio_embeddings. Simple usage example:

docker run --rm --gpus all \
    -v "$(pwd)/examples/docker":/mnt \
    -v bio_embeddings_weights_cache:/root/.cache/bio_embeddings \
    -u $(id -u ${USER}):$(id -g ${USER}) \
    ghcr.io/bioembeddings/bio_embeddings:v0.1.6 /mnt/config.yml

See the docker example in the examples folder for instructions. You can also use ghcr.io/bioembeddings/bio_embeddings:latest which is built from the latest commit.

Dependencies

To use the mmseqs_search protocol, or the mmseqs2 functions in align, you additionally need to have mmseqs2 in your path.

Installation notes

bio_embeddings was developed for unix machines with GPU capabilities and CUDA installed. If your setup diverges from this, you may encounter inconsistencies (e.g. speed is significantly reduced in the absence of a GPU and CUDA). For Windows users, we strongly recommend the use of Windows Subsystem for Linux.

What model is right for you?

Each model has its strengths and weaknesses (speed, specificity, memory footprint, ...). There is no one-size-fits-all model, and we encourage you to try at least two different models when starting a new exploratory project.

The models prottrans_t5_xl_u50, esm1b, esm, prottrans_bert_bfd, prottrans_albert_bfd, seqvec and prottrans_xlnet_uniref100 were all trained with the goal of systematic predictions. From this pool, we believe the optimal model to be prottrans_t5_xl_u50, followed by esm1b.
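
To start from the recommended model in python, a minimal sketch (ProtTransT5XLU50Embedder lives in bio_embeddings.embed; the half_precision_model flag follows the project's notebooks, so verify it against your installed version):

from bio_embeddings.embed import ProtTransT5XLU50Embedder

# Load the recommended ProtTrans T5 model; half precision roughly halves
# the GPU memory footprint (flag name taken from the project's notebooks)
embedder = ProtTransT5XLU50Embedder(half_precision_model=True)

embedding = embedder.embed("SEQVENCE")  # per-residue embedding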

Usage and examples

We highly recommend checking out the examples folder for pipeline examples, and the notebooks folder for post-processing pipeline runs and general purpose use of the embedders.

After having installed the package, you can:

  1. Use the pipeline like:

    bio_embeddings config.yml
    

    A blueprint of the configuration file and an example setup can be found in the examples directory of this repository.

  2. Use the general purpose embedder objects via python, e.g.:

    from bio_embeddings.embed import SeqVecEmbedder
    
    embedder = SeqVecEmbedder()
    
    embedding = embedder.embed("SEQVENCE")
    

    More examples can be found in the notebooks folder of this repository.
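
Embedders also expose a helper to collapse a per-residue embedding into a single per-protein vector. A minimal sketch, assuming the reduce_per_protein method as used in the project's notebooks:

from bio_embeddings.embed import SeqVecEmbedder

embedder = SeqVecEmbedder()

# Per-residue embedding (for SeqVec, the raw array also carries the
# three language-model layers as a leading dimension)
per_residue = embedder.embed("SEQVENCE")

# Collapse to a single fixed-length vector for the whole sequence
# (1024-dimensional for SeqVec), e.g. as input to downstream classifiers
per_protein = embedder.reduce_per_protein(per_residue)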

Cite

If you use bio_embeddings for your research, we would appreciate it if you could cite the following paper:

Dallago, C., Schütze, K., Heinzinger, M., Olenyi, T., Littmann, M., Lu, A. X., Yang, K. K., Min, S., Yoon, S., Morton, J. T., & Rost, B. (2021). Learned embeddings from deep learning to visualize and predict protein sets. Current Protocols, 1, e113. doi: 10.1002/cpz1.113

The corresponding bibtex:

@article{https://doi.org/10.1002/cpz1.113,
author = {Dallago, Christian and Schütze, Konstantin and Heinzinger, Michael and Olenyi, Tobias and Littmann, Maria and Lu, Amy X. and Yang, Kevin K. and Min, Seonwoo and Yoon, Sungroh and Morton, James T. and Rost, Burkhard},
title = {Learned Embeddings from Deep Learning to Visualize and Predict Protein Sets},
journal = {Current Protocols},
volume = {1},
number = {5},
pages = {e113},
keywords = {deep learning embeddings, machine learning, protein annotation pipeline, protein representations, protein visualization},
doi = {https://doi.org/10.1002/cpz1.113},
url = {https://currentprotocols.onlinelibrary.wiley.com/doi/abs/10.1002/cpz1.113},
eprint = {https://currentprotocols.onlinelibrary.wiley.com/doi/pdf/10.1002/cpz1.113},
year = {2021}
}

Additionally, we invite you to cite the work from others that was collected in bio_embeddings (see the "Tools by category" section below). We are working on an enhanced user guide which will include proper references to all citable work collected in bio_embeddings.

Contributors

  • Christian Dallago (lead)
  • Konstantin Schütze
  • Tobias Olenyi
  • Michael Heinzinger

Want to add your own model? See contributing for instructions.

A non-exhaustive list of tools available (see the following section for more details):

Datasets

  • prottrans_t5_xl_u50 residue and sequence embeddings of the Human proteome at full precision + secondary structure predictions + sub-cellular localisation predictions: DOI
  • prottrans_t5_xl_u50 residue and sequence embeddings of the Fly proteome at full precision + secondary structure predictions + sub-cellular localisation predictions + conservation prediction + variation prediction: DOI

Tools by category

Pipeline
General purpose embedders



Download files

Download the file for your platform.

Source Distribution

bio_embeddings_duongttr-0.2.3.tar.gz (83.2 kB)

Uploaded Source

Built Distribution


bio_embeddings_duongttr-0.2.3-py3-none-any.whl (120.3 kB)

Uploaded Python 3

File details

Details for the file bio_embeddings_duongttr-0.2.3.tar.gz.

File metadata

  • Download URL: bio_embeddings_duongttr-0.2.3.tar.gz
  • Upload date:
  • Size: 83.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.8.20

File hashes

Hashes for bio_embeddings_duongttr-0.2.3.tar.gz:

  • SHA256: 61c92a6c88580d4b4a615d7fb7681d466790e8c4eb122652335f61707fd31726
  • MD5: 496fe763b69459587c9c79e5a5ed576b
  • BLAKE2b-256: 713e8e3482ea8afce1c565f5a7a87cbc8eba5ae2291bb1264d7cfb35b6a6a2df


File details

Details for the file bio_embeddings_duongttr-0.2.3-py3-none-any.whl.

File hashes

Hashes for bio_embeddings_duongttr-0.2.3-py3-none-any.whl:

  • SHA256: a5fa3e3c84f9fb1f039e2daf4cc1422a268c74c69c43a682bca64415c2c00d3b
  • MD5: e72970ffd46d940e0f456aa3dac9a586
  • BLAKE2b-256: cd4e25bec289eaf15f668a81cef2ac6d48c2ff3646d0c2bfbebc22e46f03a60d

