A pipeline for protein embedding generation and visualization

Project description

Bio Embeddings

Project aims:

  • Facilitate the use of deep-learning-based biological sequence representations for transfer learning by providing a single, consistent, close-to-zero-friction interface
  • Reproducible workflows
  • Depth of representation (different models from different labs, trained on different datasets for different purposes)
  • Extensive examples, complexity handled for the user (e.g. CUDA OOM abstraction), and well-documented warnings and error messages

The project includes:

  • General purpose python embedders based on open models trained on biological sequences (SeqVec, ProtTrans, UniRep, ...)
  • A pipeline which:
    • embeds sequences into matrix-representations (per-amino-acid) or vector-representations (per-sequence) that can be used to train learning models or for analytical purposes
    • projects per-sequence embeddings into lower dimensional representations using UMAP or t-SNE (for lightweight data handling and visualizations)
    • visualizes low dimensional sets of per-sequence embeddings onto 2D and 3D interactive plots (with and without annotations)
    • extracts annotations from per-sequence and per-amino-acid embeddings using supervised (when available) and unsupervised approaches (e.g. by network analysis)
  • A webserver that wraps the pipeline into a distributed API for scalable and consistent workflows
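For illustration, the matrix-to-vector reduction mentioned above can be sketched as simple mean pooling over residue positions. This is a toy example in plain Python, not the package's internal code; real embedders produce much larger matrices (e.g. 1024 dimensions per residue), and the pipeline handles the reduction for you.

```python
# Sketch: collapsing a per-amino-acid embedding matrix (one vector per
# residue) into a single per-sequence vector by mean pooling.

def mean_pool(per_residue):
    """per_residue: list of L per-residue vectors, each of dimension d."""
    length = len(per_residue)
    dim = len(per_residue[0])
    return [sum(vec[i] for vec in per_residue) / length for i in range(dim)]

# A toy "protein" of 3 residues embedded in 4 dimensions:
matrix = [
    [1.0, 2.0, 3.0, 4.0],
    [3.0, 2.0, 1.0, 0.0],
    [2.0, 2.0, 2.0, 2.0],
]
vector = mean_pool(matrix)  # per-sequence representation of dimension 4
print(vector)  # [2.0, 2.0, 2.0, 2.0]
```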

We presented the bio_embeddings pipeline as a talk at ISMB 2020. You can find the talk on YouTube, and the poster on F1000.

Installation

You can install bio_embeddings via pip or use it via docker.

Pip

Install the pipeline like so:

pip install bio-embeddings[all]

To get the latest features, please install the pipeline like so:

pip install -U "bio-embeddings[all] @ git+https://github.com/sacdallago/bio_embeddings.git"

Docker

We provide a docker image at rostlab/bio_embeddings. Simple usage example:

docker run --rm --gpus all \
    -v "$(pwd)/examples/docker":/mnt \
    -u $(id -u ${USER}):$(id -g ${USER}) \
    rostlab/bio_embeddings /mnt/config.yml

See the docker example in the examples folder for instructions. We currently have published rostlab/bio_embeddings:develop. For our next stable release, we will publish tags for all releases and a latest tag pointing to the latest release.

Installation notes:

bio_embeddings was developed for unix machines with GPU capabilities and CUDA installed. If your setup diverges from this, you may encounter some issues (e.g. speed is significantly affected by the absence of a GPU and CUDA). For Windows users, we strongly recommend the use of Windows Subsystem for Linux.

What model is right for you?

Each model has its strengths and weaknesses (speed, specificity, memory footprint, ...). There is no one-size-fits-all model, and we encourage you to try at least two different models when attempting a new exploratory project.

The models prottrans_bert_bfd, prottrans_albert_bfd, seqvec and prottrans_xlnet_uniref100 were all trained with the goal of systematic predictions. From this pool, we believe the optimal model to be prottrans_bert_bfd, followed by seqvec, which has been established for longer and uses a different principle (LSTM vs Transformer).
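When trying two models side by side, a quick sanity check is whether they agree on which sequences are similar. Since per-sequence embeddings are plain vectors, cosine similarity is a common choice for such comparisons. A minimal sketch with toy vectors standing in for actual model outputs:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two per-sequence embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for per-sequence embeddings:
emb_a = [1.0, 0.0, 1.0]
emb_b = [1.0, 0.0, 1.0]
emb_c = [0.0, 1.0, 0.0]

print(cosine_similarity(emb_a, emb_b))  # 1.0 (identical direction)
print(cosine_similarity(emb_a, emb_c))  # 0.0 (orthogonal)
```

Note that absolute similarity values are not comparable across models (each defines its own embedding space); only the relative ranking of sequence pairs within one model is meaningful.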

Usage and examples

We highly recommend checking out the examples folder for pipeline examples, and the notebooks folder for post-processing pipeline runs and general-purpose use of the embedders.

After having installed the package, you can:

  1. Use the pipeline like:

    bio_embeddings config.yml
    

    A blueprint of the configuration file, and an example setup can be found in the examples directory of this repository.

  2. Use the general purpose embedder objects via python, e.g.:

    from bio_embeddings.embed import SeqVecEmbedder
    
    embedder = SeqVecEmbedder()
    
    embedding = embedder.embed("SEQVENCE")
    

    More examples can be found in the notebooks folder of this repository.
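For orientation, a minimal pipeline configuration (the config.yml passed to the pipeline above) might look roughly like the following. The key names used here (sequences_file, prefix, protocol, depends_on) are assumptions based on typical pipeline setups; rely on the blueprint in the examples directory for the authoritative format.

```yaml
global:
  sequences_file: sequences.fasta  # input FASTA file
  prefix: my_run                   # prefix for the output directory
seqvec_embeddings:
  type: embed
  protocol: seqvec
projection:
  type: project
  protocol: umap
  depends_on: seqvec_embeddings
```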

Cite

While we are working on a proper publication, if you are already using this tool, we would appreciate it if you could cite the following poster:

Dallago C, Schütze K, Heinzinger M et al. bio_embeddings: python pipeline for fast visualization of protein features extracted by language models [version 1; not peer reviewed]. F1000Research 2020, 9(ISCB Comm J):876 (poster) (doi: 10.7490/f1000research.1118163.1)

Contributors

  • Christian Dallago (lead)
  • Konstantin Schütze
  • Tobias Olenyi
  • Michael Heinzinger

Development status

Pipeline stages

Web server (unpublished)

  • SeqVec supervised predictions
  • Bert supervised predictions
  • SeqVec unsupervised predictions for GO: CC, BP, ...
  • Bert unsupervised predictions for GO: CC, BP, ...
  • SeqVec unsupervised predictions for SwissProt (just a link to the 1st k-NN)
  • Bert unsupervised predictions for SwissProt (just a link to the 1st k-NN)

General purpose embedders

Building a Distribution

Building the packages is best done using invoke. If you manage your dependencies with poetry, it should already be installed. Simply run poetry run invoke clean build to update your requirements according to your current status and to generate the dist files.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

bio_embeddings-0.1.4.tar.gz (61.8 kB)

Uploaded Source

Built Distribution

bio_embeddings-0.1.4-py3-none-any.whl (84.3 kB)

Uploaded Python 3

File details

Details for the file bio_embeddings-0.1.4.tar.gz.

File metadata

  • Download URL: bio_embeddings-0.1.4.tar.gz
  • Upload date:
  • Size: 61.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.0.10 CPython/3.8.5 Darwin/19.6.0

File hashes

Hashes for bio_embeddings-0.1.4.tar.gz
  • SHA256: 9b435d15208d8bf9d35589b2e6cddf39045bb98842dd9660a3081603e25e57b3
  • MD5: c2e1fadb313f575088523322bfb5fa65
  • BLAKE2b-256: 16e208a1b9e0764affebc1e4479f019d8f89d3568295f38e7e7e3c0b0ddcf5d8

See more details on using hashes here.
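The listed digests can be reproduced locally with Python's standard hashlib module. A sketch (the "BLAKE2b-256" label corresponds to BLAKE2b with a 32-byte digest):

```python
import hashlib

def file_hashes(data: bytes):
    """Compute the digests PyPI lists (SHA256, MD5, BLAKE2b-256) for raw bytes."""
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "md5": hashlib.md5(data).hexdigest(),
        "blake2b_256": hashlib.blake2b(data, digest_size=32).hexdigest(),
    }

# In practice you would read the downloaded archive, e.g.:
# with open("bio_embeddings-0.1.4.tar.gz", "rb") as fh:
#     digests = file_hashes(fh.read())
digests = file_hashes(b"example payload")
```

Compare the resulting hex digests against the values shown above; any mismatch means the download is corrupt or has been tampered with.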

File details

Details for the file bio_embeddings-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: bio_embeddings-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 84.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.0.10 CPython/3.8.5 Darwin/19.6.0

File hashes

Hashes for bio_embeddings-0.1.4-py3-none-any.whl
  • SHA256: 64fb4657678de292bc973833080c2d84c61a223f875bf90ff2c2707763adf576
  • MD5: caa6882f66cff1705ba097e7c8570c67
  • BLAKE2b-256: b709f27d4d89e483f35c43f7b67569e11481096188a2518299a1e4737c0fe9ba

See more details on using hashes here.
