DeepRank2
Overview
DeepRank2 is an open-source deep learning (DL) framework for data mining of protein-protein interfaces (PPIs) or single-residue variants (SRVs). This package is an improved and unified version of three previously developed packages: DeepRank, DeepRank-GNN, and DeepRank-Mut.
DeepRank2 allows for transformation of (pdb formatted) molecular data into 3D representations (either grids or graphs) containing structural and physico-chemical information, which can be used for training neural networks. DeepRank2 also offers a pre-implemented training pipeline, using either CNNs (for grids) or GNNs (for graphs), as well as output exporters for evaluating performances.
Main features:
- Predefined atom-level and residue-level feature types
- e.g. atom/residue type, charge, size, potential energy
- All features' documentation is available here
- Predefined target types
- binary class, CAPRI categories, DockQ, RMSD, and FNAT
- Detailed docking scores documentation is available here
- Flexible definition of both new features and targets
- Features generation for both graphs and grids
- Efficient data storage in HDF5 format
- Support for both classification and regression (based on PyTorch and PyTorch Geometric)
Installation
There are two ways to install DeepRank2:
- In a dockerized container. This allows you to use DeepRank2, including all the notebooks within the container (a protected virtual space), without worrying about your operating system or installation of dependencies.
- We recommend this installation for inexperienced users and to learn to use or test our software, e.g. using the provided tutorials. However, resources might be limited in this installation and we would not recommend using it for large datasets or on high-performance computing facilities.
- Local installation on your system. This allows you to use the full potential of DeepRank2, but requires a few additional steps during installation.
- We recommend this installation for more experienced users, for larger projects, and for (potential) contributors to the codebase.
Containerized Installation
In order to try out the package without worrying about your OS and without installing all the required dependencies, we provide a Dockerfile that takes care of everything in a suitable container.
For this, you first need to install Docker on your system, then run the following commands. You may need sudo permission for some steps, in which case the commands below can be preceded by sudo:
# Clone the DeepRank2 repository and enter its root directory
git clone https://github.com/DeepRank/deeprank2
cd deeprank2
# Build and run the Docker image
docker build -t deeprank2 .
docker run -p 8888:8888 deeprank2
Next, open a browser and go to http://localhost:8888 to access the application running inside the Docker container. From there you can use DeepRank2, e.g. to run the tutorial notebooks.
More details about the tutorials' contents can be found here. Note that in the Docker container only the raw PDB files are downloaded, which are needed as a starting point for the tutorials. You can obtain the processed HDF5 files by running the data_generation_xxx.ipynb notebooks. Because Docker containers are limited in memory resources, we limit the number of data points processed in the tutorials. Please install the package locally to fully leverage its capabilities.
If after running the tutorials you want to remove the (quite large) Docker image from your machine, you must first stop the container and can then remove the image. More general information about Docker can be found in the official docs.
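As a minimal sketch of that cleanup, assuming the image was tagged deeprank2 as in the build step above (the container id will differ on your machine):

```shell
# List running containers to find the one based on the deeprank2 image
docker ps
# Stop it, replacing <container_id> with the id shown by docker ps
docker stop <container_id>
# Remove the image
docker rmi deeprank2
```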
Local/remote installation
Local installation is formally only supported on the latest stable release of Ubuntu, for which widespread automated testing through continuous integration workflows has been set up. However, it is likely that the package runs smoothly on other operating systems as well.
Before installing DeepRank2, please ensure you have GCC installed: if running gcc --version gives an error, run sudo apt-get install gcc.
YML file installation
You can use the provided YML file for creating a conda environment containing the latest stable release of DeepRank2 and all its dependencies. This will install the CPU-only version of DeepRank2 on Python 3.10. Note that this will not work for MacOS. Do the Manual Installation instead.
# Clone the DeepRank2 repository and enter its root directory
git clone https://github.com/DeepRank/deeprank2
cd deeprank2
# Ensure you are in your base environment
conda activate
# Create the environment
conda env create -f env/environment.yml
# Activate the environment
conda activate deeprank2
See instructions below to test that the installation was successful.
Manual installation
If you want to use GPUs, need a specific Python version, are a MacOS user, or if the YML installation was not successful, you can install the package manually. We advise doing this inside a conda virtual environment. If you have any issues during installation of dependencies, please refer to the official documentation for each package (linked below), as our instructions may be out of date (last tested on 19 Jan 2024):
- DSSP 4:
conda install -c sbl dssp
- MSMS:
conda install -c bioconda msms
- Here for MacOS with M1 chip users.
- PyTorch:
conda install pytorch torchvision torchaudio cpuonly -c pytorch
- PyTorch regularly publishes updates, and not all of the newest versions will work stably with DeepRank2. Currently, the package is tested using PyTorch 2.1.1.
- We support torch's CPU library as well as CUDA.
- PyG and its optional dependencies: torch_scatter, torch_sparse, torch_cluster, torch_spline_conv.
 - The exact command to install PyG will depend on the version of PyTorch you are using. Please refer to the source's installation instructions (we recommend the pip installation, as it also shows the command for the dependencies).
- For MacOS with M1 chip users: install the conda version of PyTables.
Finally, install deeprank2 itself: pip install deeprank2.
Alternatively, get the latest updates by cloning the repo and installing the editable version of the package with:
git clone https://github.com/DeepRank/deeprank2
cd deeprank2
pip install -e .'[test]'
The test extra is optional; it installs test-related dependencies that are useful during development.
Testing DeepRank2 installation
You can check that all components were installed correctly, using pytest. We especially recommend doing this in case you installed DeepRank2 and its dependencies manually (the latter option above).
The quick test should be sufficient to ensure that the software works, while the full test (a few minutes) will cover a much broader range of settings to ensure everything is correct.
First run pip install pytest, if you did not install it above. Then run pytest tests/test_integration.py for the quick test, or just pytest for the full test (expect a few minutes to run).
Contributing
If you would like to contribute to the package in any way, please see our guidelines.
Using DeepRank2
The following section serves as a first guide to start using the package, using protein-protein interface (PPI) queries as an example. For an enhanced learning experience, we provide in-depth tutorial notebooks for generating PPI data, generating SRV data, and for the training pipeline. For more details, see the extended documentation.
Data generation
For each protein-protein complex (or protein structure containing a missense variant), a Query can be created and added to the QueryCollection object, to be processed later on. Two subtypes of Query exist: ProteinProteinInterfaceQuery and SingleResidueVariantQuery.
A Query takes as inputs:
- a .pdb file, representing the protein-protein structure,
- the resolution ("residue" or "atom"), i.e. whether each node should represent an amino acid residue or an atom,
- the ids of the chains composing the structure, and
- optionally, the corresponding position-specific scoring matrices (PSSMs), in the form of .pssm files.
from deeprank2.query import QueryCollection, ProteinProteinInterfaceQuery
queries = QueryCollection()
# Append data points
queries.add(ProteinProteinInterfaceQuery(
pdb_path = "tests/data/pdb/1ATN/1ATN_1w.pdb",
resolution = "residue",
chain_ids = ["A", "B"],
targets = {
"binary": 0
},
pssm_paths = {
"A": "tests/data/pssm/1ATN/1ATN.A.pdb.pssm",
"B": "tests/data/pssm/1ATN/1ATN.B.pdb.pssm"
}
))
queries.add(ProteinProteinInterfaceQuery(
pdb_path = "tests/data/pdb/1ATN/1ATN_2w.pdb",
resolution = "residue",
chain_ids = ["A", "B"],
targets = {
"binary": 1
},
pssm_paths = {
"A": "tests/data/pssm/1ATN/1ATN.A.pdb.pssm",
"B": "tests/data/pssm/1ATN/1ATN.B.pdb.pssm"
}
))
queries.add(ProteinProteinInterfaceQuery(
pdb_path = "tests/data/pdb/1ATN/1ATN_3w.pdb",
resolution = "residue",
chain_ids = ["A", "B"],
targets = {
"binary": 0
},
pssm_paths = {
"A": "tests/data/pssm/1ATN/1ATN.A.pdb.pssm",
"B": "tests/data/pssm/1ATN/1ATN.B.pdb.pssm"
}
))
The user is free to implement a custom query class. Each implementation requires the build method to be present.
The queries can then be processed into graphs only or both graphs and 3D grids, depending on which kind of network will be used later for training.
from deeprank2.features import components, conservation, contact, exposure, irc, surfacearea
from deeprank2.utils.grid import GridSettings, MapMethod
feature_modules = [components, conservation, contact, exposure, irc, surfacearea]
# Save data into 3D-graphs only
hdf5_paths = queries.process(
"<output_folder>/<prefix_for_outputs>",
feature_modules = feature_modules)
# Save data into 3D-graphs and 3D-grids
hdf5_paths = queries.process(
"<output_folder>/<prefix_for_outputs>",
feature_modules = feature_modules,
grid_settings = GridSettings(
# the number of points on the x, y, z edges of the cube
points_counts = [20, 20, 20],
# x, y, z sizes of the box in Å
sizes = [1.0, 1.0, 1.0]),
grid_map_method = MapMethod.GAUSSIAN)
Datasets
Data can be split into sets implementing custom splits according to the specific application. Assuming that the training, validation and testing ids have been chosen (keys of the HDF5 file/s), then the DeeprankDataset objects can be defined.
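How the ids are split is up to the user. As a minimal sketch (assuming the entry ids have already been read from the HDF5 files; the helper below is hypothetical, not part of the DeepRank2 API), a reproducible random split could look like:

```python
import random

def split_ids(ids, train_frac=0.7, valid_frac=0.15, seed=42):
    """Randomly split HDF5 entry ids into train/validation/test lists."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)  # seeded for reproducibility
    n_train = int(len(ids) * train_frac)
    n_valid = int(len(ids) * valid_frac)
    return (
        ids[:n_train],
        ids[n_train : n_train + n_valid],
        ids[n_train + n_valid :],
    )

# Hypothetical entry ids; in practice these are the keys of your HDF5 file(s)
all_ids = [f"entry_{i}" for i in range(100)]
train_ids, valid_ids, test_ids = split_ids(all_ids)
```

The resulting id lists can then be passed to the subset parameter of the dataset classes below.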
GraphDataset
For training GNNs, the user can create a GraphDataset instance:
from deeprank2.dataset import GraphDataset
node_features = ["bsa", "res_depth", "hse", "info_content", "pssm"]
edge_features = ["distance"]
target = "binary"
train_ids = [<ids>]
valid_ids = [<ids>]
test_ids = [<ids>]
# Creating GraphDataset objects
dataset_train = GraphDataset(
hdf5_path = hdf5_paths,
subset = train_ids,
node_features = node_features,
edge_features = edge_features,
target = target
)
dataset_val = GraphDataset(
hdf5_path = hdf5_paths,
subset = valid_ids,
train_source = dataset_train
)
dataset_test = GraphDataset(
hdf5_path = hdf5_paths,
subset = test_ids,
train_source = dataset_train
)
GridDataset
For training CNNs, the user can create a GridDataset instance:
from deeprank2.dataset import GridDataset
features = ["bsa", "res_depth", "hse", "info_content", "pssm", "distance"]
target = "binary"
train_ids = [<ids>]
valid_ids = [<ids>]
test_ids = [<ids>]
# Creating GridDataset objects
dataset_train = GridDataset(
hdf5_path = hdf5_paths,
subset = train_ids,
features = features,
target = target
)
dataset_val = GridDataset(
hdf5_path = hdf5_paths,
subset = valid_ids,
train_source = dataset_train,
)
dataset_test = GridDataset(
hdf5_path = hdf5_paths,
subset = test_ids,
train_source = dataset_train,
)
Training
Let's define a Trainer instance, using for example the already existing NaiveNetwork. Because NaiveNetwork is a GNN, it requires a dataset instance of type GraphDataset.
from deeprank2.trainer import Trainer
from deeprank2.neuralnets.gnn.naive_gnn import NaiveNetwork
trainer = Trainer(
NaiveNetwork,
dataset_train,
dataset_val,
dataset_test
)
The same can be done using a CNN, for example CnnClassification. Here a dataset instance of type GridDataset is required.
from deeprank2.trainer import Trainer
from deeprank2.neuralnets.cnn.model3d import CnnClassification
trainer = Trainer(
CnnClassification,
dataset_train,
dataset_val,
dataset_test
)
By default, the Trainer class creates the folder ./output for storing prediction information collected during training and testing. HDF5OutputExporter is the exporter used by default, but the user can specify any other implemented exporter or implement a custom one.
The optimizer (torch.optim.Adam by default) and the loss function can be defined using dedicated functions:
import torch
trainer.configure_optimizers(torch.optim.Adamax, lr = 0.001, weight_decay = 1e-04)
Then the Trainer can be trained and tested. The best model in terms of validation loss is saved by default; the user can change this behavior, or indicate where to save the model, via the filename parameter of the train() method.
trainer.train(
nepoch = 50,
batch_size = 64,
validate = True,
filename = "<my_folder/model.pth.tar>")
trainer.test()
Run a pre-trained model on new data
If you want to analyze new PDB files using a pre-trained model, the first step is to process and save them into HDF5 files as we have done above.
Then, the DeeprankDataset instance for the newly processed data can be created. Do this by specifying the path of the pre-trained model in train_source, together with the path to the HDF5 files just created. Note that there is no need to set the dataset's parameters, since they are inherited from the information saved in the pre-trained model. Let's suppose that the model has been trained with GraphDataset objects:
from deeprank2.dataset import GraphDataset
dataset_test = GraphDataset(
hdf5_path = "<output_folder>/<prefix_for_outputs>",
train_source = "<pretrained_model_path>"
)
Finally, the Trainer instance can be defined and the new data can be tested:
from deeprank2.trainer import Trainer
from deeprank2.neuralnets.gnn.naive_gnn import NaiveNetwork
from deeprank2.utils.exporters import HDF5OutputExporter
trainer = Trainer(
NaiveNetwork,
dataset_test = dataset_test,
pretrained_model = "<pretrained_model_path>",
output_exporters = [HDF5OutputExporter("<output_folder_path>")]
)
trainer.test()
For more details about how to run a pre-trained model on new data, see the docs.
Computational performances
We measured the efficiency of data generation in DeepRank2 using the tutorials' PDB files (~100 data points per data set), averaging the results of runs on an Apple M1 Pro, using a single CPU.
Parameter settings were: atomic resolution, distance_cutoff of 5.5 Å, radius (for SRVs only) of 10 Å. The feature modules used were components, contact, exposure, irc, secondary_structure, and surfacearea, for a total of 33 features for PPIs and 26 for SRVs (the latter do not use irc features).
|  | Data processing speed [seconds/structure] | Memory [megabyte/structure] |
| --- | --- | --- |
| PPIs | graph only: 2.99 (std 0.23); graph+grid: 11.35 (std 1.30) | graph only: 0.54 (std 0.07); graph+grid: 16.09 (std 0.44) |
| SRVs | graph only: 2.20 (std 0.08); graph+grid: 2.85 (std 0.10) | graph only: 0.05 (std 0.01); graph+grid: 17.52 (std 0.59) |
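These per-structure averages can be scaled to estimate the cost of a larger run. A back-of-envelope sketch (the dataset size below is hypothetical, and real timings will vary with hardware and structure size):

```python
# Rough estimate for processing a hypothetical dataset of 10,000 PPI
# structures in graph+grid mode, using the measured averages above.
n_structures = 10_000
seconds_per_structure = 11.35  # PPIs, graph+grid
mb_per_structure = 16.09       # PPIs, graph+grid

hours = n_structures * seconds_per_structure / 3600
gigabytes = n_structures * mb_per_structure / 1000

print(f"~{hours:.1f} CPU-hours, ~{gigabytes:.0f} GB of HDF5 output")
# → ~31.5 CPU-hours, ~161 GB of HDF5 output
```

For datasets of this size, processing locally (possibly in parallel) rather than in the Docker container is clearly preferable.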
Package development
If you're looking for developer documentation, go here.