AutoPeptideML

AutoML system for building trustworthy peptide bioactivity predictors

Tutorials · GitHub · Open In Colab

AutoPeptideML allows researchers without prior knowledge of machine learning to build models that are:

  • Trustworthy: Robust evaluation following the DOME community guidelines for reporting ML evaluation in the life sciences.
  • Interpretable: The output includes a PDF summary of the model evaluation that explains how to interpret the results and assess how reliable the model is.
  • Reproducible: Output contains all necessary information for other researchers to reproduce the training and verify the results.
  • State-of-the-art: Models generated with this system are competitive with state-of-the-art handcrafted approaches.

We recommend the Google Colaboratory notebook for users who want greater control of the model training process and the webserver for users who prefer ease of use.


Installation

Installing in a conda environment is recommended. To create the environment, run:

conda create -n autopeptideml python
conda activate autopeptideml

1. Python Package

1.1. From PyPI

pip install autopeptideml

1.2. Directly from source

pip install git+https://github.com/IBM/AutoPeptideML

2. Preparing AutoPeptideML-Peptipedia Database

Download and prepare the Peptipedia Database by running:

autopeptideml-setup

If an error occurs during the download, it may be caused by an outdated version of the gdown library. Upgrade to the latest version by running:

pip install --upgrade --no-cache-dir gdown

3. Third-party dependencies

To use MMSeqs2 as the alignment algorithm, it is necessary to install it in the environment:

conda install -c bioconda mmseqs2

To use the Needleman-Wunsch algorithm (via EMBOSS):

conda install -c bioconda emboss

If you are not installing within a conda environment, please check the installation instructions for your platform:

  • Linux:

    wget https://mmseqs.com/latest/mmseqs-linux-avx2.tar.gz
    tar xvfz mmseqs-linux-avx2.tar.gz
    export PATH=$(pwd)/mmseqs/bin/:$PATH
    
    sudo apt install emboss
    
  • Windows: Download binaries from EMBOSS and MMSeqs2-latest

  • Mac:

    sudo port install emboss
    brew install mmseqs2
    

Benchmark data

Data used to benchmark our approach has been selected from the benchmarks collected by Du et al., 2023. A new set of benchmarks was constructed from the original set following the new data acquisition and dataset partitioning methods within AutoPeptideML. To download the datasets:

  • Original UniDL4BioPep Benchmarks: Please check the project Github Repository.
  • New AutoPeptideML Benchmarks: Can be downloaded from this link.

Documentation

1. Model builder options

Dataset construction

  • dataset: File with positive peptides in FASTA or CSV format. It can also contain negative peptides, in which case the file should contain the labels (0: negative or 1: positive) either in the header (FASTA) or in column Y (CSV); see the example after this list.
  • --balance: If True, it balances the datasets by oversampling the underrepresented label.
  • --autosearch: If True, it searches for negative peptides.
  • --autosearch_tags: Comma separated list of tags that may overlap with positive activity that are going to be excluded from the negative peptides.
  • --autosearch_proportion: Negative:positive ratio when automatically drawing negative controls from the bioactive peptides database (Default: 1.0).
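For illustration, a minimal CSV input containing both positive and negative peptides might look like the sketch below; the Y label column is documented above, while the sequence column name and the peptides themselves are illustrative assumptions:

sequence,Y
FLPIIAKLLSGLL,1
GIGKFLHSAKKFGKAFVGEIMNS,1
AAQLTEEQIAEFKEAF,0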

Output

  • --outputdir: Output directory (Default: ./apml_result).

Protein Language Model

  • --plm: Protein Language Model for computing peptide representations. Available options: esm2-8m, esm2-35m, esm2-150m, esm2-650m, esm2-3b, esm2-15b, esm1b, prot-t5-xxl, prot-t5-xl, protbert, prost-t5. (Default: esm2-8m). Please note: larger models might not fit into GPU RAM; if one is necessary for your purposes, please create a new issue.
  • --plm_batch_size: Number of peptides per batch when computing PLM representations. (Default: 12). See the example below.
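For example, to compute representations with a larger model and bigger batches (both flags documented above; values illustrative):

autopeptideml dataset.csv --plm esm2-150m --plm_batch_size 32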

Dataset Partitioning

  • --test_partition: Whether to divide the dataset into train/test splits. (Default: True).
  • --test_threshold: Maximum sequence identity allowed between train and test. (Default: 0.3).
  • --test_size: Proportion of data to be assigned to evaluation set. (Default: 0.2).
  • --test_alignment: Alignment algorithm used for computing sequence identities. Available options: mmseqs, mmseqs+prefilter, needle. (Default: mmseqs+prefilter).
  • --splits: Path to directory with train and test splits. Expected contents: train.csv and test.csv.
  • --val_partition: Whether to divide the dataset into train/validation folds.
  • --val_method: Method used for creating train/validation folds. Available options: random, graph-part. (Default: random).
  • --val_threshold: Maximum sequence identity allowed between train and validation. (Default: 0.5).
  • --val_alignment: Alignment algorithm used for computing sequence identities. Available options: mmseqs, mmseqs+prefilter, needle. (Default: mmseqs+prefilter).
  • --val_n_folds: Number of folds (Default: 10).
  • --folds: Path to directory with train/validation folds. Expected contents: train_{fold}.csv and valid_{fold}.csv.
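Putting several of these options together in one run (all flags documented above; thresholds and fold counts are illustrative):

autopeptideml dataset.csv --test_threshold 0.3 --test_alignment mmseqs --val_method graph-part --val_n_folds 5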

Model Selection and Hyperparameter Optimisation

  • --config: Name of one of the pre-defined configuration files (see autopeptideml/data/configs) or path to a custom configuration file (see next section).
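For example, to point the builder at a custom configuration file (my_config.json is a placeholder path; the format is described in section 3 below):

autopeptideml dataset.csv --config my_config.json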

Other

  • --verbose: Whether to display information about runtime (Default: True).
  • --threads: Number of threads to use for parallelization. (Default: Number of cores in the machine).
  • --seed: Seed for pseudorandom number generators. Controls stochastic processes. (Default: 42)
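For instance, to fix the number of threads and the random seed for a reproducible run (both flags documented above):

autopeptideml dataset.csv --threads 8 --seed 1234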
2. Predict

  • dataset: File with problem peptides in FASTA or CSV format.
  • --ensemble: Path to a file containing a previous AutoPeptideML result.
  • --outputdir: Output directory (Default: ./apml_predictions).
  • --verbose: Whether to display information about runtime (Default: True).
  • --threads: Number of threads to use for parallelization. (Default: Number of cores in the machine).
  • --plm: Protein Language Model for computing peptide representations. Must be the same as used to train the model. Available options: esm2-8m, esm2-35m, esm2-150m, esm2-650m, esm2-3b, esm2-15b, esm1b, prot-t5-xxl, prot-t5-xl, protbert, prost-t5. (Default: esm2-8m). Please note: larger models might not fit into GPU RAM; if one is necessary for your purposes, please create a new issue.
  • --plm_batch_size: Number of peptides per batch when computing PLM representations. (Default: 12).
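A hypothetical prediction run is sketched below. Here apml_result is a placeholder for the output directory of a previous training run, and the autopeptideml-predict entry point is an assumption; please check the project repository for the exact command name:

autopeptideml-predict problem_peptides.csv --ensemble apml_result --outputdir my_predictions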
3. Hyperparameter optimisation and model selection

The experiment configuration is a file in JSON format describing the hyperparameter optimisation search space and the composition of the final ensemble. The first level of the file is a dictionary with a single key (ensemble or model_selection) mapping to a list of search spaces for the hyperparameter optimisation. For each model within the ensemble list, n different models will be trained, one per cross-validation fold; in the case of model_selection, only one of the algorithms will comprise the final ensemble.

Each experiment requires the following fields:

  • model: Defines the ML algorithm. Options: KNearestNeighbours, SVM, RFC, XGBoost, LGBM, MLP, and UniDL4BioPep. More options will be added in subsequent releases and they can be implemented upon request.
  • trials: Defines the number of iterations for the hyperparameter optimisation search.
  • optimization_metric: Defines the metric used to direct the optimisation search. The metric is always calculated as the average across the n cross-validation folds. All of the binary classification metrics listed in the scikit-learn documentation are supported (Default: Matthews correlation coefficient, MCC).
  • hyperparameter-space: List of dictionaries defining the hyperparameter search space proper. Each dictionary corresponds to a different hyperparameter and may have the following fields:
    • name: Has to match the name of the hyperparameter in the model implementation. Most of the simpler ML models use the scikit-learn implementation; LGBM uses the Microsoft implementation (more information in the LGBM repository) and UniDL4BioPep uses the PyTorch implementation (more information in the UniDL4BioPep PyTorch repository), though for this model hyperparameter optimisation is not recommended.
    • type: Defines the type of hyperparameter. Options: int, float, or categorical.
    • min and max: Defines the lower and upper bounds of the search space for types int and float.
    • log: Boolean value that defines whether the search should be done in logarithmic space. This accelerates searches through vast ranges, for example learning rates (1e-7 to 1). It is not optional.
    • value: Defines the list of options available for a hyperparameter of type categorical, for example the kernel types (linear, rbf, sigmoid) for a Support Vector Machine.
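As an illustrative sketch only (not the shipped default configuration): the field names follow the description above, the hyperparameter names follow the scikit-learn SVM implementation, and all values are assumptions:

{
    "model_selection": [
        {
            "model": "SVM",
            "trials": 30,
            "optimization_metric": "matthews_corrcoef",
            "hyperparameter-space": [
                {"name": "C", "type": "float", "min": 1e-3, "max": 1e3, "log": true},
                {"name": "kernel", "type": "categorical", "value": ["linear", "rbf", "sigmoid"]}
            ]
        }
    ]
}

Exact metric identifiers and sensible value ranges should be verified against the shipped defaults.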

There is an example available in the default configuration file.

4. API

Please check the Code reference documentation.

Examples

autopeptideml dataset.csv
autopeptideml dataset.csv --val_method graph-part --val_threshold 0.3 --val_alignment needle

License

AutoPeptideML is open-source software licensed under the MIT License. Check the details in the LICENSE file.

Credits

Special thanks to Silvia González López for designing the AutoPeptideML logo and to Marcos Martínez Galindo for his aid in setting up the AutoPeptideML webserver.
