
DeepBGC: Biosynthetic Gene Cluster detection and classification

DeepBGC detects biosynthetic gene clusters (BGCs) in bacterial and fungal genomes using deep learning. DeepBGC employs a Bidirectional Long Short-Term Memory (BiLSTM) recurrent neural network and a word2vec-like vector embedding of Pfam protein domains. The product class and activity of detected BGCs are predicted using a Random Forest classifier.


[Figure: DeepBGC architecture]

📌 News 📌

  • DeepBGC 0.1.23: Predicted BGCs can now be uploaded for visualization in antiSMASH using a JSON output file
    • Install and run DeepBGC as usual, following the instructions below
    • Upload antismash.json from the DeepBGC output folder using "Upload extra annotations" on the antiSMASH page
    • Predicted BGC regions and their prediction scores will be displayed alongside antiSMASH BGCs

Publications

A deep learning genome-mining strategy for biosynthetic gene cluster prediction
Geoffrey D Hannigan, David Prihoda et al., Nucleic Acids Research, gkz654, https://doi.org/10.1093/nar/gkz654

Install using conda (recommended)

  • Install Bioconda by following Steps 1 and 2 from: https://bioconda.github.io/
  • Run conda install deepbgc to install DeepBGC and all of its dependencies

Install using pip (if conda is not available)

If you don't mind installing the HMMER and Prodigal dependencies manually, you can also install DeepBGC using pip:
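pip install deepbgc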

Use DeepBGC

Download models and Pfam database

Before you can use DeepBGC, download the trained models and the Pfam database:

deepbgc download

You can display downloaded dependencies and models using:

deepbgc info

Detection and classification

[Figure: DeepBGC pipeline]

Detect and classify BGCs in a genomic sequence. Proteins and Pfam domains are detected automatically if not already annotated (HMMER and Prodigal are required).

# Show command help docs
deepbgc pipeline --help

# Detect and classify BGCs in mySequence.fa using the default DeepBGC detector.
deepbgc pipeline mySequence.fa

# Detect and classify BGCs in mySequence.fa using a custom DeepBGC detector trained on your own data.
deepbgc pipeline --detector path/to/myDetector.pkl mySequence.fa

This will produce a mySequence directory with multiple files and a README.txt with file descriptions.
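For orientation, you can list the generated files and read their descriptions (a minimal sketch; the exact file set depends on the DeepBGC version):

# List the output files and show the generated file descriptions
ls mySequence/
cat mySequence/README.txt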

See Train DeepBGC on your own data section below for more information about training a custom detector or classifier.

Example output

See the DeepBGC Example Result Notebook. Data can be downloaded from the releases page.

[Figure: Detected BGC regions]

Train DeepBGC on your own data

You can train your own BGC detection and classification models, see deepbgc train --help for documentation and examples.

Training and validation data can be found in release 0.1.0 and release 0.1.5.

If you have any questions about using or training DeepBGC, feel free to submit an issue.

Preparing training data

The training examples need to be prepared in Pfam TSV format, which can be generated from your sequence using deepbgc prepare. You will need to add an in_cluster column containing 0 for Pfam domains outside a BGC and 1 for Pfam domains inside a BGC. We recommend preparing a separate negative TSV and a separate positive TSV file, where the column is equal to all 0s or all 1s respectively, as sketched below. A sequence_id column should also be added to identify a contiguous sequence of Pfam domains coming from a single sample (a BGC or a negative sequence). Samples are shuffled during training so that the model sees positive and negative samples in random order; Pfam domains with the same sequence_id value are kept together.
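As an illustration, the labeling step can be done with a standard awk one-liner (a sketch; the file names are illustrative, and the TSV files are assumed to start with a header row as produced by deepbgc prepare):

# Append an in_cluster column: 1 for every row of the positive TSV,
# 0 for every row of the negative TSV (file names are illustrative)
awk 'BEGIN{FS=OFS="\t"} NR==1{print $0,"in_cluster"; next} {print $0,1}' \
    positives.pfam.tsv > positives.labeled.pfam.tsv
awk 'BEGIN{FS=OFS="\t"} NR==1{print $0,"in_cluster"; next} {print $0,0}' \
    negatives.pfam.tsv > negatives.labeled.pfam.tsv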

! New in version 0.1.17 ! You can now prepare a Pfam TSV file directly from protein FASTA sequences using deepbgc prepare --protein.

JSON model training template files

DeepBGC uses JSON template files to define the model architecture and training parameters. All templates can be downloaded from release 0.1.0.

The JSON template for the DeepBGC LSTM detector with pfam2vec is structured as follows:

{
  "type": "KerasRNN", - Model architecture (KerasRNN/DiscreteHMM/GeneBorderHMM)
  "build_params": { - Parameters for model architecture
    "batch_size": 16, - Number of splits of training data that is trained in parallel 
    "hidden_size": 128, - Size of vector storing the LSTM inner state
    "stateful": true - Remember previous sequence when training next batch
  },
  "fit_params": {
    "timesteps": 256, - Number of pfam2vec vectors trained in one batch
    "validation_size": 0, - Fraction of training data to use for validation (if validation data is not provided explicitly). Use 0.2 for 20% data used for testing.
    "verbose": 1, - Verbosity during training
    "num_epochs": 1000, - Number of passes over your training set during training. You probably want to use a lower number if not using early stopping on validation data.
    "early_stopping" : { - Stop model training when at certain validation performance
      "monitor": "val_auc_roc", - Use validation AUC ROC to observe performance
      "min_delta": 0.0001, - Stop training when the improvement in the last epochs did not improve more than 0.0001
      "patience": 20, - How many of the last epochs to check for improvement
      "mode": "max" - Stop training when given metric stops increasing (use "min" for decreasing metrics like loss)
    },
    "shuffle": true, - Shuffle samples in each epoch. Will use "sequence_id" field to group pfam vectors belonging to the same sample and shuffle them together 
    "optimizer": "adam", - Optimizer algorithm
    "learning_rate": 0.0001, - Learning rate
    "weighted": true - Increase weight of less-represented class. Will give more weight to BGC training samples if the non-BGC set is larger.
  },
  "input_params": {
    "features": [ - Array of features to use in model, see deepbgc/features.py
      {
        "type": "ProteinBorderTransformer" - Add two binary flags for pfam domains found at beginning or at end of protein
      },
      {
        "type": "Pfam2VecTransformer", - Convert pfam_id field to pfam2vec vector using provided pfam2vec table
        "vector_path": "#{PFAM2VEC}" - PFAM2VEC variable is filled in using command line argument --config
      }
    ]
  }
}
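A template like this can then be passed to deepbgc train. The invocation below is a sketch: the file names are illustrative, and the exact flags should be checked against deepbgc train --help. The PFAM2VEC placeholder in the template is filled in via --config, as noted above:

# Sketch: train a custom LSTM detector from labeled Pfam TSV files
# (illustrative file names; see deepbgc train --help for exact options)
deepbgc train \
    --model myDetector.json \
    --config PFAM2VEC path/to/pfam2vec.csv \
    --output myDetector.pkl \
    positives.labeled.pfam.tsv negatives.labeled.pfam.tsv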

The JSON template for the Random Forest classifier is structured as follows:

{
  "type": "RandomForestClassifier", - Type of classifier (RandomForestClassifier)
  "build_params": {
    "n_estimators": 100, - Number of trees in random forest
    "random_state": 0 - Random seed used to get same result each time
  },
  "input_params": {
    "sequence_as_vector": true, - Convert each sample into a single vector
    "features": [
      {
        "type": "OneHotEncodingTransformer" - Convert each sequence of Pfams into a single binary vector (Pfam set)
      }
    ]
  }
}
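The classifier template is used the same way (again a sketch with illustrative file names; the expected format of the labeled classification samples is documented under deepbgc train --help):

# Sketch: train a custom Random Forest classifier
# (illustrative file names; see deepbgc train --help for the input format)
deepbgc train \
    --model myClassifier.json \
    --output myClassifier.pkl \
    myLabeledSamples.csv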

Using your trained model

Since version 0.1.10, you can provide a direct path to the detector or classifier model like so:

deepbgc pipeline \
    mySequence.fa \
    --detector path/to/myDetector.pkl \
    --classifier path/to/myClassifier.pkl 
