
Deep learning with uncertainty for general purpose, chemistry, and taxonomic tasks.


🧐 duvidnn


duvidnn is a suite of Python tools for calculating confidence and information metrics for deep learning models. It provides a higher-level framework covering general-purpose, taxonomic, and chemistry-specific neural networks.

As a bonus, duvidnn also provides an easy command-line interface for training and testing models.

Installation

The easy way

You can install the released version directly from PyPI using pip.

$ pip install duvidnn

If you want to use duvidnn for chemistry machine learning and AI, use:

$ pip install duvidnn[chem]

For integrating taxonomic information with vectome, use:

$ pip install duvidnn[bio]

You can install both:

$ pip install duvidnn[bio,chem]

From source

Clone the repository and cd into it, then run:

$ pip install -e .

Command-line interface

duvidnn has a command-line interface for training and checkpointing the built-in models.

$ duvidnn --help
usage: duvidnn [-h] [--version] {hyperprep,train,predict,split,percentiles} ...

Calculating exact and approximate confidence and information metrics for deep learning on general purpose and chemistry tasks.

options:
  -h, --help            show this help message and exit
  --version, -v         show program's version number and exit

Sub-commands:
  {hyperprep,train,predict,split,percentiles}
                        Use these commands to specify the tool you want to use.
    hyperprep           Prepare inputs for hyperparameter search.
    train               Train a PyTorch model.
    predict             Make predictions and calculate uncertainty using a duvidnn checkpoint.
    split               Make chemical train-test-val splits on out-of-core datasets.
    percentiles         Add columns indicating whether rows are in a percentile.

In all cases, you can get further options with duvidnn <command> --help, for example:

duvidnn train --help

Annotating top percentiles

You can add columns to a dataset annotating whether each row falls in the top percentiles of named columns. This works even for extremely large datasets that don't fit in memory.

$ duvidnn percentiles \
    hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train \
    --columns clogp tpsa \
    --percentiles 1 5 10 \
    --output percentiles.parquet \
    --plot percentiles-plot.png \
    --structure smiles

In all cases, input data can be:

  • Path to a local file in CSV, Parquet, Arrow or HF Dataset format
  • or a remote dataset hosted on 🤗 Datasets, indicated by hf:// followed by the repository name
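Conceptually, annotating a top percentile means flagging whether a row's value falls among the highest p% of its column. A minimal pure-Python sketch of that logic (illustrative only; the duvidnn implementation streams out-of-core data rather than sorting in memory):

```python
def top_percentile_flags(values, percentiles=(1, 5, 10)):
    """Flag whether each value is in the top p% of the column."""
    ranked = sorted(values, reverse=True)
    flags = {}
    for p in percentiles:
        # Number of rows that make up the top p% (at least one)
        k = max(1, round(len(ranked) * p / 100))
        threshold = ranked[k - 1]
        flags[p] = [v >= threshold for v in values]
    return flags

clogp = [1.2, 3.4, 0.5, 2.8, 4.1, 1.9, 3.0, 2.2, 0.9, 3.7]
flags = top_percentile_flags(clogp)
# Exactly one of the ten values (4.1) lands in the top 10%
```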

Data splitting

There are utilities for scaffold and spectral (approximate, using FAISS) splitting of datasets that don't fit in memory. Pass --seed to make the split random but reproducible; otherwise a deterministic bin-packing algorithm is used.

$ duvidnn split hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train \
    --train .7 \
    --validation .15 \
    --structure smiles \
    --type faiss \
    --seed 1 \
    --output faiss.csv \
    --plot faiss.png
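The deterministic bin-packing idea can be illustrated in miniature: group rows (e.g. by scaffold), sort the groups largest-first, and greedily assign each whole group to the split furthest below its target fraction. This is a toy sketch of that strategy, not duvidnn's exact algorithm:

```python
def greedy_group_split(group_sizes, fractions={"train": 0.7, "val": 0.15, "test": 0.15}):
    """Assign whole groups to splits, largest first, keeping fractions balanced."""
    total = sum(group_sizes.values())
    assigned = {name: 0 for name in fractions}
    out = {}
    for group, size in sorted(group_sizes.items(), key=lambda kv: -kv[1]):
        # Pick the split with the largest remaining deficit relative to its target
        split = max(fractions, key=lambda s: fractions[s] * total - assigned[s])
        out[group] = split
        assigned[split] += size
    return out

# Hypothetical scaffold group sizes summing to 100 rows
sizes = {"scaffold_a": 50, "scaffold_b": 20, "scaffold_c": 15, "scaffold_d": 10, "scaffold_e": 5}
assignment = greedy_group_split(sizes)
```

Because whole groups are assigned, no scaffold is shared between splits, which is the point of scaffold splitting.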

Model training and evaluation

To train:

$ duvidnn train -1 hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train \
    -2 hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test \
    --class fingerprint \
    --structure smiles \
    --ensemble-size 10 \
    --epochs 10 \
    --learning-rate 0.001 \
    --output model.dv

Different model classes can be specified with --class: fingerprint, chemprop, mlp, cnn, bilinear, or bilinear-fp.

Hyperparameters

There is also a simple hyperparameter utility.

$ printf '{"model_class": "fingerprint", "use_2d": [true, false], "n_units": 16, "n_hidden": 3}' | duvidnn hyperprep -o hyperopt.json

This generates a file containing all combinations. It can be indexed (0-based) with the -i <int> option to supply a specific training configuration like so:

$ duvidnn train \
    -1 hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train \
    -2 hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test \
    -c hyperopt.json \
    -i 0 \
    --output model.dv

In this way, you can generate all the hyperparameter combinations, then systematically test them one by one (or in parallel using HPC or other methods).
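The expansion itself is just a Cartesian product over any list-valued fields. A plain-Python sketch of the equivalent logic (duvidnn's hyperprep may differ in detail):

```python
from itertools import product

def expand_grid(config):
    """Expand list-valued fields into all hyperparameter combinations."""
    keys = list(config)
    # Wrap scalar values so every field is a list of candidates
    choices = [v if isinstance(v, list) else [v] for v in config.values()]
    return [dict(zip(keys, combo)) for combo in product(*choices)]

grid = expand_grid({
    "model_class": "fingerprint",
    "use_2d": [True, False],
    "n_units": 16,
    "n_hidden": 3,
})
# grid has two entries: one with use_2d=True, one with use_2d=False
```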

Predictions

You can make predictions on datasets using duvidnn predict. Optionally, you can restrict prediction to a chunk of the dataset using --start and --end. This can be useful for parallelizing prediction across chunks.

When predicting, there is also the option to calculate uncertainty metrics like ensemble variance (--variance), Tanimoto nearest neighbor distance to training set (--tanimoto, for chemistry models), doubtscore (--doubtscore), and information sensitivity (--information-sensitivity).

$ duvidnn predict \
    --test hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test \
    --checkpoint model.dv \
    --start 100 \
    --end 200 \
    --variance \
    --tanimoto \
    --doubtscore \
    -y clogp \
    --output predictions.parquet

Outputs can be made in CSV, Parquet, Arrow, or HF Dataset format. This is inferred from the file extension of the filename provided for --output.
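Two of these metrics are simple to state: ensemble variance is the per-sample variance of predictions across ensemble members, and the Tanimoto nearest-neighbor distance is one minus the highest Tanimoto similarity between a query fingerprint and any training-set fingerprint. A self-contained sketch on toy data (bit sets standing in for chemical fingerprints; not duvidnn's internals):

```python
def ensemble_variance(predictions):
    """Per-sample variance across ensemble members (rows = members)."""
    n = len(predictions)
    out = []
    for sample in zip(*predictions):
        mean = sum(sample) / n
        out.append(sum((p - mean) ** 2 for p in sample) / n)
    return out

def tanimoto_nn_distance(query_bits, train_fps):
    """1 - max Tanimoto similarity between a query and the training set."""
    best = max(len(query_bits & fp) / len(query_bits | fp) for fp in train_fps)
    return 1.0 - best

preds = [[0.9, 2.0], [1.1, 2.4], [1.0, 2.2]]   # 3 ensemble members, 2 samples
variances = ensemble_variance(preds)

train = [{1, 2, 3, 4}, {2, 3, 5}]
dist = tanimoto_nn_distance({1, 2, 3}, train)  # nearest neighbor is {1, 2, 3, 4}
```

High ensemble variance or a large Tanimoto distance both suggest the model is extrapolating beyond its training evidence.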

Note that information sensitivity using default parameters can be very slow for large models with large training data, since it must calculate second-order parameter gradients for every training example. There are approximations which can speed it up substantially, at the cost of exactness:

  • The --last-layer option gives the biggest speed-up, since it restricts the calculation to only the output layer of the model.
  • Using --optimality assumes the model has been trained to an optimum (i.e. gradient of loss is zero).
  • The --approx bekas option uses a fast approximation of second-order gradients.
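The bekas option refers to the stochastic diagonal estimator of Bekas et al.: for a matrix H, diag(H) is approximated by averaging v ⊙ Hv over random ±1 (Rademacher) vectors v, so only matrix-vector products are needed rather than the full matrix. A toy demonstration on an explicit symmetric matrix standing in for a Hessian (illustrative only):

```python
import random

def bekas_diagonal(matvec, dim, n_samples=2000, seed=0):
    """Estimate diag(H) as the average of v * (H @ v) over Rademacher vectors v."""
    rng = random.Random(seed)
    est = [0.0] * dim
    for _ in range(n_samples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(dim)]
        hv = matvec(v)
        for i in range(dim):
            est[i] += v[i] * hv[i]
    return [e / n_samples for e in est]

# Small symmetric matrix with known diagonal [4.0, 3.0, 2.0]
H = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 2.0]]
matvec = lambda v: [sum(H[i][j] * v[j] for j in range(3)) for i in range(3)]
diag_estimate = bekas_diagonal(matvec, dim=3)  # close to [4.0, 3.0, 2.0]
```

The estimator is unbiased because the off-diagonal cross terms vanish in expectation; its error shrinks with the number of samples.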

Python API

duvidnn provides python classes and functions for custom analysis.

Neural networks

The core of duvidnn is the ModelBox, which is a container for a trainable model and its training data. These are connected because measures of confidence and information gain depend directly on the information or evidence already seen by the model.

There are several ModelBox classes for specific deep learning architectures in PyTorch.

>>> from duvidnn.autoclass import MODELBOX_REGISTRY
>>> from pprint import pprint
>>> pprint(MODELBOX_REGISTRY)
{'bilinear': <class 'duvidnn.torch.modelbox.modelboxes.TorchBilinearModelBox'>,
 'bilinear-fp': <class 'duvidnn.torch.modelbox.modelboxes.TorchBilinearFingerprintModelBox'>,
 'chemprop': <class 'duvidnn.torch.modelbox.modelboxes.ChempropModelBox'>,
 'cnn': <class 'duvidnn.torch.modelbox.modelboxes.TorchCNN2DModelBox'>,
 'fingerprint': <class 'duvidnn.torch.modelbox.modelboxes.TorchFingerprintModelBox'>,
 'mlp': <class 'duvidnn.torch.modelbox.modelboxes.TorchMLPModelBox'>}

The modelboxes chemprop, fingerprint, and bilinear-fp featurize SMILES representations of chemical structures. The modelbox mlp is a general purpose multilayer perceptron.

You can set up your model with various training parameters.

from duvidnn.autoclass import AutoClass
modelbox = AutoClass(
    "fingerprint",
    n_units=16,
    n_hidden=2,
    ensemble_size=10,
    structure_column="smiles",
)

The internal neural network is instantiated on loading training data.

modelbox.load_training_data(
    data="hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train",
    inputs="smiles", # column name of the predictor values
    labels="clogp",  # column name of the values to predict
)

The data can be a remote 🤗 dataset, in which case it is automatically downloaded. The "@" indicates the dataset configuration, and the ":" indicates the specific data split.
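That convention can be illustrated with a tiny parser (a hypothetical helper for exposition, not part of the duvidnn API):

```python
def parse_hf_path(path):
    """Split 'hf://repo@config:split' into its parts (config and split optional)."""
    assert path.startswith("hf://")
    rest = path[len("hf://"):]
    repo, _, tail = rest.partition("@")
    config, _, split = tail.partition(":")
    return {"repo": repo, "config": config or None, "split": split or None}

parsed = parse_hf_path("hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train")
# {'repo': 'scbirlab/fang-2023-biogen-adme', 'config': 'scaffold-split', 'split': 'train'}
```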

Alternatively, the training data can be a local CSV or TSV file, or in-memory Pandas dataframes or dictionaries.

With training data loaded, the model can be trained!

modelbox.train(
    val_filename="hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test",
    epochs=10,
    batch_size=128,
)

The ModelBox.train() method uses PyTorch Lightning under the hood, so options for that framework, such as callbacks, should also be accepted.

Saving and sharing a trained model

duvidnn provides a basic checkpointing mechanism to save model weights and training data for later reloading.

modelbox.save_checkpoint("checkpoint.dv")
modelbox.load_checkpoint("checkpoint.dv")

Evaluating and predicting on new data

duvidnn ModelBoxes provide methods for evaluating predictions on new data.

predictions, metrics = modelbox.evaluate(
    data="hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test",
)

Calculating uncertainty and information metrics

duvidnn ModelBoxes provide methods for calculating prediction variance of ensembles, doubtscore, and information sensitivity.

doubtscore = modelbox.doubtscore(
    data="hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test"
)
info_sens = modelbox.information_sensitivity(
    data="hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test",
    approx="bekas",  # approximate Hessian diagonals
    n=10,
)

To avoid holding large datasets in memory, duvidnn uses 🤗 Datasets under the hood to cache data on disk. Results are returned as Dataset objects but can be brought into memory with a little effort. For example:

doubtscore = doubtscore.to_pandas()

See the 🤗 datasets documentation for more.

More advanced Python API: Implementing a new ModelBox

Bringing a new PyTorch model to duvidnn is relatively straightforward. First, write your model, adding the Lightning logic via the LightningMixin:

from typing import Callable

from torch import nn
from torch.optim import Adam, Optimizer

from duvidnn.torch.models.utils.lt import LightningMixin

class SimpleMLP(nn.Module, LightningMixin):

    def __init__(
        self, 
        n_input: int, 
        n_units: int = 16, 
        n_out: int = 1,
        activation: Callable = nn.SiLU,  # Smooth activation to prevent vanishing gradients
        learning_rate: float = .01,
        optimizer: Optimizer = Adam,
        *args, **kwargs
    ):
        super().__init__(*args, **kwargs)
        self.n_input = n_input
        self.n_units = n_units
        self.activation = activation
        self.n_out = n_out
        self.model_layers = nn.Sequential(
            nn.Linear(self.n_input, self.n_units),
            self.activation(),
            nn.Linear(self.n_units, self.n_out),
        )
        # Lightning logic
        self._init_lightning(
            optimizer=optimizer, 
            learning_rate=learning_rate, 
            model_attr='model_layers',  # the attribute containing the model
        )

    def forward(self, x):
        return self.model_layers(x)

Then subclass duvidnn.torch.modelbox.TorchModelBoxBase and implement the create_model() method, which should simply return your instantiated model. If you want to preprocess input data on the fly, then add a preprocess_data() method which takes a data dictionary and returns a data dictionary.

from typing import Dict

from duvidnn.torch.modelbox import TorchModelBoxBase
import numpy as np

class MLPModelBox(TorchModelBoxBase):
    
    def __init__(self, *args, **kwargs):
        super().__init__()
        self._mlp_kwargs = kwargs

    def create_model(self, *args, **kwargs):
        self._model_config.update(kwargs)  # makes sure model checkpointing saves the keyword args
        return SimpleMLP(
            *args,
            n_input=self.input_shape[-1],  # defined on data loading
            n_out=self.output_shape[-1],
            **self._model_config,
            **self._mlp_kwargs,  # if init kwargs are relevant to model creation
        )

    # Define this method if your data needs preprocessing
    @staticmethod
    def preprocess_data(data: Dict[str, np.ndarray], _in_key, _out_key, **kwargs) -> Dict[str, np.ndarray]:
        # your_featurizer is a placeholder for whatever featurization your inputs need
        return {
            _in_key: your_featurizer(data[_in_key]), 
            _out_key: np.asarray(data[_out_key])
        }

If you want to build ModelBoxes based on a framework other than pytorch, you can subclass the duvidnn.base.ModelBoxBase abstract class, making sure to implement its abstract methods.

Issues, problems, suggestions

Add to the issue tracker.

Documentation

(To come at ReadTheDocs.)
