
A scoring, benchmarking and evaluation framework for goal-directed generative models


MolScore: A scoring and evaluation framework for de novo drug design


Overview

The aim of this codebase is to simply and flexibly automate the scoring of de novo compounds from generative models via the molscore subpackage, and to facilitate downstream evaluation via the moleval subpackage. An objective is defined in a JSON file, which can be shared to propose new benchmark objectives or to specify multi-parameter objectives for drug design.

Custom scoring functions can be implemented following the guidelines here
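
The exact interface to follow is described in those guidelines. As a rough, hypothetical sketch only (the class layout, prefix argument, and returned dictionary keys below are assumptions for illustration, not the documented API), a custom scoring function is essentially a callable that maps a list of SMILES to per-molecule scores:

from typing import Dict, List

class NitrogenCount:
    """Hypothetical scoring function that counts nitrogen atoms in each SMILES string.
    Names and keys here are illustrative assumptions; see the guidelines for the exact interface."""

    def __init__(self, prefix: str = 'ncount'):
        self.prefix = prefix

    def __call__(self, smiles: List[str], **kwargs) -> List[Dict]:
        # Return one dictionary per molecule, containing the input SMILES and the metric value
        return [
            {'smiles': smi, f'{self.prefix}_N_atoms': smi.count('N') + smi.count('n')}
            for smi in smiles
        ]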

Contributions and/or ideas for added functionality are welcomed!

A description of this software:

Thomas, M., O'Boyle, N.M., Bender, A., de Graaf, C. MolScore: A scoring and evaluation framework for de novo drug design. chemRxiv (2023). https://doi.org/10.26434/chemrxiv-2023-c4867

This code was used in the following publications:

  1. Thomas, M., Smith, R.T., O’Boyle, N.M. et al. Comparison of structure- and ligand-based scoring functions for deep generative models: a GPCR case study. J Cheminform 13, 39 (2021). https://doi.org/10.1186/s13321-021-00516-0
  2. Thomas M, O'Boyle NM, Bender A, de Graaf C. Augmented Hill-Climb increases reinforcement learning efficiency for language-based de novo molecule generation. J Cheminform 14, 68 (2022). https://doi.org/10.1186/s13321-022-00646-z

Installation

Mamba should be used to install the molscore environment, as it resolves and installs environments considerably faster than conda. If you do not have mamba, first install this package manager following the instructions here.

git clone https://github.com/MorganCThomas/MolScore.git
cd MolScore
mamba env create -f environment.yml
mamba activate molscore
python setup.py develop

Note: Depending on whether you already have conda installed, you may have to use conda activate instead and point to the environment path directly, for example, conda activate ~/mambaforge/envs/molscore

Implementation into a generative model

Implementing molscore is as simple as importing it, instantiating it (pointing to the configuration file) and then scoring molecules. This should easily fit into most generative model pipelines.

from molscore import MolScore

# Instantiate MolScore, assign the model name and point to configuration file describing the objective
ms = MolScore(model_name='test', task_config='molscore/configs/QED.json')
              
# Score a list of SMILES strings - intended to be called inside the optimization loop of a generative model
SMILES = ['CCO', 'c1ccccc1', 'CC(=O)Nc1ccc(O)cc1']  # for example, molecules sampled from the generative model
scores = ms.score(SMILES)

# When the program exits, all recorded SMILES will be saved and the monitor app (if selected) will be closed

Note: Other MolScore parameters include output_dir, which overrides any output directory specified in the task_config.
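
For example (the output path below is purely illustrative):

from molscore import MolScore

# Write results to a custom directory, overriding the output directory in the task config
ms = MolScore(
    model_name='test',
    task_config='molscore/configs/QED.json',
    output_dir='./molscore_results',
)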

Alternatively, a budget can be set to specify the maximum number of molecules to score. Once the budget is reached, ms.finished is set to True, which can be checked to decide when to exit an optimization loop. For example,

from molscore import MolScore
ms = MolScore(model_name='test', task_config='molscore/configs/QED.json', budget=10000)
while not ms.finished:
    # < Sample SMILES from generative model >
    scores = ms.score(SMILES)

A benchmark mode is also available that iterates over a selection of tasks defined in config files, or over one of the pre-defined benchmarks packaged with MolScore, including GuacaMol, GuacaMol_Scaffold, MolOpt, 5HT2A_PhysChem, 5HT2A_Selectivity, 5HT2A_Docking, LibINVENT_Exp1, and LinkINVENT_Exp3.

from molscore import MolScoreBenchmark

# As an example, configs re-implementing GuacaMol are available as a preset benchmark, or custom tasks can be provided 
msb = MolScoreBenchmark(model_name='test', benchmark='GuacaMol', budget=10000)
for task in msb:
    # < Initialize generative model >
    while not task.finished:
        # < Sample smiles from generative model >
        scores = task.score(smiles)
        # < Update generative model >
# When the program exits, a summary of performance will be saved

Note: A generative language model with MolScore already implemented can be found here.

Usage

Here is a GIF demonstrating writing a config file with the help of the GUI, running MolScore in a mock example (scoring randomly sampled SMILES), and monitoring the output with another GUI.

[GIF: writing a config file with the GUI, running a mock example, and monitoring the output]

Once molscore has been implemented into a generative model, the objective needs to be defined. Writing a JSON file by hand is a pain, so a Streamlit app is provided to help. Simply call molscore_config from the command line (a simple wrapper around streamlit run molscore/gui/config.py).

[Screenshot: the config GUI opened with molscore_config]

Once the configuration file is saved, simply point to this file path and run de novo molecule optimization. If running with the monitor app, you'll be able to investigate molecules as they're being generated. Call molscore_monitor from the command line (a wrapper around streamlit run molscore/gui/monitor.py).

[Screenshot: the monitor GUI opened with molscore_monitor]

Functionality

Scoring functionality available in molscore is summarised below; some scoring functions require external software and the corresponding licenses. An illustrative sketch of the transformation and aggregation steps is given after the table.

| Type | Method |
| --- | --- |
| Docking | Glide, Smina, OpenEye, GOLD, PLANTS, rDock, Vina, Gnina |
| Ligand preparation | RDKit->Epik, Moka->Corina, Ligprep, Gypsum-DL |
| 3D Similarity | ROCS, Open3DAlign |
| 2D Similarity | Fingerprint similarity (any RDKit fingerprint and similarity measure), substructure match/filter, Applicability domain |
| Predictive models | Scikit-learn (classification/regression), PIDGINv5ᵃ, ChemProp |
| Synthesizability | RAscore, AiZynthFinder, SAscore, ReactionFilters (Scaffold decoration) |
| Descriptors | RDKit, Maximum consecutive rotatable bonds, Penalized LogP, LinkerDescriptors (Fragment linking), etc. |
| Transformation methods | Linear, linear threshold, step threshold, Gaussian |
| Aggregation methods | Arithmetic mean, geometric mean, weighted sum, product, weighted product, auto-weighted sum/product, Pareto front |
| Diversity filters | Unique, Occurrence, memory-assisted + ScaffoldSimilarityECFP |

ᵃ PIDGINv5 is a suite of pre-trained RF classifiers on ~2,300 ChEMBL31 targets
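
The transformation and aggregation methods listed above map raw scoring-function outputs onto [0, 1] and combine them into a single reward. The snippet below is only an illustrative sketch of two such operations (a Gaussian transformation and a weighted geometric mean), not molscore's internal implementation:

import numpy as np

def gaussian_transform(x, mu, sigma):
    # Map a raw value to (0, 1], peaking when x equals mu (illustrative only)
    return float(np.exp(-0.5 * ((x - mu) / sigma) ** 2))

def weighted_geometric_mean(scores, weights):
    # Combine per-objective scores in [0, 1] into a single reward
    scores = np.clip(np.asarray(scores, dtype=float), 1e-6, 1.0)
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    return float(np.exp(np.sum(weights * np.log(scores))))

# Example: a molecular weight of 420 scored against a target of ~400, combined with a second score of 0.7
mw_score = gaussian_transform(420, mu=400, sigma=50)
reward = weighted_geometric_mean([mw_score, 0.7], weights=[1, 2])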

Performance metrics available in moleval, many of which are taken from GuacaMol or MOSES, are listed below; a minimal sketch of two intrinsic metrics follows the footnotes.

| Type | Metric |
| --- | --- |
| Intrinsic property | Validity, Uniqueness, Scaffold uniqueness, Internal diversity (1 & 2), Sphere exclusion diversityᵇ, Solow Polasky diversityᵍ, Scaffold diversity, Functional group diversityᶜ, Ring system diversityᶜ, Filters (MCF & PAINS), Purchasabilityᵈ |
| Extrinsic propertyᵃ | Novelty, FCD, Analogue similarityᵉ, Analogue coverageᵇ, Functional group similarity, Ring system similarity, Single nearest neighbour similarity, Fragment similarity, Scaffold similarity, Outlier bits (Silliness)ᶠ, Wasserstein distance (LogP, SA Score, NP score, QED, Weight) |

ᵃ In reference to a specified external dataset
ᵇ As in our previous work here
ᶜ Adaptation based on Zhang et al.
ᵈ Using molbloom
ᵉ Similar to Blaschke et al.
ᶠ Based on SillyWalks by Pat Walters
ᵍ Based on Liu et al.
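
As a rough illustration of the intrinsic-property metrics above, validity and uniqueness can be computed with RDKit along the following lines (a minimal sketch, not moleval's implementation):

from rdkit import Chem

def validity_and_uniqueness(smiles):
    # Validity: fraction of SMILES that parse; uniqueness: fraction of unique canonical SMILES among the valid ones
    mols = [Chem.MolFromSmiles(smi) for smi in smiles]
    canonical = [Chem.MolToSmiles(mol) for mol in mols if mol is not None]
    validity = len(canonical) / len(smiles) if smiles else 0.0
    uniqueness = len(set(canonical)) / len(canonical) if canonical else 0.0
    return validity, uniqueness

validity, uniqueness = validity_and_uniqueness(['CCO', 'CCO', 'c1ccccc1', 'not_a_smiles'])
print(validity, uniqueness)  # 0.75 and ~0.67 for this toy list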

Parallelisation

Most of the scoring functions implemented can be parallelised over multiple CPUs using Python's multiprocessing, simply by specifying the n_jobs parameter. Some more computationally expensive scoring functions, such as molecular docking, are parallelised using Dask to allow distributed parallelisation across compute nodes (the cluster parameter). Either supply the number of CPUs to utilise on a single compute node, or supply the address of a Dask scheduler set up via the Dask CLI.
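
For quick testing on a single machine, a scheduler address can also be obtained programmatically with standard dask.distributed (plain Dask usage, not a molscore API); for multi-node clusters, follow the CLI steps below:

from dask.distributed import LocalCluster

# Start a local scheduler plus workers; pass the printed address to the cluster parameter
cluster = LocalCluster(n_workers=4, threads_per_worker=1)
print(cluster.scheduler_address)  # e.g. tcp://127.0.0.1:8786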

To set up a Dask cluster, first start a scheduler by running the following (the scheduler address will be printed to the terminal):

mamba activate <env>
dask scheduler

Now, to start workers across multiple nodes, simply SSH to a connected node and run

mamba activate <env>
dask worker <scheduler_address> --nworkers <n_jobs> --nthreads 1

Repeat this for each node you wish to add to the cluster (ensure the conda environment and any other dependencies are loaded as you would normally). Then modify the config so that cluster: <scheduler_address>.

Optional: You may not want to keep editing this parameter in the config file, so environment variables can be set that override anything provided in the config. To do this, export either of the following variables before running MolScore:

export MOLSCORE_NJOBS=<n_jobs>
export MOLSCORE_CLUSTER=<scheduler_address>

Note: It is recommended not to use more workers than the number of physical cores available on a particular machine. For example, on a machine with 12 logical cores (6 physical cores, hyperthreaded), no more than 6 workers are recommended, as more may overload the CPU.

Tests

Some unit tests are available.

cd molscore/tests
python -m unittest

Or run any individual test, for example

python test_docking.py

Or, you can test a configuration file, for example

python test_configs.py <path to config1> <path to config2> <path to dir of configs>
