
Implementation of the 'Gotta be SAFE: A New Framework for Molecular Design' paper


🦺 SAFE

Sequential Attachment-based Fragment Embedding (SAFE) is a novel molecular line notation that represents molecules as an unordered sequence of fragment blocks to improve molecule design using generative models.



Paper | Docs | 🤗 Model | 🤗 Training Dataset




Overview of SAFE

SAFE is a deep-learning-friendly molecular representation. It is an encoding that leverages a peculiarity of the SMILES decoding scheme to represent molecules as a contiguous sequence of connected fragments. SAFE strings are valid SMILES strings and therefore preserve the same amount of information. The intuitive representation of molecules as an ordered sequence of connected fragments greatly simplifies the following tasks, which are often encountered in molecular design:

  • de novo design
  • superstructure generation
  • scaffold decoration
  • motif extension
  • linker generation
  • scaffold morphing

The construction of a SAFE string requires defining a molecular fragmentation algorithm. By default, we use BRICS, but any other fragmentation algorithm can be used. The image below illustrates the process of building a SAFE string. The resulting string is a valid SMILES that can be read by datamol or RDKit.
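Because a SAFE string is itself a valid SMILES, you can sanity-check the encoder's output with plain RDKit. A minimal sketch (the example molecule is arbitrary and not taken from this page):

import safe
from rdkit import Chem

# Arbitrary example molecule (celecoxib), used only for illustration.
smiles = "Cc1ccc(-c2cc(C(F)(F)F)nn2-c2ccc(S(N)(=O)=O)cc2)cc1"

# Encode with the default (BRICS-based) fragmentation.
safe_str = safe.encode(smiles)

# A SAFE string is a valid SMILES, so RDKit can parse it directly.
mol = Chem.MolFromSmiles(safe_str)
assert mol is not None
print(Chem.MolToSmiles(mol))  # canonical SMILES of the decoded molecule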


News 🚀

💥 2024/01/15 💥

  1. @IanAWatson has a C++ implementation of SAFE in LillyMol that is quite fast and uses a custom fragmentation algorithm. Follow the installation instructions in the repo and check out the CLI docs here: docs/Molecule_Tools/SAFE.md

Installation

You can install safe using pip:

pip install safe-mol

Or you can use conda/mamba:

mamba install -c conda-forge safe-mol

Datasets and Models

| Type | Name | Infos | Size | Comment |
|---|---|---|---|---|
| Model | datamol-io/safe-gpt | 87M params | 350M | Default model |
| Training Dataset | datamol-io/safe-gpt | 1.1B rows | 250GB | Training dataset |
| Drug Benchmark Dataset | datamol-io/safe-drugs | 26 rows | 20 kB | Benchmarking dataset |
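The checkpoint and datasets above are hosted on the Hugging Face Hub, so they should be reachable through the standard transformers and datasets APIs. The sketch below is a hedged illustration: whether the bundled tokenizer loads via AutoTokenizer (rather than the tokenizer utilities shipped with the safe package) is an assumption, and streaming is used only because the training set is large.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: SAFE-GPT loads as a standard causal-LM checkpoint from the Hub.
model = AutoModelForCausalLM.from_pretrained("datamol-io/safe-gpt")

# Assumption: the bundled tokenizer is AutoTokenizer-compatible; otherwise use
# the tokenizer utilities provided by the safe package itself.
tokenizer = AutoTokenizer.from_pretrained("datamol-io/safe-gpt")

# Stream the 1.1B-row training dataset instead of downloading ~250GB upfront.
dataset = load_dataset("datamol-io/safe-gpt", split="train", streaming=True)
print(next(iter(dataset)))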

Usage

Please refer to the documentation, which contains tutorials for getting started with safe and detailed descriptions of the functions provided, as well as an example of how to get started with SAFE-GPT.
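As a taste of what the SAFE-GPT tutorial covers, the snippet below sketches de novo generation with the SAFEDesign helper. The class name, the load_default constructor, and the n_samples_per_trial argument reflect my reading of the documentation and should be treated as assumptions; refer to the tutorial for the authoritative API.

import safe as sf

# Assumption: load_default() pulls the default SAFE-GPT checkpoint.
designer = sf.SAFEDesign.load_default(verbose=True)

# Assumption: de_novo_generation returns a list of generated SMILES strings.
generated = designer.de_novo_generation(n_samples_per_trial=12)
print(generated[:5])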

API

We summarize some key functions provided by the safe package below.

| Function | Description |
|---|---|
| safe.encode | Translates a SMILES string into its corresponding SAFE string. |
| safe.decode | Translates a SAFE string into its corresponding SMILES string. The SAFE decoder simply augments RDKit's Chem.MolFromSmiles with an optional correction argument to handle missing hydrogen bonds. |
| safe.split | Tokenizes a SAFE string, e.g. to build a generative model. |

Examples

Translation between SAFE and SMILES representations

import safe

ibuprofen = "CC(Cc1ccc(cc1)C(C(=O)O)C)C"

# SMILES -> SAFE -> SMILES translation
try:
    ibuprofen_sf = safe.encode(ibuprofen)  # c12ccc3cc1.C3(C)C(=O)O.CC(C)C2
    ibuprofen_smi = safe.decode(ibuprofen_sf, canonical=True)  # CC(C)Cc1ccc(C(C)C(=O)O)cc1
except safe.EncoderError:
    pass
except safe.DecoderError:
    pass

ibuprofen_tokens = list(safe.split(ibuprofen_sf))
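Because SAFE treats a molecule as an unordered sequence of fragment blocks, shuffling the dot-separated fragments of a SAFE string should decode to the same molecule. A small sketch of that check, relying only on safe.encode/safe.decode and the order-invariance described above:

import random

import safe

ibuprofen_sf = safe.encode("CC(Cc1ccc(cc1)C(C(=O)O)C)C")

# Shuffle the dot-separated fragment blocks and rebuild the string.
fragments = ibuprofen_sf.split(".")
random.shuffle(fragments)
shuffled_sf = ".".join(fragments)

# Both orderings should decode to the same canonical SMILES.
assert safe.decode(shuffled_sf, canonical=True) == safe.decode(ibuprofen_sf, canonical=True)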

Training/Finetuning a (new) model

A command line interface is available to train a new model; run safe-train --help for the full list of options. You can also provide an existing checkpoint to continue training or to finetune on your own dataset.

For example:

safe-train --config <path to config> \
    --model-path <path to model> \
    --tokenizer  <path to tokenizer> \
    --dataset <path to dataset> \
    --num_labels 9 \
    --torch_compile True \
    --optim "adamw_torch" \
    --learning_rate 1e-5 \
    --prop_loss_coeff 1e-3 \
    --gradient_accumulation_steps 1 \
    --output_dir "<path to outputdir>" \
    --max_steps 5

References

If you use this repository, please cite the following related paper:

@misc{noutahi2023gotta,
      title={Gotta be SAFE: A New Framework for Molecular Design},
      author={Emmanuel Noutahi and Cristian Gabellini and Michael Craig and Jonathan S. C Lim and Prudencio Tossou},
      year={2023},
      eprint={2310.10773},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

License

The training dataset is licensed under CC BY 4.0. See DATA_LICENSE for details. This code base is licensed under the Apache-2.0 license. See LICENSE for details.

Note that the model weights of SAFE-GPT are exclusively licensed for research purposes (CC BY-NC 4.0).

Development lifecycle

Setup dev environment

mamba create -n safe -f env.yml
mamba activate safe

pip install --no-deps -e .

Tests

You can run tests locally with:

pytest

