
🚄 lchemme


Pretraining large chemistry models for embedding.

Installation

The easy way

Install the pre-compiled version from PyPI:

pip install lchemme

From source

Clone the repository, then cd into it. Then run:

pip install -e .

Command-line usage

lchemme provides command-line utilities to pre-train BART models.

To get a list of commands (tools), do

$ lchemme --help
usage: lchemme [-h] [--version] {tokenize,pretrain,featurize} ...

Training and applying large chemistry models.

options:
  -h, --help            show this help message and exit
  --version, -v         show program's version number and exit

Sub-commands:
  {tokenize,pretrain,featurize}
                        Use these commands to specify the tool you want to use.
    tokenize            Tokenize the data inputs.
    pretrain            Pre-train a large language model using self-supervised learning.
    featurize           Get vector embeddings of a chemical dataset using a pre-trained large language model.

And to get help for a specific command, do

$ lchemme <command> --help

Tokenizing

The first step is to build a tokenizer for your dataset. LChemME works with BART models, and pulls their architecture from the Hugging Face Hub or a local directory. Training data can also be pulled from Hugging Face Hub with the hf:// prefix, or it can be loaded from local CSV files with a column containing SMILES strings.

lchemme tokenize \
    --train hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train \
    --column smiles \
    --model facebook/bart-base \
    --output my-model

This should be relatively fast, but could take several hours for millions of rows.

In principle, an existing tokenizer trained on natural language could work, but its much larger vocabulary would be largely unused by SMILES strings.
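As an aside on the `hf://` convention used above: a dataset path encodes the repository, an optional configuration (after `@`), and an optional split (after `:`). The sketch below is purely illustrative of that convention and is not lchemme's own code.

```python
def parse_hf_path(path: str) -> dict:
    """Decompose an hf://repo[@config][:split] dataset path (illustrative only)."""
    assert path.startswith("hf://"), "expected an hf:// path"
    rest = path[len("hf://"):]
    # Split off the split name (after ':'), then the config (after '@').
    rest, _, split = rest.partition(":")
    repo, _, config = rest.partition("@")
    return {"repo": repo, "config": config or None, "split": split or None}

print(parse_hf_path("hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train"))
# {'repo': 'scbirlab/fang-2023-biogen-adme', 'config': 'scaffold-split', 'split': 'train'}
```

Local CSV files can be passed by plain path instead, with `--column` naming the SMILES column.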

Model pretraining

LChemME performs self-supervised pretraining on a SMILES canonicalization task. Canonicalization requires an understanding of chemical connectivity and atom-precedence rules, forcing the model to build an internal representation of the chemical graph.
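To make the objective concrete, here is a toy sketch of canonicalization training pairs: the model reads an arbitrary SMILES writing of a molecule and must emit its canonical form. In a real pipeline the canonical targets would come from a cheminformatics toolkit (e.g. RDKit's `Chem.MolToSmiles`); the pairs below are hand-written for illustration only.

```python
# Each pair maps a non-canonical SMILES of ethanol to the canonical form.
# Both inputs describe the same molecular graph, written from different atoms.
pairs = [
    ("OCC", "CCO"),    # ethanol, written starting from the oxygen
    ("C(O)C", "CCO"),  # ethanol, written starting from the carbinol carbon
]
for source, target in pairs:
    print(f"input: {source!r} -> target: {target!r}")
```

The model never sees labels beyond the strings themselves, which is why the task is self-supervised.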

lchemme pretrain \
    --train hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train \
    --column smiles \
    --test hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test \
    --model facebook/bart-base \
    --tokenizer my-model \
    --epochs 0.5 \
    --output my-model \
    --plot my-model/training-log

If you want to continue training, you can do so with the --resume flag.

lchemme pretrain \
    --train hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train \
    --column smiles \
    --test hf://scbirlab/fang-2023-biogen-adme@scaffold-split:test \
    --model my-model \
    --epochs 0.5 \
    --output my-model \
    --plot my-model/training-log \
    --resume

The dataset state can only be restored if --model was trained with LChemME and the dataset configuration is identical, i.e. --train and --column are the same.

Featurizing

With a trained model, you can generate embeddings of your chemical datasets, optionally with UMAP plots colored by chemical properties.

lchemme featurize \
    --train hf://scbirlab/fang-2023-biogen-adme@scaffold-split:train \
    --column smiles \
    --model my-model \
    --batch-size 16 \
    --method mean \
    --plot umap \
> featurized.csv

You can specify one or several aggregation functions with --method. LChemME aggregates the sequence dimension of the encoder and decoder, then concatenates them.
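A minimal sketch of that aggregation, assuming mean pooling with --method mean: average over the sequence (token) dimension of the encoder and decoder hidden states separately, then concatenate the two pooled vectors. Names and shapes here are illustrative, not lchemme internals.

```python
def mean_pool(hidden):
    """Average a list of per-token vectors (all of length d) into one vector."""
    d = len(hidden[0])
    n = len(hidden)
    return [sum(token[i] for token in hidden) / n for i in range(d)]

encoder_states = [[1.0, 2.0], [3.0, 4.0]]  # 2 tokens, hidden size d = 2
decoder_states = [[0.0, 2.0]]              # 1 token,  hidden size d = 2

# Pool each side over its sequence dimension, then concatenate.
embedding = mean_pool(encoder_states) + mean_pool(decoder_states)
print(embedding)  # [2.0, 3.0, 0.0, 2.0]
```

With a single aggregation method the final embedding has length 2 × d; each additional method under --method would contribute further concatenated blocks.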

If you want to use additional columns containing numerical values to color the UMAP plots, provide the column names under --extras.

Documentation

(Full API documentation to come at ReadTheDocs.)

Download files

Source distribution:

  • lchemme-0.0.3.tar.gz (21.3 kB)

Built distribution:

  • lchemme-0.0.3-py3-none-any.whl (22.5 kB)

File details

Details for the file lchemme-0.0.3.tar.gz:

  • Size: 21.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.23
  • SHA256: 61771ee1696666d9da98d8fa2a7f5294c03aec598b55e94b1d757981553e44a7
  • MD5: 9b80150af2aa5a324bf073341bdcdbf7
  • BLAKE2b-256: 6280383e7436d29bfd6f1116052e11652bd0e7d8dbe96c5b03dc4ed210b46fab

Details for the file lchemme-0.0.3-py3-none-any.whl:

  • Size: 22.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.23
  • SHA256: 04e7cfd70306b866c217424d415f1efa8f2855b1d4963c21a0e96a6ae1867ca8
  • MD5: a030447b902fd9410c142cddb816cae9
  • BLAKE2b-256: 9ca3bcb6ab1e64525745f0b6c71cd5cef3e4f1afc9ffb86e80a081c1488d84a4
