
TorchMD-NET

TorchMD-NET provides state-of-the-art neural network potentials (NNPs) and a mechanism to train them. It offers efficient and fast implementations of several NNPs, and it is integrated with GPU-accelerated molecular dynamics codes such as ACEMD, OpenMM and TorchMD. TorchMD-NET exposes its NNPs as PyTorch modules.

Documentation

Documentation is available at https://torchmd-net.readthedocs.io

Available architectures

TorchMD-NET ships several NNP architectures, including TensorNet, the Equivariant Transformer (ET) and a graph network (see the Cite section below for the corresponding papers).

Installation

TorchMD-Net is available as a pip-installable wheel as well as on conda-forge.

TorchMD-Net provides builds for CPU-only, CUDA 11.8 and CUDA 12.4. CPU versions are provided only as a reference, as their performance will be extremely limited. Depending on which variant you wish to install, use one of the following commands:

# The following will install the CUDA 12.4 version by default
pip install torchmd-net 
# The following will install the CUDA 11.8 version
pip install torchmd-net --extra-index-url https://download.pytorch.org/whl/cu118 --extra-index-url https://us-central1-python.pkg.dev/pypi-packages-455608/cu118/simple
# The following will install the CUDA 12.4 version
pip install torchmd-net --extra-index-url https://download.pytorch.org/whl/cu124 --extra-index-url https://us-central1-python.pkg.dev/pypi-packages-455608/cu124/simple
# The following will install the CPU only version (not recommended)
pip install torchmd-net --extra-index-url https://download.pytorch.org/whl/cpu --extra-index-url https://us-central1-python.pkg.dev/pypi-packages-455608/cpu/simple   

Alternatively, it can be installed with conda or mamba using one of the following commands. We recommend using Miniforge instead of Anaconda.

mamba install torchmd-net cuda-version=11.8
mamba install torchmd-net cuda-version=12.4

Install from source

TorchMD-Net is installed using pip, but you will need to install some dependencies beforehand. See this documentation page.

Usage

Training arguments can be specified either via a configuration YAML file or directly through command-line arguments. Several examples of architectural and training specifications for some models and datasets can be found in examples/. Note that if a parameter is present both in the YAML file and on the command line, the command-line value takes precedence. GPUs can be selected by setting the CUDA_VISIBLE_DEVICES environment variable. Otherwise, the argument --ngpus can be used to select the number of GPUs to train on (-1, the default, uses all available GPUs or the ones specified in CUDA_VISIBLE_DEVICES). Keep in mind that the GPU ID reported by nvidia-smi might not be the same as the one CUDA_VISIBLE_DEVICES uses.
For example, to train the Equivariant Transformer on the QM9 dataset with the architectural and training hyperparameters described in the paper, one can run:

mkdir output
CUDA_VISIBLE_DEVICES=0 torchmd-train --conf torchmd-net/examples/ET-QM9.yaml --log-dir output/

Run torchmd-train --help to see all available options and their descriptions.
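
The precedence rule above (command line overrides the YAML file) can be illustrated with a small sketch. This is plain Python, not TorchMD-NET's actual configuration code, and the parameter names are only illustrative:

```python
# YAML file values are loaded first; CLI flags are applied on top.
yaml_config = {"lr": 1e-4, "batch_size": 128, "num_epochs": 300}
cli_overrides = {"lr": 5e-4}  # e.g. the user also passed --lr 5e-4

# A later dict in the merge wins on conflicting keys, so CLI takes precedence.
effective = {**yaml_config, **cli_overrides}
```

Here `effective["lr"]` is 5e-4 from the command line, while `batch_size` and `num_epochs` keep their YAML values.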

Pretrained models

See here for instructions on how to load pretrained models.

Creating a new dataset

If you want to train on custom data, first have a look at torchmdnet.datasets.Custom, which provides functionality for loading a NumPy dataset consisting of atom types and coordinates, with energies, forces or both as the labels. Alternatively, you can implement a custom dataset class the torch-geometric way: derive from the Dataset or InMemoryDataset class and implement the necessary functions (more info here). The dataset must return torch-geometric Data objects containing at least the keys z (atom types) and pos (atomic coordinates), as well as y (label), neg_dy (negative derivative of the label w.r.t. atom coordinates) or both.
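
As a sketch of the kind of arrays such a NumPy dataset consists of (shapes, dtypes and the saving step are illustrative; check the torchmdnet.datasets.Custom documentation for the exact file layout it expects):

```python
import numpy as np

n_frames, n_atoms = 100, 5

# Atomic numbers, fixed across frames for a single molecule (e.g. O, H, H, C, C)
types = np.array([8, 1, 1, 6, 6], dtype=np.int64)
# Coordinates for every frame, shape (n_frames, n_atoms, 3)
coords = np.random.randn(n_frames, n_atoms, 3).astype(np.float32)
# Labels: per-frame energies and/or per-atom forces (negative energy derivatives)
energies = np.random.randn(n_frames, 1).astype(np.float64)
forces = np.random.randn(n_frames, n_atoms, 3).astype(np.float32)

# These arrays would then be stored with np.save() under whatever file
# names the Custom dataset class is configured to look for.
```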

Custom prior models

In addition to implementing a custom dataset class, it is also possible to add a custom prior model to the model. This can be done by implementing a new prior model class in torchmdnet.priors and adding the argument --prior-model <PriorModelName>. As an example, have a look at torchmdnet.priors.Atomref.
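
Conceptually, a prior adds a fixed, physics-motivated term on top of the network output. The following plain-Python sketch (not the torchmdnet.priors API; the reference energies are made-up values) illustrates what an Atomref-style prior does:

```python
# Hypothetical per-element reference energies, one per atomic number.
ATOMREF = {1: -0.5, 6: -37.8, 8: -75.0}

def apply_atomref_prior(predicted_energy, atomic_numbers):
    """Shift the network's prediction by the sum of per-element
    reference energies, so the network only learns the residual."""
    return predicted_energy + sum(ATOMREF[z] for z in atomic_numbers)

# Water (O, H, H): the prior contributes -75.0 - 0.5 - 0.5 = -76.0
total = apply_atomref_prior(0.1, [8, 1, 1])  # -75.9
```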

Multi-Node Training

In order to train models on multiple nodes, some environment variables have to be set to provide PyTorch Lightning with the necessary information. Below is an example bash script to start training on two machines with two GPUs each. The script has to be started once on each node. Once torchmd-train is running on all nodes, a network connection between the nodes will be established using NCCL.

In addition to the environment variables, the argument --num-nodes has to be set to the number of nodes involved in training.

export NODE_RANK=0
export MASTER_ADDR=hostname1
export MASTER_PORT=12910

mkdir -p output
CUDA_VISIBLE_DEVICES=0,1 torchmd-train --conf torchmd-net/examples/ET-QM9.yaml --num-nodes 2 --log-dir output/
  • NODE_RANK : Integer indicating the node index. Must be 0 for the main node and incremented by one for each additional node.
  • MASTER_ADDR : Hostname or IP address of the main node. The same for all involved nodes.
  • MASTER_PORT : A free network port for communication between nodes. PyTorch Lightning suggests port 12910 as a default.

Known Limitations

  • Due to the way PyTorch Lightning calculates the number of required DDP processes, all nodes must use the same number of GPUs. Otherwise, training will either fail to start or crash.
  • We observe a 50x decrease in performance when mixing nodes with different GPU architectures (tested with RTX 2080 Ti and RTX 3090).
  • Some CUDA systems might hang during multi-GPU parallel training. Try export NCCL_P2P_DISABLE=1, which disables direct peer-to-peer GPU communication.

Cite

If you use TorchMD-NET in your research, please cite the following papers:

Main reference

@misc{pelaez2024torchmdnet,
  title = {TorchMD-Net 2.0: Fast Neural Network Potentials for Molecular Simulations},
  author = {Raul P. Pelaez and Guillem Simeon and Raimondas Galvelis and Antonio Mirarchi and Peter Eastman and Stefan Doerr and Philipp Th{\"o}lke and Thomas E. Markland and Gianni De Fabritiis},
  year = {2024},
  eprint = {2402.17660},
  archivePrefix = {arXiv},
  primaryClass = {cs.LG}
}

TensorNet

@inproceedings{simeon2023tensornet,
  title = {TensorNet: Cartesian Tensor Representations for Efficient Learning of Molecular Potentials},
  author = {Guillem Simeon and Gianni De Fabritiis},
  booktitle = {Thirty-seventh Conference on Neural Information Processing Systems},
  year = {2023},
  url = {https://openreview.net/forum?id=BEHlPdBZ2e}
}

Equivariant Transformer

@inproceedings{tholke2021equivariant,
  title = {Equivariant Transformers for Neural Network based Molecular Potentials},
  author = {Philipp Th{\"o}lke and Gianni De Fabritiis},
  booktitle = {International Conference on Learning Representations},
  year = {2022},
  url = {https://openreview.net/forum?id=zNHzqZ9wrRB}
}

Graph Network

@article{Majewski2023,
  title = {Machine learning coarse-grained potentials of protein thermodynamics},
  volume = {14},
  ISSN = {2041-1723},
  url = {http://dx.doi.org/10.1038/s41467-023-41343-1},
  DOI = {10.1038/s41467-023-41343-1},
  number = {1},
  journal = {Nature Communications},
  publisher = {Springer Science and Business Media LLC},
  author = {Majewski,  Maciej and Pérez,  Adrià and Th\"{o}lke,  Philipp and Doerr,  Stefan and Charron,  Nicholas E. and Giorgino,  Toni and Husic,  Brooke E. and Clementi,  Cecilia and Noé,  Frank and De Fabritiis,  Gianni},
  year = {2023},
  month = sep 
}

Developer guide

Implementing a new architecture

To implement a new architecture, you need to follow these steps:
1. Create a new class in torchmdnet.models that inherits from torch.nn.Module. Follow TorchMD_ET as a template. This is a minimal implementation of a model:

from typing import Optional, Tuple

import torch.nn as nn
from torch import Tensor


class MyModule(nn.Module):
    def __init__(self, parameter1, parameter2):
        super(MyModule, self).__init__()
        # Define your model here
        self.layer1 = nn.Linear(10, 10)
        ...
        # Initialize your model parameters here
        self.reset_parameters()

    def reset_parameters(self):
        # Initialize your model parameters here
        nn.init.xavier_uniform_(self.layer1.weight)
        ...

    def forward(
        self,
        z: Tensor,  # Atomic numbers, shape (n_atoms, 1)
        pos: Tensor,  # Atomic positions, shape (n_atoms, 3)
        batch: Tensor,  # Batch vector, shape (n_atoms, 1). All atoms in the same molecule have the same value and are contiguous.
        q: Optional[Tensor] = None,  # Atomic charges, shape (n_atoms, 1)
        s: Optional[Tensor] = None,  # Atomic spins, shape (n_atoms, 1)
    ) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor]:
        # Define your forward pass here
        scalar_features = ...
        vector_features = ...
        # Return the scalar and vector features, as well as the atomic numbers, positions and batch vector
        return scalar_features, vector_features, z, pos, batch

2. Add the model to the __all__ list in torchmdnet.models.__init__.py. This will make the tests pick your model up.
3. Tell models.model.create_model how to initialize your module by adding a new entry, for instance:

    elif args["model"] == "mymodule":
       from torchmdnet.models.torchmd_mymodule import MyModule
       is_equivariant = False # Set to True if your model is equivariant
       representation_model = MyModule(
           parameter1=args["parameter1"],
           parameter2=args["parameter2"],
           **shared_args, # Arguments typically shared by all models
       )

4. Add any new parameters required to initialize your module to scripts.train.get_args. For instance:

  parser.add_argument('--parameter1', type=int, default=32, help='Parameter1 required by MyModule')
  ...

5. Add an example configuration file to torchmd-net/examples that uses your model.
6. Make tests use your configuration file by adding a case to tests.utils.load_example_args. For instance:

if model_name == "mymodule":
    config_file = join(dirname(dirname(__file__)), "examples", "MyModule-QM9.yaml")

At this point, if your module is missing some feature the tests will let you know, and you can add it. If you add a new feature to the package, please add a test for it.

Code style

We use black. Please run black on your modified files before committing.

Testing

To run the tests, install the package and run pytest in the root directory of the repository. Tests are a good source of knowledge on how to use the different components of the package.
