
PyTorch implementation of the ANI neural network potential family

Project description



TorchANI 2.0 is an open-source library that supports training, development, and research of ANI-style neural network interatomic potentials. It was originally developed and is currently maintained by the Roitberg group. For information and examples, please see the comprehensive documentation.
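
As a quick illustration, the sketch below computes the energy and forces of a single water molecule with a pretrained ANI model. It follows the model API documented for earlier TorchANI releases (torchani.models.ANI2x and the (species, coordinates) calling convention); names may differ slightly in 2.0, so treat it as a sketch and consult the documentation for the exact API.

import torch
import torchani

# Pretrained ANI-2x model; the constructor name and the periodic_table_index
# flag follow the pre-2.0 API and may differ in TorchANI 2.0.
model = torchani.models.ANI2x(periodic_table_index=True)

# A single water molecule: atomic numbers (O, H, H) and coordinates in Angstrom.
species = torch.tensor([[8, 1, 1]])
coordinates = torch.tensor(
    [[[0.000, 0.000, 0.000],
      [0.958, 0.000, 0.000],
      [-0.239, 0.927, 0.000]]],
    requires_grad=True,
)

energy = model((species, coordinates)).energies              # energy in Hartree
forces = -torch.autograd.grad(energy.sum(), coordinates)[0]  # forces via autograd
print(energy.item(), forces)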

⚠️ Important: If you were using a previous version of TorchANI and your code does not work with TorchANI 2.0, check out the migration guide. There are very few breaking changes, and most code should work with minimal modifications. If you can't figure something out, please open a GitHub issue; we are here to help! In the meantime, you can pin torchani to version 2.2.4 (pip install 'torchani==2.2.4'), which does not have the breaking changes. If you require the old state dicts of ANI models, you can access them by calling .legacy_state_dict() instead of .state_dict().
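
For example, a minimal sketch of saving a model's weights in the old format (the ANI2x constructor name is assumed from earlier releases; .legacy_state_dict() is the 2.0 method mentioned above):

import torch
import torchani

model = torchani.models.ANI2x()                             # constructor name assumed from pre-2.0 releases
torch.save(model.legacy_state_dict(), "ani2x_legacy.pt")    # old-format keys, per the note above
# torch.save(model.state_dict(), "ani2x.pt")                # new 2.0-format keys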

If you find a bug in TorchANI 2.0 or have a feature request, also feel free to open a GitHub issue. TorchANI 2.0 is currently tested against PyTorch 2.8 and CUDA 12.8.

If you find this work useful, please cite the following articles:

To run molecular dynamics (full ML or ML/MM) with Amber (sander or pmemd), check out the TorchANI-Amber interface and the relevant publications:

Installation

We recommend installing torchani inside a conda|mamba environment, or a venv.

⚠️ Important: Please install torchani with pip if you want the latest version, even if using a conda env, since the torchani conda package is currently not maintained.

We also recommend that you first install a specific torch version, with a specific CUDA toolkit backend, for example:

pip install torch==2.8 --index-url https://download.pytorch.org/whl/cu129

for the version with CUDA 12.9. This is not strictly required, but it makes it easier to control these versions. Note that TorchANI requires PyTorch >= 2.0.

Afterwards:

pip install torchani
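
To confirm which PyTorch build and CUDA runtime ended up in your environment, a quick check from Python (the torchani.__version__ attribute is assumed to be present, as in earlier releases):

import torch
import torchani

print(torchani.__version__)       # installed TorchANI version (assumed attribute)
print(torch.__version__)          # PyTorch version
print(torch.version.cuda)         # CUDA runtime PyTorch was built against, or None for CPU-only builds
print(torch.cuda.is_available())  # whether a usable CUDA GPU is visible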

TorchANI 2.0 provides C++ and CUDA extensions for accelerated computation of descriptors and network inference. In order to build the extensions, first install the CUDA Toolkit appropriate for your PyTorch version. You can follow the instructions in the official documentation for your system. Alternatively, if you are using a conda environment, you can install the toolkit with conda install nvidia::cuda-toolkit=12.9
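
Before building, it can help to confirm that PyTorch's extension machinery can locate a CUDA toolkit; a quick check using torch.utils.cpp_extension:

from torch.utils.cpp_extension import CUDA_HOME

# Path of the CUDA toolkit that PyTorch's extension builder will use, or None if no toolkit was found.
print(CUDA_HOME)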

After this, run:

ani build-extensions

By default the extensions are built for all detected SMs. If you want to build the extensions for specific SMs, run, for instance:

ani build-extensions --sm 8.0 --sm 8.9 
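
If you are unsure which SM (compute capability) your GPU reports, PyTorch can tell you; a quick check, assuming a CUDA-enabled PyTorch build:

import torch

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)   # e.g. (8, 9) for an Ada Lovelace GPU
    print(f"GPU 0 compute capability (SM): {major}.{minor}")
else:
    print("No CUDA device detected")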

From source (GitHub repo)

To build and install TorchANI directly from the GitHub repo do the following:

# Clone the repo and cd to the directory
git clone https://github.com/aiqm/torchani.git
cd ./torchani

# Create a conda (or mamba) environment
# Note that environment.yaml contains many optional dependencies needed to
# build the compiled extensions, build the documentation, and run tests and tools
# You can comment these out if you are not planning to do that
conda env create -f ./environment.yaml

Instead of using a conda environment, you can use a Python venv and install the torchani optional dependencies by running pip install -r dev_requirements.txt. Either way, next install TorchANI in editable mode:

pip install --no-deps -v -e .

Afterwards you can install the extensions with:

ani build-extensions

After this you can perform some optional steps if you installed the required dev dependencies:

# Download files needed for testing and building the docs (optional)
bash ./download-dev-data.sh

# Build the documentation (optional)
sphinx-build docs/src docs/build

# Manually run unit tests (optional)
cd ./tests
pytest -v .

This process works for most use cases. For more details on building the CUDA and C++ extensions, refer to TorchANI CSRC.

From source in macOS

There is no CUDA support on macOS and TorchANI is untested with Apple Metal Performance Shaders (MPS). The environment.yaml file needs slight modifications if installing on macOS. Please consult the corresponding file and modify it before creating the conda environment.

GPU support

TorchANI 2.0 can run on CUDA-enabled GPUs, and doing so is highly recommended unless you are only doing simple debugging or tests; if you don't run TorchANI on a GPU, expect degraded performance. TorchANI is untested with AMD GPUs (ROCm | HIP).
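
As with any PyTorch module, a TorchANI model and its input tensors can be moved to a CUDA device; a minimal sketch (the ANI2x constructor name follows the pre-2.0 API and may differ in 2.0):

import torch
import torchani

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Constructor name assumed from pre-2.0 releases; check the 2.0 docs.
model = torchani.models.ANI2x(periodic_table_index=True).to(device)

species = torch.tensor([[8, 1, 1]], device=device)  # water: O, H, H
coordinates = torch.tensor(
    [[[0.000, 0.000, 0.000],
      [0.958, 0.000, 0.000],
      [-0.239, 0.927, 0.000]]],
    device=device,
)
print(model((species, coordinates)).energies.item())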

Command Line Interface

TorchANI 2.0 provides an executable script, ani, with some utilities. Check usage by calling ani --help.

Building the TorchANI conda package (for developers)

The conda package can be built locally using the recipe in ./recipe, by running:

cd ./torchani
conda install conda-build conda-verify
mkdir ./conda-pkgs/  # This dir must exist before running conda-build
conda build \
    -c pytorch -c nvidia -c conda-forge \
    --no-anaconda-upload \
    --output-folder ./conda-pkgs/ \
    ./recipe

The meta.yaml in the recipe assumes that the extensions are built using the system's CUDA Toolkit, located in /usr/local/cuda. If this is not possible, add the following dependencies to the host environment:

  • nvidia::cuda-libraries-dev={{ cuda }}
  • nvidia::cuda-nvcc={{ cuda }}
  • nvidia::cuda-cccl={{ cuda }}

and remove cuda_home=/usr/local/cuda from the build script. Note that doing this may significantly increase build time.

The CI workflow (GitHub Actions) that tests that the conda package builds correctly runs only:

  • on pull requests whose branch name contains the string conda.

The workflow that deploys the conda package to the internal server runs only:

  • on the default branch, at 00:00:00 every day
  • on pull requests whose branch name contains both the strings conda and release

Download files

Download the file for your platform.

Source Distribution

torchani-2.7.9.tar.gz (5.1 MB)

Built Distribution

torchani-2.7.9-py3-none-any.whl (521.5 kB)

File details

Details for the file torchani-2.7.9.tar.gz.

File metadata

  • Download URL: torchani-2.7.9.tar.gz
  • Upload date:
  • Size: 5.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for torchani-2.7.9.tar.gz:

  • SHA256: 48b7aea520c5155c0f320af28612cc43c9e3ba1cbd472955fbfadfdac3ecd378
  • MD5: 37e30da78ef3b4d789a734c5b5391e16
  • BLAKE2b-256: be97419befa7445f10df677c26eec9d77afd797d8ed364e64a3d7ccd5ef894fe

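To verify a downloaded archive against the SHA256 digest above, a short check using Python's standard library (the local filename is an assumption about where you saved the file):

import hashlib

expected = "48b7aea520c5155c0f320af28612cc43c9e3ba1cbd472955fbfadfdac3ecd378"  # SHA256 listed above
with open("torchani-2.7.9.tar.gz", "rb") as f:  # path to the downloaded sdist
    digest = hashlib.sha256(f.read()).hexdigest()
print("OK" if digest == expected else "Hash mismatch: do not install this file")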

Provenance

The following attestation bundles were made for torchani-2.7.9.tar.gz:

Publisher: deploy-pypi.yaml on aiqm/torchani

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file torchani-2.7.9-py3-none-any.whl.

File metadata

  • Download URL: torchani-2.7.9-py3-none-any.whl
  • Upload date:
  • Size: 521.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for torchani-2.7.9-py3-none-any.whl:

  • SHA256: 85c3721549d678745a8d6e583a20f42a47e9da5b0b35c98a04203ced5cb048eb
  • MD5: f7cc66c141fe139f24d116ba8d24fef0
  • BLAKE2b-256: fc6dbf8f75dfd408557074c715230cb5893165a9958dd1aef3f7635725163570


Provenance

The following attestation bundles were made for torchani-2.7.9-py3-none-any.whl:

Publisher: deploy-pypi.yaml on aiqm/torchani

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
