
A PyTorch toolkit for calculating material properties using MLIPs


TorchSim

Supports Python 3.12+.

TorchSim is a next-generation open-source atomistic simulation engine for the MLIP era. By rewriting the core primitives of atomistic simulation in PyTorch, it accelerates popular machine learning potentials by orders of magnitude.

  • Automatic batching and GPU memory management allowing significant simulation speedup
  • Support for MACE, Fairchem, SevenNet, ORB, MatterSim, metatomic, and Nequix MLIP models
  • Support for classical Lennard-Jones, Morse, and soft-sphere potentials
  • Molecular dynamics integration schemes like NVE, NVT Langevin, and NPT Langevin
  • Relaxation of atomic positions and cell with gradient descent and FIRE
  • Swap Monte Carlo and hybrid swap Monte Carlo algorithms
  • An extensible binary trajectory writing format with support for arbitrary properties
  • A simple and intuitive high-level API for new users
  • Integration with ASE, Pymatgen, and Phonopy
  • and more: differentiable simulation, elastic properties, custom workflows...

Quick Start

Here is a quick demonstration of many of the core features of TorchSim: native support for GPUs, MLIP models, ASE integration, simple API, autobatching, and trajectory reporting, all in under 40 lines of code.

Running batched MD

import torch
import torch_sim as ts

# run natively on gpus
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# easily load the model from mace-mp
from mace.calculators.foundations_models import mace_mp
from torch_sim.models.mace import MaceModel
mace = mace_mp(model="small", return_raw_model=True)
mace_model = MaceModel(model=mace, device=device)

from ase.build import bulk
cu_atoms = bulk("Cu", "fcc", a=3.58, cubic=True).repeat((2, 2, 2))
many_cu_atoms = [cu_atoms] * 50
trajectory_files = [f"Cu_traj_{i}.h5md" for i in range(len(many_cu_atoms))]

# run them all simultaneously with batching
final_state = ts.integrate(
    system=many_cu_atoms,
    model=mace_model,
    n_steps=50,
    timestep=0.002,
    temperature=1000,
    integrator=ts.Integrator.nvt_langevin,
    trajectory_reporter=dict(filenames=trajectory_files, state_frequency=10),
)
final_atoms_list = final_state.to_atoms()

# extract the final energy from the trajectory file
final_energies = []
for filename in trajectory_files:
    with ts.TorchSimTrajectory(filename) as traj:
        final_energies.append(traj.get_array("potential_energy")[-1])

print(final_energies)
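With 50 replicas run at 1000 K, the final energies will scatter from replica to replica. A common follow-up, once the energies are collected as above, is to pick out the lowest-energy replica. This is plain Python (the energy values below are illustrative, not real MACE outputs):

```python
# final_energies as collected above; illustrative values in eV
final_energies = [-163.2, -162.8, -163.5, -162.9]

# index of the lowest-energy replica and its energy
best_idx = min(range(len(final_energies)), key=final_energies.__getitem__)
best_energy = final_energies[best_idx]
print(best_idx, best_energy)  # -> 2 -163.5
```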

Running batched relaxation

Relaxing those structures with FIRE takes just a few more lines.

# relax all of the high temperature states
relaxed_state = ts.optimize(
    system=final_state,
    model=mace_model,
    optimizer=ts.Optimizer.fire,
    autobatcher=True,
    init_kwargs=dict(cell_filter=ts.CellFilter.frechet),
)

print(relaxed_state.energy)

Speedup

TorchSim achieves up to 100x speedup compared to ASE with popular MLIPs.

[Figure: speedup comparison of ASE vs TorchSim]

This figure compares the time per atom of ASE and torch_sim, where time per atom is defined as total time / number of atoms. While ASE can only run a single system of n_atoms (on the x axis), torch_sim can run as many systems as will fit in memory. On an H100 80 GB card, the maximum number of atoms that fit in memory was ~8,000 for EGIP, ~10,000 for MACE-MPA-0, ~22,000 for MatterSim V1 1M, ~2,500 for SevenNet, and ~9,000 for PET-MAD. This metric describes model performance by capturing speed and memory usage simultaneously.
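To make the metric concrete, here is a pure-Python sketch of why batching lowers time per atom. The timings below are made up for illustration, not measured benchmark numbers: a batched run amortizes its total wall time over every atom in every system.

```python
# Illustrative, made-up timings (seconds) -- not measured benchmark data.
n_atoms_per_system = 1000

# Serial engine: one system advanced at a time.
serial_total_time = 10.0
serial_time_per_atom = serial_total_time / n_atoms_per_system  # 0.01 s/atom

# Batched engine: 50 systems advanced together in a single run.
n_systems = 50
batched_total_time = 25.0
batched_time_per_atom = batched_total_time / (n_systems * n_atoms_per_system)  # 0.0005 s/atom

speedup = serial_time_per_atom / batched_time_per_atom
print(speedup)  # -> 20.0
```

The batched run is slower in absolute wall time (25 s vs 10 s) but touches 50x more atoms, so its time per atom, and therefore its effective throughput, is far better.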

Installation

PyPI Installation

pip install torch-sim-atomistic

Installing from source

git clone https://github.com/TorchSim/torch-sim
cd torch-sim
pip install .

Examples

To understand how TorchSim works, start with the comprehensive tutorials in the documentation.

Core Modules

TorchSim's package structure is summarized in the API reference documentation and drawn as a treemap below.

[Figure: TorchSim package treemap]

Contributing

If you are interested in contributing, please join our Slack and check out contributing.md.

License

TorchSim is released under an MIT license.

Citation

If you use TorchSim in your research, please cite our publication.

@article{cohen2025torchsim,
  title={TorchSim: An efficient atomistic simulation engine in PyTorch},
  author={Cohen, Orion and Riebesell, Janosh and Goodall, Rhys and Kolluru, Adeesh and Falletta, Stefano and Krause, Joseph and Colindres, Jorge and Ceder, Gerbrand and Gangan, Abhijeet S},
  journal={AI for Science},
  volume={1},
  number={2},
  pages={025003},
  year={2025},
  publisher={IOP Publishing},
  doi={10.1088/3050-287X/ae1799}
}

Due Credit

We aim to give due credit (via duecredit) for the decades of work that TorchSim builds on. An automated list of references for the package can be obtained by running DUECREDIT_ENABLE=yes uv run --with-editable . --extra docs --extra test python -m duecredit <(printf 'import pytest\nraise SystemExit(pytest.main(["-q"]))\n'). This list is incomplete, and we welcome PRs that improve our citation coverage.

To collect citations for a specific tutorial run (for example, autobatching), use:

DUECREDIT_ENABLE=yes uv run --with-editable . --extra docs --extra test python -m duecredit examples/tutorials/autobatching_tutorial.py
