
lammps-mdi

MDI engine drivers for LAMMPS — run ML forcefields (currently MACE; more planned) on the GPU via the MolSSI Driver Interface, communicating with a standard LAMMPS binary (no Kokkos compilation needed).

Designed for use with SEAMM, but works with any LAMMPS workflow that supports MDI.

How it works

LAMMPS acts as an MDI driver: it handles atom positions, neighbor lists (at the coarse level), and time integration. The lammps-mdi engine process acts as an MDI engine: it receives coordinates from LAMMPS each step, evaluates the ML model on GPU, and returns energies, forces, and (if periodic) the stress tensor.

```
mpirun -np 1 mace-mdi -mdi "..."               ← GPU process: MACE on A100
     : -np 1 lmp -mdi "..." -in input.dat      ← CPU process: time integration
```

The two processes communicate over MPI via the MDI protocol.
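The shape of that exchange is a simple command/response loop. The sketch below is a toy illustration of the pattern only — the real engine uses the MolSSI MDI library's `MDI_Recv_Command`/`MDI_Send`/`MDI_Recv` calls over MPI, and the transport callables here are stand-ins. The command names (`>COORDS`, `<ENERGY`, `<FORCES`, `EXIT`) follow the MDI convention that the driver pushes data with `>` commands and pulls results with `<` commands:

```python
# Toy sketch of an MDI-style engine command loop (illustration only --
# the real engine uses the MolSSI MDI library's communicator calls).
def run_engine_loop(recv_command, recv_data, send_data, evaluate):
    """Serve driver commands until EXIT.

    recv_command/recv_data/send_data stand in for the MDI transport;
    evaluate(coords) stands in for the ML model, returning (energy, forces).
    """
    coords = None
    energy = forces = None
    while True:
        cmd = recv_command()
        if cmd == "EXIT":
            break
        elif cmd == ">COORDS":          # driver pushes new positions
            coords = recv_data()
            energy = forces = None      # invalidate cached results
        elif cmd == "<ENERGY":          # driver pulls the energy
            if energy is None:
                energy, forces = evaluate(coords)
            send_data(energy)
        elif cmd == "<FORCES":          # driver pulls the forces
            if forces is None:
                energy, forces = evaluate(coords)
            send_data(forces)
```

Note the caching: one model evaluation serves both the `<ENERGY` and `<FORCES` requests for a given set of coordinates, which matters when the model evaluation dominates the step time.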

Supported engines

| Engine   | Status    | Notes |
|----------|-----------|-------|
| MACE     | Supported | MACE-torch ≥ 0.3, vesin-torch neighbor lists, cuEquivariance optional |
| NequIP   | Planned   | |
| SevenNet | Planned   | |

Installation

See INSTALL.md for full HPC instructions. Short version:

```bash
# 1. Load your LAMMPS module (provides Python, numpy, mpi4py, MDI)
module load LAMMPS/...

# 2. Create a venv that inherits the module stack
python -m venv --system-site-packages ~/venvs/lammps-mdi
source ~/venvs/lammps-mdi/bin/activate

# 3. Install PyTorch with the matching CUDA wheel (check your CUDA version first)
lammps-mdi install-torch    # prints the correct command
# e.g.:
pip install torch --index-url https://download.pytorch.org/whl/cu121

# 4. Install lammps-mdi
pip install "lammps-mdi[gpu]"

# 5. Install the bundled shell scripts
lammps-mdi install-scripts

# 6. Verify
lammps-mdi check
```
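A check in the spirit of step 6 can also be done by hand. This is a hypothetical sketch, not the actual `lammps-mdi check` implementation (whose exact checks and output are not documented here): it simply verifies that the key modules from the table in Requirements import, and that PyTorch can see a CUDA device.

```python
# Hypothetical sanity check in the spirit of `lammps-mdi check`:
# verify the key imports and report whether CUDA is visible to PyTorch.
import importlib

def check_environment(required=("numpy", "mpi4py", "mdi", "torch")):
    """Return {module_name: status_string} for each required module."""
    report = {}
    for name in required:
        try:
            importlib.import_module(name)
            report[name] = "ok"
        except ImportError:
            report[name] = "MISSING"
    if report.get("torch") == "ok":
        import torch
        report["cuda"] = "ok" if torch.cuda.is_available() else "not available"
    return report

if __name__ == "__main__":
    for mod, status in check_environment().items():
        print(f"{mod:12s} {status}")
```

A `MISSING` entry for `numpy` or `mpi4py` usually means the venv was created without `--system-site-packages`, so the module stack is not being inherited.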

Usage

As a console script (recommended)

```bash
SEAMM_FF=/path/to/model.model \
mpirun --mca mpi_yield_when_idle 1 \
    -np 1 mdi_bind.sh mace-mdi -mdi "-role ENGINE -name MACE -method MPI" \
    : -np 1 mdi_bind.sh lmp -mdi "-role DRIVER -name LAMMPS -method MPI" -in input.dat
```

The mace-mdi command accepts several options:

```
mace-mdi --help

  -mdi MDI_STRING      MDI initialization string [required]
  --model PATH         Path to MACE model (overrides SEAMM_FF)
  --device DEVICE      PyTorch device (default: cuda:0)
  --dtype {float32,float64}
  --enable-cueq        Enable cuEquivariance acceleration
  --enable-oeq         Enable openEquivariance acceleration
  --log-level LEVEL    DEBUG / INFO / WARNING / ERROR
```

From lammps.ini (SEAMM)

```ini
[local]
installation = conda   # or modules, or local

gpu-code = mpirun --mca mpi_yield_when_idle 1 \
    -np 1 ~/SEAMM/bin/mdi_bind.sh \
    mace-mdi -mdi "-role ENGINE -name MACE -method MPI" \
    : -np 1 ~/SEAMM/bin/mdi_bind.sh \
    lmp -mdi "-role DRIVER -name LAMMPS -method MPI"
```

As a Python library

```python
from lammps_mdi import MACEEngine

engine = MACEEngine(
    model_path="/path/to/model.model",
    device="cuda:0",
    default_dtype="float32",
    enable_cueq=True,
)
engine.run("-role ENGINE -name MACE -method MPI")
```

Shell scripts

The package bundles four helper scripts, installed via lammps-mdi install-scripts:

| Script | Purpose |
|--------|---------|
| mdi_bind.sh | Binds the engine (rank 0) to the GPU and NUMA-local CPUs, and the driver (rank 1) to adjacent CPUs; starts an nvidia-smi monitor. For standalone machines. |
| mdi_monitor.sh | Lightweight wrapper for SLURM/PBS: only starts GPU monitoring; the scheduler handles binding. |
| gpu_bind.sh | Per-rank GPU binding for native Kokkos LAMMPS (approach A). |
| cpu_bind.sh | CPU-only binding using L3 cache groups (EPYC 7763). |

The CPU/GPU mappings in mdi_bind.sh, gpu_bind.sh, and cpu_bind.sh are currently hard-coded for a dual-GPU EPYC 7763 system. They will be made configurable in a future release.
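The overall shape of such a binding wrapper is simple. The sketch below is a hypothetical minimal version, not the shipped mdi_bind.sh: the CPU ranges are illustrative placeholders, and rank detection assumes Open MPI's OMPI_COMM_WORLD_RANK variable (under SLURM one would read SLURM_PROCID instead):

```shell
#!/bin/sh
# Hypothetical minimal sketch of a per-rank binding wrapper (the shipped
# mdi_bind.sh is more elaborate and its CPU/GPU maps are hard-coded for
# a dual-GPU EPYC 7763 box). CPU ranges below are illustrative only.
RANK="${OMPI_COMM_WORLD_RANK:-0}"

if [ "$RANK" -eq 0 ]; then
    # Engine rank: the GPU plus the CPUs local to its NUMA node
    export CUDA_VISIBLE_DEVICES=0
    CPUS="0-15"
else
    # Driver rank: adjacent CPUs, no GPU
    export CUDA_VISIBLE_DEVICES=""
    CPUS="16-31"
fi

# Pin the wrapped command to the chosen CPUs and run it in place
if [ $# -gt 0 ]; then
    exec taskset -c "$CPUS" "$@"
fi
```

Because mpirun launches one copy of the wrapper per `-np 1` app context, each rank sees its own RANK value and binds itself before exec-ing the real engine or driver binary.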

Requirements

| Package | Source | Notes |
|---------|--------|-------|
| Python ≥ 3.10 | HPC module | |
| numpy | HPC module | Do not reinstall |
| mpi4py | HPC module | |
| pymdi ≥ 1.4 | pip | PyPI package for `import mdi` |
| torch (CUDA) | pip (special index) | Install before lammps-mdi |
| mace-torch ≥ 0.3 | pip | |
| matscipy ≥ 0.8 | pip | CPU fallback neighbor list |
| pint ≥ 0.20 | pip | Unit conversion |
| vesin-torch ≥ 0.3 | pip (optional) | GPU neighbor lists, strongly recommended |
| cuequivariance* | pip (optional) | NVIDIA cuEquivariance acceleration |
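The pint dependency exists because the MDI standard exchanges quantities in atomic units (bohr, hartree), while MACE-style models typically work in angstroms and eV. The sketch below shows the conversions involved with hard-coded CODATA factors instead of pint; the function names are illustrative, not the package's API:

```python
# Illustrative sketch of the unit conversions an MDI engine performs
# (the package uses pint for this; factors here are CODATA 2018 values).
# MDI exchanges atomic units; MACE models work in eV and angstroms.
ANGSTROM_PER_BOHR = 0.529177210903
EV_PER_HARTREE = 27.211386245988

def coords_from_mdi(coords_bohr):
    """MDI -> model: convert positions from bohr to angstrom."""
    return [[x * ANGSTROM_PER_BOHR for x in atom] for atom in coords_bohr]

def forces_to_mdi(forces_ev_per_ang):
    """Model -> MDI: convert forces from eV/angstrom to hartree/bohr."""
    factor = ANGSTROM_PER_BOHR / EV_PER_HARTREE
    return [[f * factor for f in atom] for atom in forces_ev_per_ang]
```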

Contributing

Issues and pull requests are welcome at https://github.com/molssi-seamm/lammps-mdi.

License

MIT — see LICENSE.
