
lammps-mdi

MDI engine drivers for LAMMPS — run ML forcefields (MACE, and future models) on GPU via the MolSSI Driver Interface, communicating with a standard LAMMPS binary (no Kokkos compilation needed).

Designed for use with SEAMM, but works with any LAMMPS workflow that supports MDI.

How it works

LAMMPS acts as the MDI driver: it owns the atom positions, coarse-level neighbor lists, and time integration. The lammps-mdi process acts as the MDI engine: each step it receives coordinates from LAMMPS, evaluates the ML model on the GPU, and returns energies, forces, and (for periodic systems) the stress tensor.

mpirun -np 1  mace-mdi  -mdi "..."   ← GPU process: MACE on A100
         : -np 1  lmp  -mdi "..." -in input.dat  ← CPU process: time integration

The two processes communicate over MPI via the MDI protocol.
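Under the MDI protocol the engine sits in a command loop: block on the next command from the driver, handle it, repeat until EXIT. A minimal sketch of that loop in Python — the command names (`>COORDS`, `<PE`, `<FORCES`, `EXIT`) follow the MDI standard, but `FakeComm` and `toy_model` are stand-ins for illustration; a real engine would obtain its communicator from the `mdi` package (e.g. `MDI_Accept_Communicator`) rather than script the session:

```python
class FakeComm:
    """Replays a scripted driver session instead of a real MDI/MPI link."""
    def __init__(self, commands, coords):
        self._commands = iter(commands)
        self._coords = coords
        self.sent = []                      # everything the engine sends back

    def recv_command(self):
        return next(self._commands)

    def recv_coords(self):
        return self._coords

    def send(self, data):
        self.sent.append(data)


def engine_loop(comm, evaluate):
    """Handle MDI-style commands until the driver says EXIT."""
    coords = None
    while True:
        cmd = comm.recv_command()
        if cmd == ">COORDS":                # driver pushes new positions
            coords = comm.recv_coords()
        elif cmd == "<PE":                  # driver pulls potential energy
            comm.send(evaluate(coords)["energy"])
        elif cmd == "<FORCES":              # driver pulls forces
            comm.send(evaluate(coords)["forces"])
        elif cmd == "EXIT":
            return


def toy_model(coords):
    # stand-in for the GPU ML evaluation
    return {"energy": -1.0, "forces": [[0.0, 0.0, 0.0] for _ in coords]}


# One MD step as the driver would drive it:
comm = FakeComm([">COORDS", "<PE", "<FORCES", "EXIT"],
                coords=[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
engine_loop(comm, toy_model)
print(comm.sent[0])   # -1.0
```

In the real setup, LAMMPS issues these commands once per timestep, so the engine's only job between steps is the model evaluation itself.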

Supported engines

Engine     Status     Notes
MACE       Supported  MACE-torch ≥ 0.3, vesin-torch neighbor lists, cuEquivariance optional
NequIP     Planned
SevenNet   Planned

Installation

See INSTALL.md for full HPC instructions. Short version:

# 1. Load your LAMMPS module (provides Python, numpy, mpi4py, MDI)
module load LAMMPS/...

# 2. Create a venv that inherits the module stack
python -m venv --system-site-packages ~/venvs/lammps-mdi
source ~/venvs/lammps-mdi/bin/activate

# 3. Install PyTorch with the right CUDA wheel (check your CUDA version first)
lammps-mdi install-torch    # prints the correct command
# e.g.:
pip install torch --index-url https://download.pytorch.org/whl/cu121

# 4. Install lammps-mdi
pip install lammps-mdi[gpu]

# 5. Install bundled shell scripts
lammps-mdi install-scripts

# 6. Verify
lammps-mdi check

Usage

As a console script (recommended)

SEAMM_FF=/path/to/model.model \
mpirun --mca mpi_yield_when_idle 1 \
    -np 1 mdi_bind.sh mace-mdi -mdi "-role ENGINE -name MACE -method MPI" \
    : -np 1 mdi_bind.sh lmp -mdi "-role DRIVER -name LAMMPS -method MPI" -in input.dat
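On the driver side, the LAMMPS input (input.dat above) needs the MDI package's fix mdi/qm so that energies and forces come from the engine rather than a pair style. An illustrative fragment — the data file, group names, and run settings are placeholders, not taken from this package:

```
units metal
atom_style atomic
read_data data.system           # placeholder data file

fix ml all mdi/qm virial yes    # energy/forces/stress from the MDI engine
fix md all nve                  # LAMMPS still does the time integration

timestep 0.001
run 1000
```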

The mace-mdi command accepts several options:

mace-mdi --help

  -mdi MDI_STRING      MDI initialization string [required]
  --model PATH         Path to MACE model (overrides SEAMM_FF)
  --device DEVICE      PyTorch device (default: cuda:0)
  --dtype {float32,float64}
  --enable-cueq        Enable cuEquivariance acceleration
  --enable-oeq         Enable openEquivariance acceleration
  --log-level LEVEL    DEBUG / INFO / WARNING / ERROR

From lammps.ini (SEAMM)

[local]
installation = conda   # or modules, or local

gpu-code = mpirun --mca mpi_yield_when_idle 1 \
    -np 1 ~/SEAMM/bin/mdi_bind.sh \
    mace-mdi -mdi "-role ENGINE -name MACE -method MPI" \
    : -np 1 ~/SEAMM/bin/mdi_bind.sh \
    lmp -mdi "-role DRIVER -name LAMMPS -method MPI"

As a Python library

from lammps_mdi import MACEEngine

engine = MACEEngine(
    model_path="/path/to/model.model",
    device="cuda:0",
    default_dtype="float32",
    enable_cueq=True,
)
engine.run("-role ENGINE -name MACE -method MPI")

Shell scripts

The package bundles four helper scripts, installed via lammps-mdi install-scripts:

Script          Purpose
mdi_bind.sh     Binds the engine (rank 0) to the GPU + NUMA-local CPUs and the driver (rank 1) to adjacent CPUs; starts an nvidia-smi monitor. For standalone machines.
mdi_monitor.sh  Lightweight wrapper for SLURM/PBS: only starts GPU monitoring; the scheduler handles binding.
gpu_bind.sh     Per-rank GPU binding for native Kokkos LAMMPS (approach A).
cpu_bind.sh     CPU-only binding using L3 cache groups (EPYC 7763).

The CPU/GPU mappings in mdi_bind.sh, gpu_bind.sh, and cpu_bind.sh are currently hard-coded for a dual-GPU EPYC 7763 system. They will be made configurable in a future release.
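The binding logic in those scripts follows a common pattern: look up the MPI rank from the launcher's environment and pick a GPU and CPU set accordingly. A simplified, hypothetical sketch — the core ranges are illustrative, not the EPYC 7763 map the bundled scripts actually use:

```shell
# Toy rank-based binding: rank 0 (engine) gets the GPU plus NUMA-local
# cores, rank 1 (driver) gets adjacent cores. A real wrapper would read
# the rank from OMPI_COMM_WORLD_RANK and exec the wrapped command.
pick_binding() {
    rank="$1"
    if [ "$rank" -eq 0 ]; then
        gpu=0; cpus="0-15"        # engine: GPU 0 + NUMA-local cores (illustrative)
    else
        gpu=""; cpus="16-31"      # driver: adjacent cores, no GPU (illustrative)
    fi
    echo "rank=$rank gpu=$gpu cpus=$cpus"
    # a real wrapper would then run:
    #   CUDA_VISIBLE_DEVICES=$gpu exec taskset -c "$cpus" "$@"
}

pick_binding 0
pick_binding 1
```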

Requirements

Package             Source               Notes
Python ≥ 3.10       HPC module
numpy               HPC module           Do not reinstall
mpi4py              HPC module
pymdi ≥ 1.4         pip                  PyPI package providing import mdi
torch (CUDA)        pip (special index)  Install before lammps-mdi
mace-torch ≥ 0.3    pip
matscipy ≥ 0.8      pip                  CPU fallback neighbor list
pint ≥ 0.20         pip                  Unit conversion
vesin-torch ≥ 0.3   pip (optional)       GPU neighbor lists; strongly recommended
cuequivariance*     pip (optional)       NVIDIA cuEquivariance acceleration
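A quick way to confirm the table above is satisfied in your venv is to probe each module with importlib. This is a sketch in the spirit of lammps-mdi check (the real command's checks and output are not shown here; the module names follow the requirements table):

```python
from importlib import import_module

# Import names for the required packages (pymdi installs as `mdi`,
# mace-torch as `mace`).
REQUIRED = ["numpy", "mpi4py", "mdi", "torch", "mace", "matscipy", "pint"]


def missing(modules):
    """Return the subset of module names that fail to import."""
    gone = []
    for name in modules:
        try:
            import_module(name)
        except ImportError:
            gone.append(name)
    return gone


# Demonstrate with a stdlib module and a deliberately bogus one:
print(missing(["sys", "definitely_not_installed_xyz"]))
# ['definitely_not_installed_xyz']
```

Run `missing(REQUIRED)` inside the activated venv; an empty list means every import resolves.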

Contributing

Issues and pull requests are welcome at https://github.com/molssi-seamm/lammps-mdi.

License

MIT — see LICENSE.
