
lammps-mdi

MDI engine drivers for LAMMPS — run ML forcefields (MACE, and future models) on GPU via the MolSSI Driver Interface, communicating with a standard LAMMPS binary (no Kokkos compilation needed).

Designed for use with SEAMM, but works with any LAMMPS workflow that supports MDI.

How it works

LAMMPS acts as an MDI driver: it handles atom positions, neighbor lists (at the coarse level), and time integration. The lammps-mdi engine process acts as an MDI engine: it receives coordinates from LAMMPS each step, evaluates the ML model on GPU, and returns energies, forces, and (if periodic) the stress tensor.

mpirun -np 1  mace-mdi  -mdi "..."   ← GPU process: MACE on A100
         : -np 1  lmp  -mdi "..." -in input.dat  ← CPU process: time integration

The two processes communicate over MPI via the MDI protocol.
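The driver/engine exchange can be pictured as a command loop on the engine side. The sketch below is a simplified illustration, not lammps-mdi's actual code: the command names (">COORDS", "<ENERGY", "<FORCES", "EXIT") follow the MDI standard, but the `engine_loop` and `FakeComm` helpers are hypothetical stand-ins for the real `mdi` package calls (MDI_Recv_Command, MDI_Recv, MDI_Send).

```python
# Simplified MDI-style engine command loop (illustrative only; the real
# engine uses the `mdi` package over MPI rather than this comm object).
def engine_loop(comm, evaluate):
    """Serve driver commands until EXIT.

    comm must provide recv_command(), recv(), and send();
    evaluate(coords) returns (energy, forces).
    """
    coords = None
    while True:
        cmd = comm.recv_command()
        if cmd == "EXIT":
            break
        elif cmd == ">COORDS":          # driver pushes new positions
            coords = comm.recv()
        elif cmd == "<ENERGY":          # driver pulls the energy
            energy, _ = evaluate(coords)
            comm.send(energy)
        elif cmd == "<FORCES":          # driver pulls the forces
            _, forces = evaluate(coords)
            comm.send(forces)


class FakeComm:
    """Minimal in-memory stand-in for an MDI communicator (testing only)."""
    def __init__(self, script):
        self.script = list(script)      # (command, payload) pairs
        self.sent = []

    def recv_command(self):
        self.cmd, self.payload = self.script.pop(0)
        return self.cmd

    def recv(self):
        return self.payload

    def send(self, data):
        self.sent.append(data)


# One driver step: push coordinates, pull energy and forces, then exit.
comm = FakeComm([(">COORDS", [[0.0, 0.0, 0.0]]), ("<ENERGY", None),
                 ("<FORCES", None), ("EXIT", None)])
engine_loop(comm, lambda xyz: (-1.5, [[0.0, 0.0, 0.0]] * len(xyz)))
print(comm.sent[0])   # -1.5, the energy returned to the driver
```

In the real setup, each `recv`/`send` moves data across the MPI communicator between the LAMMPS driver process and the GPU engine process.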

Supported engines

Engine     Status      Notes
MACE       Supported   MACE-torch ≥ 0.3, vesin-torch neighbor lists, cuEquivariance optional
NequIP     Planned
SevenNet   Planned

Installation

See INSTALL.md for full HPC instructions. Short version:

# 1. Load your LAMMPS module (provides Python, numpy, mpi4py, MDI)
module load LAMMPS/...

# 2. Create a venv that inherits the module stack
python -m venv --system-site-packages ~/venvs/lammps-mdi
source ~/venvs/lammps-mdi/bin/activate

# 3. Install PyTorch with the right CUDA wheel (check your CUDA version first)
lammps-mdi install-torch    # prints the correct command
# e.g.:
pip install torch --index-url https://download.pytorch.org/whl/cu121

# 4. Install lammps-mdi
pip install lammps-mdi[gpu]

# 5. Install bundled shell scripts
lammps-mdi install-scripts

# 6. Verify
lammps-mdi check

Usage

As a console script (recommended)

SEAMM_FF=/path/to/model.model \
mpirun --mca mpi_yield_when_idle 1 \
    -np 1 mdi_bind.sh mace-mdi -mdi "-role ENGINE -name MACE -method MPI" \
    : -np 1 mdi_bind.sh lmp -mdi "-role DRIVER -name LAMMPS -method MPI" -in input.dat

The mace-mdi command accepts several options:

mace-mdi --help

  -mdi MDI_STRING      MDI initialization string [required]
  --model PATH         Path to MACE model (overrides SEAMM_FF)
  --device DEVICE      PyTorch device (default: cuda:0)
  --dtype {float32,float64}
  --enable-cueq        Enable cuEquivariance acceleration
  --enable-oeq         Enable openEquivariance acceleration
  --log-level LEVEL    DEBUG / INFO / WARNING / ERROR
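The option surface above could be modeled with argparse roughly as follows. This is a hypothetical sketch, not the actual lammps-mdi implementation; it does illustrate the documented precedence where --model overrides the SEAMM_FF environment variable.

```python
import argparse
import os

# Hypothetical argparse model of the mace-mdi CLI described above.
def build_parser():
    p = argparse.ArgumentParser(prog="mace-mdi")
    p.add_argument("-mdi", required=True, help="MDI initialization string")
    p.add_argument("--model", default=os.environ.get("SEAMM_FF"),
                   help="path to MACE model (overrides SEAMM_FF)")
    p.add_argument("--device", default="cuda:0", help="PyTorch device")
    p.add_argument("--dtype", choices=["float32", "float64"],
                   default="float32")
    p.add_argument("--enable-cueq", action="store_true",
                   help="enable cuEquivariance acceleration")
    p.add_argument("--enable-oeq", action="store_true",
                   help="enable openEquivariance acceleration")
    p.add_argument("--log-level", default="INFO",
                   choices=["DEBUG", "INFO", "WARNING", "ERROR"])
    return p

# Note the -mdi=... form: the MDI string itself starts with "-", so it is
# attached with "=" to keep argparse from reading it as another option.
args = build_parser().parse_args(
    ["-mdi=-role ENGINE -name MACE -method MPI", "--enable-cueq"])
print(args.device)   # cuda:0 (default)
```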

From lammps.ini (SEAMM)

[local]
installation = conda   # or modules, or local

gpu-code = mpirun --mca mpi_yield_when_idle 1 \
    -np 1 ~/SEAMM/bin/mdi_bind.sh \
    mace-mdi -mdi "-role ENGINE -name MACE -method MPI" \
    : -np 1 ~/SEAMM/bin/mdi_bind.sh \
    lmp -mdi "-role DRIVER -name LAMMPS -method MPI"

As a Python library

from lammps_mdi import MACEEngine

engine = MACEEngine(
    model_path="/path/to/model.model",
    device="cuda:0",
    default_dtype="float32",
    enable_cueq=True,
)
engine.run("-role ENGINE -name MACE -method MPI")

Shell scripts

The package bundles four helper scripts, installed via lammps-mdi install-scripts:

Script          Purpose
mdi_bind.sh     Binds the engine (rank 0) to the GPU and NUMA-local CPUs, and the driver (rank 1) to adjacent CPUs; starts an nvidia-smi monitor. For standalone machines.
mdi_monitor.sh  Lightweight wrapper for SLURM/PBS: only starts GPU monitoring; the scheduler handles binding.
gpu_bind.sh     Per-rank GPU binding for native Kokkos LAMMPS (approach A).
cpu_bind.sh     CPU-only binding using L3 cache groups (EPYC 7763).

The CPU/GPU mappings in mdi_bind.sh, gpu_bind.sh, and cpu_bind.sh are currently hard-coded for a dual-GPU EPYC 7763 system. They will be made configurable in a future release.
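The rank-based binding these wrappers perform can be sketched as below. This is an illustrative stand-in, not the bundled script: the `choose_binding` helper and the node/GPU IDs are assumptions standing in for the hard-coded EPYC 7763 mappings.

```shell
#!/bin/sh
# Illustrative sketch of per-rank binding in the style of mdi_bind.sh.
# The NUMA node and GPU IDs here are placeholder assumptions.
choose_binding() {
    rank="$1"
    if [ "$rank" -eq 0 ]; then
        # engine rank: pin to the GPU's NUMA-local node
        echo "--cpunodebind=0 --membind=0"
    else
        # driver rank: the other NUMA node
        echo "--cpunodebind=1 --membind=1"
    fi
}

rank="${OMPI_COMM_WORLD_RANK:-0}"      # set by Open MPI for each rank
binding="$(choose_binding "$rank")"
[ "$rank" -eq 0 ] && export CUDA_VISIBLE_DEVICES=0
echo "rank $rank: numactl $binding $*"
# A real wrapper would finish with:  exec numactl $binding "$@"
```

Because mpirun launches this wrapper once per rank, each process picks its own binding from its MPI rank before exec'ing the actual command.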

Requirements

Package             Source               Notes
Python ≥ 3.10       HPC module
numpy               HPC module           Do not reinstall
mpi4py              HPC module
pymdi ≥ 1.4         pip                  PyPI package for import mdi
torch (CUDA)        pip (special index)  Install before lammps-mdi
mace-torch ≥ 0.3    pip
matscipy ≥ 0.8      pip                  CPU fallback neighbor list
pint ≥ 0.20         pip                  Unit conversion
vesin-torch ≥ 0.3   pip (optional)       GPU neighbor lists, strongly recommended
cuequivariance*     pip (optional)       NVIDIA cuEquivariance acceleration
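As a quick manual alternative to lammps-mdi check, the requirements above can be probed with the standard library alone. Note the import names differ from the package names in places: pymdi imports as mdi, mace-torch as mace.

```python
import importlib.util

# Report which of the required import names are missing from the
# current environment (venv plus inherited HPC module stack).
def missing_modules(names):
    return [n for n in names if importlib.util.find_spec(n) is None]

required = ["numpy", "mpi4py", "mdi", "torch", "mace", "matscipy", "pint"]
for name in missing_modules(required):
    print(f"missing: {name}")
```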

Contributing

Issues and pull requests are welcome at https://github.com/molssi-seamm/lammps-mdi.

License

MIT — see LICENSE.
