MLIP wrapper for AMBER QM/MM (UMA, ORB, MACE, AIMNet2)

amber-mlips

MLIP (Machine Learning Interatomic Potential) wrapper for AMBER QM/MM via the sander EXTERN interface.

Four model families are currently supported:

  • UMA (fairchem) — default model: uma-s-1p1
  • ORB (orb-models) — default model: orb-v3-conservative-omol
  • MACE (mace) — default model: MACE-OMOL-0
  • AIMNet2 (aimnetcentral) — default model: aimnet2

All backends provide energy and gradient for AMBER QM/MM molecular dynamics and optimization. An optional point-charge embedding correction with xTB is available via --embedcharge.

Requires Python 3.9 or later and AmberTools (sander). AmberTools is free of charge (GNU GPL); sander / sander.MPI are LGPL 2.1.

Quick Start (Default = UMA)

  1. (Optional) Install AmberTools if not already installed. AmberTools25 or later is recommended.
conda config --add channels conda-forge
conda config --add channels dacase
conda config --set channel_priority strict
conda install ambertools-dac=25

The conda package includes sander, sander.MPI (OpenMPI), and requires Python 3.12.
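A quick sanity check before moving on (assumes the conda environment above is active):

which sander       # should resolve inside the active conda environment
which sander.MPI   # only needed later for --mm-ranks > 1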

  2. (Optional) Install xTB. Only needed for --embedcharge.
conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"

The libblas/liblapack specs pin BLAS to OpenBLAS so it is not replaced by the slower Netlib reference implementation. See TECHNICAL_NOTE.md for details.

To build xTB from source (required for CPCM-X solvation via --solvent-model cpcmx):

git clone --depth 1 https://github.com/grimme-lab/xtb.git
cd xtb
cmake -B build -S . \
  -DCMAKE_BUILD_TYPE=Release \
  -DWITH_CPCMX=ON \
  -DBLAS_LIBRARIES=/path/to/libblas.so \
  -DLAPACK_LIBRARIES=/path/to/liblapack.so
make -C build tblite-lib -j8   # build tblite first to avoid a parallel build race
make -C build xtb-exe -j8

The built binary is at build/xtb. Add it to your PATH or use --xtb-cmd /path/to/build/xtb. For CPCM-X, set CPXHOME to the CPCM-X source directory (e.g., build/_deps/cpcmx-src/). Requires GCC >= 10 (gfortran 8 causes internal compiler errors). See also: https://github.com/grimme-lab/xtb, https://github.com/grimme-lab/CPCM-X
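As an illustration, a typical post-build setup might look like this (paths follow the build commands above and are otherwise arbitrary):

export PATH="$PWD/build:$PATH"               # or pass --xtb-cmd "$PWD/build/xtb" instead
export CPXHOME="$PWD/build/_deps/cpcmx-src"  # needed for --solvent-model cpcmx
xtb --version                                # confirm the binary runs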

  3. Install a PyTorch build that matches your CUDA environment.
pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu129
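To confirm that the installed build actually sees your GPU, a quick one-line check:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"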
  4. Install the package with the UMA backend. For ORB/MACE/AIMNet2, replace uma accordingly.
pip install "amber-mlips[uma]"
  5. Log in to Hugging Face for UMA model access (not required for ORB/MACE/AIMNet2).
huggingface-cli login

The UMA model is hosted on the Hugging Face Hub; you need to log in once (see https://github.com/facebookresearch/fairchem).
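For non-interactive runs (e.g., batch jobs), the standard huggingface_hub token mechanisms also work; the token value below is a placeholder:

huggingface-cli whoami          # check whether you are already authenticated
export HF_TOKEN=hf_xxxxxxxx     # token from https://huggingface.co/settings/tokens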

  6. Prepare an AMBER input file. Only qm_theory and ml_keywords are plugin-specific; everything else is native AMBER &qmmm. For examples, see the inputs in examples/*.in.
 &cntrl
  imin=0, irest=0, ntx=1,
  nstlim=1000, dt=0.001,
  ntb=0, ntt=3, gamma_ln=5.0,
  ntpr=10, ntwx=10, ntwr=100,
  ifqnt=1,
 /
 &qmmm
  qmmask=':2',
  qmcharge=0,
  spin=1,
  qm_theory='uma',
  ml_keywords='--model uma-s-1p1',
  qmcut=12.0,
  qmshake=0,
 /

Other backends:

  qm_theory='orb',     ml_keywords='--model orb-v3-conservative-omol',
  qm_theory='mace',    ml_keywords='--model MACE-OMOL-0',
  qm_theory='aimnet2', ml_keywords='--model aimnet2',

  7. Run with amber-mlips and standard sander-like flags.
amber-mlips -O \
  -i mlmm.in -o mlmm.out \
  -p leap.parm7 -c md.rst7 \
  -r mlmm.rst7 -x mlmm.nc -inf mlmm.info

Point-Charge Embedding Correction (xTB)

--embedcharge adds an xTB-based correction for electrostatic embedding of MM point charges into the QM region.

Install xTB (if not already installed in Quick Start step 1):

conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"

Use --embedcharge in ml_keywords:

  ml_keywords='--model uma-s-1p1 --embedcharge',

This computes ΔE = E_xTB(embed) - E_xTB(no-embed) and adds the correction to the MLIP energy and forces.

ML-Only MD (Full-System MLIP)

See the ML-Only MD section in OPTIONS.md for full-system MLIP molecular dynamics (qmmask='@*') with implicit solvent (non-periodic only).
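A ready-to-run instance of this mode ships with the package; assuming the example topology and restart used throughout examples/ (see the Examples section below), it launches like any other input:

cd examples
amber-mlips -O -i uma_mlonly_implicit.in -o mlonly.out -p leap.parm7 -c md.rst7 -r mlonly.rst7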

MM MPI Parallelism

The ML evaluation path is always single-process. The MM side (sander) can use MPI:

amber-mlips --mm-ranks 16 -O -i mlmm.in -o mlmm.out -p leap.parm7 -c md.rst7 -r mlmm.rst7
  • --mm-ranks 1 (default): runs sander directly.
  • --mm-ranks > 1: uses mpirun/mpiexec + sander.MPI. Requires AmberTools built with MPI support.

Note: AMBER 24 (and earlier) has a bug in qm2_extern_module.F90 that corrupts forces in multi-rank EXTERN runs. Use AmberTools 25 or later for --mm-ranks > 1.
Also place --mm-ranks between amber-mlips and -O (e.g., amber-mlips --mm-ranks 16 -O ...).
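If several MPI stacks coexist on a machine, pinning the launcher explicitly can avoid picking up the wrong one (the mpirun path below is illustrative):

amber-mlips --mm-ranks 16 --mpi-bin /usr/lib64/openmpi/bin/mpirun -O \
  -i mlmm.in -o mlmm.out -p leap.parm7 -c md.rst7 -r mlmm.rst7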

Installing Model Families

pip install "amber-mlips[uma]"         # UMA (default)
pip install "amber-mlips[orb]"         # ORB
pip install "amber-mlips[mace]"        # MACE
pip install "amber-mlips[aimnet2]"     # AIMNet2
pip install amber-mlips                # core only (no ML backend)

Note: UMA and MACE have a dependency conflict (e3nn). Use separate environments.
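For example, one way to keep the two apart (environment names are arbitrary):

conda create -n amber-uma python=3.12 -y
conda run -n amber-uma pip install "amber-mlips[uma]"

conda create -n amber-mace python=3.12 -y
conda run -n amber-mace pip install "amber-mlips[mace]"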

Local install:

git clone https://github.com/t-0hmura/amber-mlips.git
cd amber-mlips
pip install -e ".[uma]"

Model download notes:

  • UMA: Hosted on Hugging Face Hub. Run huggingface-cli login once.
  • ORB / MACE / AIMNet2: Downloaded automatically on first use.

Examples

Ready-to-run examples are in the examples/ directory with a protein-ligand system (1IL4, 50,387 atoms, 115 QM atoms).

File                     Backend   Description
uma.in                   UMA       uma-s-1p1
orb.in                   ORB       orb-v3-conservative-omol
mace.in                  MACE      MACE-OMOL-0
aimnet2.in               AIMNet2   aimnet2
uma_embedcharge.in       UMA       uma-s-1p1 + xTB embedcharge
uma_mlonly_implicit.in   UMA       ML-only + xTB implicit solvent (non-periodic, ALPB)

UMA, ORB, and AIMNet2 can share one environment; MACE requires a separate one (see Installing Model Families). Run the example matching your installed backend:

cd examples
amber-mlips --mm-ranks 16 -O -i uma.in -o uma.out -p leap.parm7 -c md.rst7 -r uma.rst7

Performance Reference

Benchmark on a protein-ligand system (1IL4, 50,387 atoms, 115 ML-region atoms):

                  UMA           UMA + embedcharge
Model             uma-s-1p1     uma-s-1p1 --embedcharge
Total atoms       50,387        50,387
ML region atoms   115           115
dt                0.0005 ps     0.0005 ps
Per step          ~135 ms       ~579 ms
Speed             ~321 ps/day   ~75 ps/day

Environment: AMD Ryzen 7950X3D / 4.20 GHz (32 threads) + RTX 5080 (VRAM 16 GB), RAM 128 GB. --mm-ranks 16 used for MM MPI parallelism.

Advanced Options

See OPTIONS.md for all wrapper and backend-specific options. For internal architecture details, see TECHNICAL_NOTE.md.

Troubleshooting

  • amber-mlips command not found — Activate the conda/venv environment where the package is installed.
  • sander not found — Install AmberTools (conda install ambertools-dac=25), or use --sander-bin /path/to/sander.
  • UMA model download fails (401/403) — Run huggingface-cli login. Some models require access approval on Hugging Face.
  • MPI errors with --mm-ranks > 1 — Ensure mpirun/mpiexec is available. Use --mpi-bin to specify explicitly.
  • Works interactively but fails in batch jobs — Use --sander-bin with an absolute path.
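
For batch jobs, a sketch that pins everything to absolute paths so nothing depends on the interactive shell's setup (all paths illustrative):

/opt/envs/amber-uma/bin/amber-mlips \
  --sander-bin /opt/ambertools/bin/sander \
  -O -i mlmm.in -o mlmm.out -p leap.parm7 -c md.rst7 -r mlmm.rst7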

Citation

If you use this package, please cite:

@software{ohmura2026ambermlips,
  author       = {Ohmura, Takuto},
  title        = {amber-mlips},
  year         = {2026},
  version      = {1.1.1},
  url          = {https://github.com/t-0hmura/amber-mlips},
  license      = {MIT},
  doi          = {10.5281/zenodo.18942483}
}
