
amber-mlips

MLIP wrapper for AMBER QM/MM (UMA, ORB, MACE, AIMNet2)

MLIP (Machine Learning Interatomic Potential) wrapper for AMBER QM/MM via the sander EXTERN interface.

Four model families are currently supported:

  • UMA (fairchem) — default model: uma-s-1p1
  • ORB (orb-models) — default model: orb-v3-conservative-omol
  • MACE (mace) — default model: MACE-OMOL-0
  • AIMNet2 (aimnetcentral) — default model: aimnet2

All backends provide energy and gradient for AMBER QM/MM molecular dynamics and optimization. An optional point-charge embedding correction with xTB is available via --embedcharge.

Requires Python 3.9 or later and AmberTools (sander). AmberTools is free of charge (GNU GPL); sander / sander.MPI are LGPL 2.1.

Quick Start (Default = UMA)

  1. (Optional) Install AmberTools if not already installed. AmberTools 25 or later is recommended.
conda config --add channels conda-forge
conda config --add channels dacase
conda config --set channel_priority strict
conda install ambertools-dac=25

The conda package includes sander and sander.MPI (built against OpenMPI) and requires Python 3.12.

  2. (Optional) Install xTB. Only needed for --embedcharge.
conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"

The libblas/liblapack specs prevent the BLAS backend from being swapped out for the slower netlib reference implementation. See TECHNICAL_NOTE.md for details. You can also build xTB from source.

  3. Install a PyTorch build that matches your CUDA environment.
pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu129
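
Before installing the wrapper, it is worth confirming that this PyTorch build actually sees your GPU. A quick check with plain PyTorch (no amber-mlips involved):

# Sanity check: the installed torch build should report CUDA support.
import torch

print(torch.__version__)            # e.g. 2.8.0+cu129
print(torch.cuda.is_available())    # True on a working CUDA setup
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))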
  4. Install the package with the UMA backend. For ORB/MACE/AIMNet2, replace uma accordingly.
pip install "amber-mlips[uma]"
  5. Log in to Hugging Face for UMA model access. (Not required for ORB/MACE/AIMNet2.)
huggingface-cli login

The UMA model is hosted on the Hugging Face Hub, so you need to log in once (see https://github.com/facebookresearch/fairchem).
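
In batch jobs where the interactive prompt is impractical, the token can also be supplied programmatically via the huggingface_hub Python API (the HF_TOKEN environment variable here is one you set yourself; it is not defined by this package):

# Non-interactive alternative to `huggingface-cli login`.
import os
from huggingface_hub import login

login(token=os.environ["HF_TOKEN"])  # set HF_TOKEN in your job script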

  6. Prepare an AMBER input file. Only qm_theory and ml_keywords are plugin-specific; everything else is native AMBER &qmmm input. For examples, see examples/*.in.
 &cntrl
  imin=0, irest=0, ntx=1,
  nstlim=1000, dt=0.001,
  ntb=0, ntt=3, gamma_ln=5.0,
  ntpr=10, ntwx=10, ntwr=100,
  ifqnt=1,
 /
 &qmmm
  qmmask=':2',
  qmcharge=0,
  spin=1,
  qm_theory='uma',
  ml_keywords='--model uma-s-1p1',
  qmcut=12.0,
  qmshake=0,
 /

Other backends:

  qm_theory='orb',     ml_keywords='--model orb-v3-conservative-omol',
  qm_theory='mace',    ml_keywords='--model MACE-OMOL-0',
  qm_theory='aimnet2', ml_keywords='--model aimnet2',

  7. Run with amber-mlips and standard sander-like flags.
amber-mlips -O \
  -i mlmm.in -o mlmm.out \
  -p leap.parm7 -c md.rst7 \
  -r mlmm.rst7 -x mlmm.nc -inf mlmm.info

Point-Charge Embedding Correction (xTB)

--embedcharge adds an xTB-based correction for electrostatic embedding of MM point charges into the QM region.

Install xTB (if not already installed in Quick Start step 1):

conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"

Use --embedcharge in ml_keywords:

  ml_keywords='--model uma-s-1p1 --embedcharge',

This computes dE = E_xTB(embed) - E_xTB(no-embed) and adds the correction to the MLIP energy and forces.
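
The bookkeeping is plain addition in energy and force space. A minimal NumPy sketch of the idea (illustrative only; the function and argument names are hypothetical, not the package's internal API):

import numpy as np

def apply_embed_correction(e_mlip, f_mlip,
                           e_xtb_embed, f_xtb_embed,
                           e_xtb_vac, f_xtb_vac):
    """Add dE = E_xTB(embed) - E_xTB(no-embed) to the MLIP result.

    Energies are scalars; forces are (n_atoms, 3) arrays in
    consistent units. Hypothetical helper, for illustration only.
    """
    d_e = e_xtb_embed - e_xtb_vac
    d_f = np.asarray(f_xtb_embed) - np.asarray(f_xtb_vac)
    return e_mlip + d_e, np.asarray(f_mlip) + d_f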

MM MPI Parallelism

The ML evaluation path is always single-process. The MM side (sander) can use MPI:

amber-mlips --mm-ranks 16 -O -i mlmm.in -o mlmm.out -p leap.parm7 -c md.rst7 -r mlmm.rst7
  • --mm-ranks 1 (default): runs sander directly.
  • --mm-ranks > 1: uses mpirun/mpiexec + sander.MPI. Requires AmberTools built with MPI support.

Note: AMBER 24 (and earlier) has a bug in qm2_extern_module.F90 that corrupts forces in multi-rank EXTERN runs. Use AmberTools 25 or later for --mm-ranks > 1.
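
Under the hood, the choice between the two launch modes amounts to wrapping the sander command in mpirun when more than one rank is requested. A simplified Python sketch of that dispatch, assuming mpirun and the sander binaries are on PATH (illustrative, not the wrapper's actual source):

import subprocess

def run_mm(sander_args, mm_ranks=1):
    # mm_ranks == 1: plain sander; mm_ranks > 1: mpirun + sander.MPI
    if mm_ranks > 1:
        cmd = ["mpirun", "-np", str(mm_ranks), "sander.MPI", *sander_args]
    else:
        cmd = ["sander", *sander_args]
    subprocess.run(cmd, check=True)

# run_mm(["-O", "-i", "mlmm.in", "-o", "mlmm.out",
#         "-p", "leap.parm7", "-c", "md.rst7", "-r", "mlmm.rst7"],
#        mm_ranks=16)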

Installing Model Families

pip install "amber-mlips[uma]"         # UMA (default)
pip install "amber-mlips[orb]"         # ORB
pip install "amber-mlips[mace]"        # MACE
pip install "amber-mlips[aimnet2]"     # AIMNet2
pip install amber-mlips                # core only (no ML backend)

Note: UMA and MACE have conflicting e3nn dependency requirements; install them in separate environments.

Local install:

git clone https://github.com/t-0hmura/amber-mlips.git
cd amber-mlips
pip install -e ".[uma]"

Model download notes:

  • UMA: Hosted on Hugging Face Hub. Run huggingface-cli login once.
  • ORB / MACE / AIMNet2: Downloaded automatically on first use.

Examples

Ready-to-run examples are in the examples/ directory with a protein-ligand system (1IL4, 50,387 atoms, 115 QM atoms).

File                Backend  Description
uma.in              UMA      uma-s-1p1
orb.in              ORB      orb-v3-conservative-omol
mace.in             MACE     MACE-OMOL-0
aimnet2.in          AIMNet2  aimnet2
uma_embedcharge.in  UMA      uma-s-1p1 + xTB embedcharge

UMA, ORB, and AIMNet2 can share one environment; MACE requires a separate one (see Installing Model Families). Run the example matching your installed backend:

cd examples
amber-mlips -O -i uma.in -o uma.out -p leap.parm7 -c md.rst7 -r uma.rst7

Performance Reference

Benchmark on a protein-ligand system (1IL4, 50,387 atoms, 115 ML-region atoms):

                 UMA          UMA + embedcharge
Model            uma-s-1p1    uma-s-1p1 --embedcharge
Total atoms      50,387       50,387
ML region atoms  115          115
dt               0.0005 ps    0.0005 ps
Per step         ~135 ms      ~579 ms
Speed            ~321 ps/day  ~75 ps/day

Environment: AMD Ryzen 7950X3D / 4.20 GHz (32 threads) + RTX 5080 (VRAM 16 GB), RAM 128 GB. --mm-ranks 16 used for MM MPI parallelism.
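
The throughput numbers follow directly from the per-step wall time and the 0.5 fs timestep, as a quick back-of-the-envelope check confirms:

# ps/day = (seconds per day / seconds per step) * dt
def ps_per_day(step_ms, dt_ps=0.0005):
    return (86400.0 / (step_ms / 1000.0)) * dt_ps

print(ps_per_day(135))   # ~320 ps/day (table: ~321)
print(ps_per_day(579))   # ~74.6 ps/day (table: ~75)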

Upstream Model Sources

The backends wrap models from the fairchem (UMA), orb-models (ORB), mace (MACE), and aimnetcentral (AIMNet2) projects; see the respective upstream repositories for model details and licenses.

Advanced Options

See OPTIONS.md for all wrapper and backend-specific options. For internal architecture details, see TECHNICAL_NOTE.md.

Troubleshooting

  • amber-mlips command not found — Activate the conda/venv environment where the package is installed.
  • sander not found — Install AmberTools (conda install ambertools-dac=25), or use --sander-bin /path/to/sander.
  • UMA model download fails (401/403) — Run huggingface-cli login. Some models require access approval on Hugging Face.
  • MPI errors with --mm-ranks > 1 — Ensure mpirun/mpiexec is available. Use --mpi-bin to specify explicitly.
  • Works interactively but fails in batch jobs — Use --sander-bin with an absolute path.

Citation

If you use this package, please cite:

@software{ohmura2026ambermlips,
  author       = {Ohmura, Takuto},
  title        = {amber-mlips},
  year         = {2026},
  version      = {1.0.0},
  url          = {https://github.com/t-0hmura/amber-mlips},
  license      = {MIT},
  doi          = {10.5281/zenodo.18871123}
}

