MLIP wrapper for AMBER QM/MM (UMA, ORB, MACE, AIMNet2)
amber-mlips
MLIP (Machine Learning Interatomic Potential) wrapper for AMBER QM/MM via sander EXTERN interface.
Four model families are currently supported:

- UMA (fairchem), default model: `uma-s-1p1`
- ORB (orb-models), default model: `orb-v3-conservative-omol`
- MACE (mace), default model: `MACE-OMOL-0`
- AIMNet2 (aimnetcentral), default model: `aimnet2`
All backends provide energies and gradients for AMBER QM/MM molecular dynamics and optimization.
An optional point-charge embedding correction with xTB is available via `--embedcharge`.
Requires Python 3.9 or later and AmberTools (sander).
AmberTools is free of charge (GNU GPL); sander / sander.MPI are LGPL 2.1.
Quick Start (Default = UMA)
- (Optional) Install AmberTools if not already installed. AmberTools25 or later is recommended.

  ```bash
  conda config --add channels conda-forge
  conda config --add channels dacase
  conda config --set channel_priority strict
  conda install ambertools-dac=25
  ```

  The conda package includes sander and sander.MPI (OpenMPI), and requires Python 3.12.
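  As a quick sanity check (assuming the conda environment is active), confirm that both binaries resolve:

  ```bash
  which sander sander.MPI   # both should point inside the conda environment
  ```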
- (Optional) Install xTB. Only needed for `--embedcharge`.

  ```bash
  conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"
  ```

  The libblas/liblapack specs prevent the BLAS library from being swapped for the slower Netlib reference implementation. See TECHNICAL_NOTE.md for details.
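  To confirm the conda-installed binary is visible:

  ```bash
  xtb --version
  ```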
  To build xTB from source (required for CPCM-X solvation via `--solvent-model cpcmx`):

  ```bash
  git clone --depth 1 https://github.com/grimme-lab/xtb.git
  cd xtb
  cmake -B build -S . \
      -DCMAKE_BUILD_TYPE=Release \
      -DWITH_CPCMX=ON \
      -DBLAS_LIBRARIES=/path/to/libblas.so \
      -DLAPACK_LIBRARIES=/path/to/liblapack.so
  make -C build tblite-lib -j8  # build tblite first to avoid a parallel build race
  make -C build xtb-exe -j8
  ```

  The built binary is at `build/xtb`. Add it to your PATH or use `--xtb-cmd /path/to/build/xtb`.
  For CPCM-X, set `CPXHOME` to the CPCM-X source directory (e.g., `build/_deps/cpcmx-src/`).
  Requires GCC >= 10 (gfortran 8 causes internal compiler errors).
  See also: https://github.com/grimme-lab/xtb and https://github.com/grimme-lab/CPCM-X
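  After a successful build, a minimal setup sketch, with paths taken from the commands above (adjust to your checkout location):

  ```bash
  export PATH="$PWD/build:$PATH"                 # expose the built xtb binary
  export CPXHOME="$PWD/build/_deps/cpcmx-src/"   # CPCM-X source dir for --solvent-model cpcmx
  ```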
- Install PyTorch suitable for your CUDA environment.

  ```bash
  pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu129
  ```
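  To verify the install sees your GPU (a generic PyTorch check, not specific to this package):

  ```bash
  python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
  ```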
- Install the package with the UMA backend. For ORB/MACE/AIMNet2, replace `uma` accordingly.

  ```bash
  pip install "amber-mlips[uma]"
  ```
- Log in to Hugging Face for UMA model access (not required for ORB/MACE/AIMNet2). The UMA model is hosted on the Hugging Face Hub, so you need to log in once (see https://github.com/facebookresearch/fairchem):

  ```bash
  huggingface-cli login
  ```
- Prepare an AMBER input file. Only `qm_theory` and `ml_keywords` are plugin-specific; everything else is native AMBER `&qmmm`. For examples, see the inputs in examples/*.in.
  ```
  &cntrl
   imin=0, irest=0, ntx=1,
   nstlim=1000, dt=0.001,
   ntb=0, ntt=3, gamma_ln=5.0,
   ntpr=10, ntwx=10, ntwr=100,
   ifqnt=1,
  /
  &qmmm
   qmmask=':2',
   qmcharge=0,
   spin=1,
   qm_theory='uma',
   ml_keywords='--model uma-s-1p1',
   qmcut=12.0,
   qmshake=0,
  /
  ```
  Other backends:

  ```
  qm_theory='orb',     ml_keywords='--model orb-v3-conservative-omol',
  qm_theory='mace',    ml_keywords='--model MACE-OMOL-0',
  qm_theory='aimnet2', ml_keywords='--model aimnet2',
  ```
- Run with `amber-mlips` and standard `sander`-like flags.

  ```bash
  amber-mlips -O \
      -i mlmm.in -o mlmm.out \
      -p leap.parm7 -c md.rst7 \
      -r mlmm.rst7 -x mlmm.nc -inf mlmm.info
  ```
Point-Charge Embedding Correction (xTB)
`--embedcharge` adds an xTB-based correction for electrostatic embedding of MM point charges into the QM region.

Install xTB (if not already installed in the Quick Start):

```bash
conda install xtb "libblas=*=*openblas" "liblapack=*=*openblas"
```

Use `--embedcharge` in `ml_keywords`:

```
ml_keywords='--model uma-s-1p1 --embedcharge',
```

This computes dE = E_xTB(embed) - E_xTB(no-embed) and adds the correction to the MLIP energy and forces.
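Putting it together, a minimal `&qmmm` block for UMA with the correction enabled (this mirrors the Quick Start input; only `ml_keywords` changes):

```
&qmmm
 qmmask=':2',
 qmcharge=0,
 spin=1,
 qm_theory='uma',
 ml_keywords='--model uma-s-1p1 --embedcharge',
 qmcut=12.0,
 qmshake=0,
/
```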
ML-Only MD (Full-System MLIP)
See the ML-Only MD section in OPTIONS.md for full-system MLIP molecular dynamics (`qmmask='@*'`) with implicit solvent (non-periodic only); a minimal sketch follows.
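A rough sketch of the whole-system selection, assuming the other `&qmmm` keys stay as in the QM/MM example (the implicit-solvent flags themselves are documented in OPTIONS.md):

```
&qmmm
 qmmask='@*',
 qmcharge=0,
 spin=1,
 qm_theory='uma',
 ml_keywords='--model uma-s-1p1',
/
```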
MM MPI Parallelism
The ML evaluation path is always single-process. The MM side (`sander`) can use MPI:

```bash
amber-mlips --mm-ranks 16 -O -i mlmm.in -o mlmm.out -p leap.parm7 -c md.rst7 -r mlmm.rst7
```

- `--mm-ranks 1` (default): runs `sander` directly.
- `--mm-ranks > 1`: uses `mpirun`/`mpiexec` + `sander.MPI`. Requires AmberTools built with MPI support.

Note: AMBER 24 (and earlier) has a bug in `qm2_extern_module.F90` that corrupts forces in multi-rank EXTERN runs. Use AmberTools 25 or later for `--mm-ranks > 1`. Also place `--mm-ranks` between `amber-mlips` and `-O` (e.g., `amber-mlips --mm-ranks 16 -O ...`).
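If several MPI launchers are installed, or `mpirun` is not on PATH, the launcher can be pinned with `--mpi-bin` (see Troubleshooting; the launcher path below is a placeholder):

```bash
amber-mlips --mm-ranks 16 --mpi-bin /opt/openmpi/bin/mpirun -O \
    -i mlmm.in -o mlmm.out -p leap.parm7 -c md.rst7 -r mlmm.rst7
```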
Installing Model Families
```bash
pip install "amber-mlips[uma]"      # UMA (default)
pip install "amber-mlips[orb]"      # ORB
pip install "amber-mlips[mace]"     # MACE
pip install "amber-mlips[aimnet2]"  # AIMNet2
pip install amber-mlips             # core only (no ML backend)
```

Note: UMA and MACE have a dependency conflict (`e3nn`). Use separate environments.
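For example, one way to keep the two stacks apart (environment names are arbitrary):

```bash
python -m venv uma-env
uma-env/bin/pip install "amber-mlips[uma]"

python -m venv mace-env
mace-env/bin/pip install "amber-mlips[mace]"
```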
Local install:
```bash
git clone https://github.com/t-0hmura/amber-mlips.git
cd amber-mlips
pip install -e ".[uma]"
```
Model download notes:

- UMA: hosted on the Hugging Face Hub. Run `huggingface-cli login` once.
- ORB / MACE / AIMNet2: downloaded automatically on first use.
Examples
Ready-to-run examples are in the examples/ directory with a protein-ligand system (1IL4, 50,387 atoms, 115 QM atoms).
| File | Backend | Description |
|---|---|---|
| `uma.in` | UMA | uma-s-1p1 |
| `orb.in` | ORB | orb-v3-conservative-omol |
| `mace.in` | MACE | MACE-OMOL-0 |
| `aimnet2.in` | AIMNet2 | aimnet2 |
| `uma_embedcharge.in` | UMA | uma-s-1p1 + xTB embedcharge |
| `uma_mlonly_implicit.in` | UMA | ML-only + xTB implicit solvent (non-periodic, ALPB) |
UMA, ORB, and AIMNet2 can share one environment; MACE requires a separate one (see Installing Model Families). Run the example matching your installed backend:
```bash
cd examples
amber-mlips --mm-ranks 16 -O -i uma.in -o uma.out -p leap.parm7 -c md.rst7 -r uma.rst7
```
Performance Reference
Benchmark on a protein-ligand system (1IL4, 50,387 atoms, 115 ML-region atoms):

| | UMA | UMA + embedcharge |
|---|---|---|
| Model | `uma-s-1p1` | `uma-s-1p1 --embedcharge` |
| Total atoms | 50,387 | 50,387 |
| ML region atoms | 115 | 115 |
| dt | 0.0005 ps | 0.0005 ps |
| Per step | ~135 ms | ~579 ms |
| Speed | ~321 ps/day | ~75 ps/day |

Environment: AMD Ryzen 7950X3D / 4.20 GHz (32 threads) + RTX 5080 (16 GB VRAM), 128 GB RAM. `--mm-ranks 16` was used for MM MPI parallelism.
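The speed rows follow from the timestep and per-step cost; for the plain UMA column:

```bash
# ps/day = dt [ps] * 86400 [s/day] / per-step wall time [s]
python -c "print(0.0005 * 86400 / 0.135)"   # ~320 ps/day
```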
Upstream Model Sources
- UMA / FAIR-Chem: https://github.com/facebookresearch/fairchem
- ORB / orb-models: https://github.com/orbital-materials/orb-models
- MACE: https://github.com/ACEsuit/mace
- AIMNet2: https://github.com/isayevlab/aimnetcentral
Advanced Options
See OPTIONS.md for all wrapper and backend-specific options.
For internal architecture details, see TECHNICAL_NOTE.md.
Troubleshooting
- `amber-mlips` command not found: activate the conda/venv environment where the package is installed.
- `sander` not found: install AmberTools (`conda install ambertools-dac=25`), or use `--sander-bin /path/to/sander`.
- UMA model download fails (401/403): run `huggingface-cli login`. Some models require access approval on Hugging Face.
- MPI errors with `--mm-ranks > 1`: ensure `mpirun`/`mpiexec` is available. Use `--mpi-bin` to specify the launcher explicitly.
- Works interactively but fails in batch jobs: use `--sander-bin` with an absolute path.
References
- AMBER24 manual (detailed MD settings): https://ambermd.org/doc12/Amber24.pdf
Citation
If you use this package, please cite:
```bibtex
@software{ohmura2026ambermlips,
  author  = {Ohmura, Takuto},
  title   = {amber-mlips},
  year    = {2026},
  version = {1.1.1},
  url     = {https://github.com/t-0hmura/amber-mlips},
  license = {MIT},
  doi     = {10.5281/zenodo.18942483}
}
```