
MLIP plugins for Gaussian16 External (UMA, ORB, MACE, AIMNet2)


g16-mlips

MLIP (Machine Learning Interatomic Potential) plugins for Gaussian 16 External interface.

Four model families are currently supported:

  • UMA (fairchem) — default model: uma-s-1p1
  • ORB (orb-models) — default model: orb_v3_conservative_omol
  • MACE (mace) — default model: MACE-OMOL-0
  • AIMNet2 (aimnetcentral) — default model: aimnet2

All backends provide energy, gradient, and analytical Hessian to Gaussian 16.

The model server starts automatically and stays resident, so repeated calls during optimization are fast.

Requires Python 3.9 or later.
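The resident-server idea can be pictured with a small sketch. This is illustrative only, not the package's actual code: the first invocation loads the model and listens on a Unix socket, and every later Gaussian External call is a thin client that reuses the already-loaded model. The socket path and helper names are hypothetical.

```python
# Illustrative sketch of the resident-server pattern (NOT g16-mlips internals):
# load the model once, then answer one request per connection.
import os
import socket

SOCK_PATH = "/tmp/mlip_server.sock"  # hypothetical socket path

def serve(model, sock_path=SOCK_PATH):
    """Run forever, answering one request per connection; the model stays loaded."""
    if os.path.exists(sock_path):
        os.unlink(sock_path)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(65536)     # serialized geometry from the client
            conn.sendall(model(request))   # reply; no model reload between calls

def ask(payload: bytes, sock_path=SOCK_PATH) -> bytes:
    """Client side: one cheap round trip per Gaussian External call."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as cli:
        cli.connect(sock_path)
        cli.sendall(payload)
        return cli.recv(65536)
```

Because the expensive step (loading model weights onto CPU/GPU) happens once, each subsequent optimization step only pays for inference.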

Quick Start (Default = UMA)

  1. Install PyTorch for your environment (CUDA/CPU).
pip install torch==2.8.0 --index-url https://download.pytorch.org/whl/cu129
  2. Install the package with the UMA profile. If you need ORB/MACE/AIMNet2, use g16-mlips[orb] / g16-mlips[mace] / g16-mlips[aimnet2].
pip install "g16-mlips[uma]"
  3. Log in to Hugging Face for UMA model access (not required for ORB/MACE/AIMNet2).
huggingface-cli login
  4. Use in a Gaussian input file. nomicro is required. If you use ORB/MACE/AIMNet2, use external="orb" / external="mace" / external="aimnet2". For detailed Gaussian External usage, see https://gaussian.com/external/
%nprocshared=8
%mem=32GB
%chk=water_ext.chk
#p external="uma" opt(nomicro)

Water external UMA example

0 1
O  0.000000  0.000000  0.000000
H  0.758602  0.000000  0.504284
H -0.758602  0.000000  0.504284

Other backends:

#p external="orb" opt(nomicro)
#p external="mace" opt(nomicro)
#p external="aimnet2" opt(nomicro)

Important: For Gaussian External geometry optimization, always include nomicro in opt(...). Without it, Gaussian uses micro-iterations that assume an internal gradient routine, which is incompatible with the external interface.

Analytical Hessian (optional)

Optimization and IRC can run without providing an initial Hessian — Gaussian builds one internally using estimated force constants. Providing an MLIP analytical Hessian via freq + readfc improves convergence, especially for TS searches.

Gaussian freq (with external=...) is the only path that requests the plugin's analytical Hessian directly.

Frequency calculation

%nprocshared=8
%mem=32GB
%chk=cla_ext.chk
#p external="uma" freq

CLA freq UMA

0 1
...

Gaussian sends igrd=2 and stores the result in the .chk file.
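The igrd flag travels through the file-based External protocol described at https://gaussian.com/external/: Gaussian writes a Gau-*.EIn file whose first line carries the atom count, the derivative request, the charge, and the spin multiplicity. A minimal sketch of reading that header (the helper name is ours; the field order follows the Gaussian External documentation):

```python
# Parse the first line of a Gaussian External input (.EIn) file.
# Per gaussian.com/external, it reads: natoms  nderiv  charge  spin,
# where nderiv (igrd) is 0 = energy only, 1 = + gradient, 2 = + Hessian.
def parse_ein_header(line: str) -> dict:
    natoms, nderiv, charge, spin = (int(x) for x in line.split()[:4])
    return {"natoms": natoms, "nderiv": nderiv, "charge": charge, "spin": spin}

# A freq job on water would hand the plugin a header like this:
hdr = parse_ein_header("3 2 0 1")  # nderiv == 2 -> analytical Hessian requested
```

A plugin can branch on nderiv to decide whether to compute only the energy, the gradient as well, or the full second-derivative matrix.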

Using analytical Hessian in optimization jobs

To use MLIP analytical Hessian in opt/irc, read the Hessian from an existing checkpoint using Gaussian %oldchk + readfc.

%nprocshared=8
%mem=32GB
%chk=cla_ext.chk
%oldchk=cla_ext.chk

#p external="uma" opt(readfc,nomicro)

CLA opt UMA

0 1
...

readfc reads the force constants from %oldchk; this applies to both opt and irc runs. Note that freq is the only job type that requests the analytical Hessian (igrd=2) from the plugin; opt and irc never request it directly.
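The freq-then-readfc two-step can be scripted by generating the paired input decks. A minimal sketch built from the route sections shown above (the helper name and resource lines are illustrative):

```python
# Build the two-step input pair: a freq job that stores the MLIP analytical
# Hessian in the checkpoint, then an opt job that reads it back via readfc.
def make_pair(chk: str, backend: str, geometry: str) -> tuple:
    header = f"%nprocshared=8\n%mem=32GB\n%chk={chk}\n"
    freq = (header
            + f'#p external="{backend}" freq\n\n'
            + f"{backend} freq\n\n{geometry}\n")
    opt = (header
           + f"%oldchk={chk}\n"
           + f'#p external="{backend}" opt(readfc,nomicro)\n\n'
           + f"{backend} opt\n\n{geometry}\n")
    return freq, opt
```

Run the freq deck first so the force constants land in the checkpoint, then the opt deck picks them up through %oldchk.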

Note: Run uma --list-models to see available models. If the uma alias conflicts in your environment, use g16-mlips-uma instead.

Additional examples:

  • examples/cla_freq_uma.gjf + examples/cla_uma.gjf
  • examples/sn2_freq_orb.gjf + examples/sn2_orb.gjf
  • examples/water_freq_mace.gjf + examples/water_mace.gjf
  • examples/cla_freq_aimnet2.gjf + examples/cla_aimnet2.gjf

Installing Model Families

pip install "g16-mlips[uma]"         # UMA (default)
pip install "g16-mlips[orb]"         # ORB
pip install "g16-mlips[mace]"        # MACE
pip install "g16-mlips[orb,mace]"    # ORB + MACE
pip install "g16-mlips[aimnet2]"     # AIMNet2
pip install "g16-mlips[orb,mace,aimnet2]"  # ORB + MACE + AIMNet2
pip install g16-mlips                # core only

Note: UMA and MACE conflict at the dependency level (e3nn). Install them in separate environments.

Local install:

git clone https://github.com/t-0hmura/g16-mlips.git
cd g16-mlips
pip install ".[uma]"

Model download notes:

  • UMA: Hosted on Hugging Face Hub. Run huggingface-cli login once.
  • ORB / MACE / AIMNet2: Downloaded automatically on first use.

Advanced Options

See OPTIONS.md for backend-specific tuning parameters.

Command aliases:

  • Short: uma, orb, mace, aimnet2
  • Prefixed: g16-mlips-uma, g16-mlips-orb, g16-mlips-mace, g16-mlips-aimnet2

Troubleshooting

  • external="uma" runs the wrong plugin — Use external="g16-mlips-uma" to avoid alias conflicts.
  • external="aimnet2" runs the wrong plugin — Use external="g16-mlips-aimnet2" to avoid alias conflicts.
  • uma command not found — Activate the conda environment where the package is installed.
  • UMA model download fails (401/403) — Run huggingface-cli login. Some models require access approval on Hugging Face.
  • Works interactively but fails in PBS jobs — Batch jobs may not inherit your interactive PATH; use the absolute path reported by which uma in the Gaussian input.
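For the PATH and alias-conflict cases above, resolving the plugin to an absolute path up front avoids both. A small sketch (the helper name is ours; it uses only the standard library):

```python
# Resolve a plugin alias to an absolute path and emit the route line to
# paste into the Gaussian input -- robust against batch-job PATH differences
# and alias conflicts.
import shutil

def external_route(alias: str = "uma") -> str:
    path = shutil.which(alias)
    if path is None:
        raise FileNotFoundError(
            f"{alias} not found on PATH; activate the environment "
            "where g16-mlips is installed")
    return f'#p external="{path}" opt(nomicro)'
```

Running this inside the activated environment prints a route line like #p external="/full/path/to/uma" opt(nomicro), which works unchanged under PBS.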

Citation

If you use this package, please cite:

@software{ohmura2026g16mlips,
  author       = {Ohmura, Takuto},
  title        = {g16-mlips},
  year         = {2026},
  month        = {2},
  version      = {1.0.0},
  url          = {https://github.com/t-0hmura/g16-mlips},
  license      = {MIT},
  doi          = {10.5281/zenodo.18691993}
}
