
Neural network surrogate of TGLF (weights only)

Project description

tglfnn-ukaea

Neural network surrogate models of the TGLF quasilinear plasma turbulent transport simulator in various parameter spaces.

Paper for acknowledgment

If you use these models in your work, we request that you cite the following paper:

Usage

There are several ways to use the models:

  1. Loading from PyTorch checkpoint
  2. Loading traced ONNX model
  3. Loading traced TorchScript model
  4. Loading the parameters directly into pure Python

Loading the traced TorchScript model allows the model to be used in Fortran (see below). Loading the parameters directly is a minimal-dependency method designed for use with other machine learning frameworks.

1. Loading from PyTorch checkpoint

import torch

# Load the model (the checkpoint contains a full model object, so on
# PyTorch >= 2.6 pass weights_only=False; the default is weights_only=True)
efe_gb_model = torch.load('MultiMachineHyper_1Aug25/regressor_efe_gb.pt', weights_only=False)
efe_gb_model.eval()  # inference mode

# Call the model
input_tensor = torch.tensor([[...]], dtype=torch.float32)  # Replace with appropriate input
output_tensor = efe_gb_model(input_tensor)
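For batched inference the usual PyTorch idioms apply. The snippet below is a minimal, self-contained sketch using a stand-in two-layer network in place of the loaded model (the real input and output dimensions are those of the published networks, not the dummy sizes used here):

```python
import torch

# Stand-in module so the snippet runs on its own; in practice this is
# the model returned by torch.load above.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
model.eval()

with torch.no_grad():          # gradients are not needed for inference
    batch = torch.randn(5, 4)  # batch of 5 dummy input vectors
    out = model(batch)

print(out.shape)  # torch.Size([5, 2])
```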

2. Loading traced ONNX model

import numpy as np
import onnxruntime as ort

# Load the model
ort_session = ort.InferenceSession('MultiMachineHyper_1Aug25/regressor_efe_gb.onnx')

# Call the model (query the session for the input name rather than hard-coding it)
input_name = ort_session.get_inputs()[0].name
input_tensor = np.array([[...]], dtype=np.float32)  # Replace with appropriate input
outputs = ort_session.run(None, {input_name: input_tensor})

3. Loading traced TorchScript model

import torch

# Load the model
torchscript_model = torch.jit.load('MultiMachineHyper_1Aug25/regressor_efe_gb_torchscript.pt')
torchscript_model.eval()  # inference mode

# Call the model
input_tensor = torch.tensor([[...]], dtype=torch.float32)  # Replace with appropriate input
output_tensor = torchscript_model(input_tensor)
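For reference, a TorchScript file of this kind is produced by tracing a PyTorch module with an example input. The sketch below uses a stand-in two-layer network and a hypothetical output filename purely for illustration; the published files were traced from the trained networks themselves:

```python
import torch

# Stand-in network; the real models' architectures are those of the
# trained surrogate networks.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
model.eval()

example_input = torch.randn(1, 4)  # dummy input fixing the traced dtypes
traced = torch.jit.trace(model, example_input)
traced.save('regressor_example_torchscript.pt')  # loadable via torch.jit.load (or FTorch)

# Reload and check the traced model reproduces the original
reloaded = torch.jit.load('regressor_example_torchscript.pt')
assert torch.allclose(reloaded(example_input), model(example_input))
```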

Using the traced TorchScript models in Fortran

The traced PyTorch models can be used in Fortran with FTorch, which provides Fortran bindings for LibTorch (the C++ backend of PyTorch). Please cite the FTorch publication if using these models from Fortran.

Further details on the FTorch implementation of these networks can be found in a related project.

Prerequisites

  • LibTorch: Download the appropriate version (CPU or GPU) from the PyTorch website and ensure it is accessible in your environment. The CPU builds of both the standalone LibTorch distribution and the pip package have been tested; the standalone LibTorch distribution requires no Python to install or run. It is suggested to read the FTorch instructions below first.
  • FTorch: Install the FTorch library following the instructions in the FTorch repository. This also provides a compiler-specific module (ftorch.mod).
  • Fortran Compiler: Use a modern Fortran compiler (e.g., gfortran or ifort) compatible with FTorch.
  • CMake: Version >= 3.1 is required to build FTorch. CMake is not essential for building your final Fortran code, but it is helpful.

4. Loading the parameters directly into pure Python

The parameters are distributed via the tglfnn_ukaea Python package.

Usage:

$ pip install tglfnn_ukaea
$ python
>>> import tglfnn_ukaea
>>> tglfnn_ukaea.loader.load("multimachine")
{
    "stats":  {...}, # Used for normalisation
    "config": {...}, # Network architecture
    "input_labels": (...), # Input feature names (ordered)
    "params": { # Weights and biases
        "efe_gb": {...},
        "efi_gb": {...},
        "pfi_gb": {...},
    },
}

The returned dictionary contains all the information needed to implement the neural network in any framework. See google-deepmind/fusion_surrogates for an example.
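As an illustration, a forward pass can be reconstructed from such a dictionary with nothing but NumPy. The sketch below assumes a plain fully connected network with tanh hidden activations, a linear output layer, and z-score input normalisation; the actual layer sizes, activation, and normalisation convention should be taken from the returned "config" and "stats", and the dummy dictionary layout here is illustrative, not the package's exact schema:

```python
import numpy as np

def mlp_forward(x, stats, weights, biases):
    """Normalise inputs, then apply dense layers; tanh on hidden layers only."""
    h = (x - stats["mean"]) / stats["std"]  # z-score input normalisation
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:  # output layer stays linear
            h = np.tanh(h)
    return h

# Dummy 3-15-1 network standing in for one entry of params (e.g. "efe_gb")
rng = np.random.default_rng(0)
stats = {"mean": np.zeros(3), "std": np.ones(3)}
weights = [rng.normal(size=(3, 15)), rng.normal(size=(15, 1))]
biases = [np.zeros(15), np.zeros(1)]

y = mlp_forward(np.ones((4, 3)), stats, weights, biases)
print(y.shape)  # (4, 1)
```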

Download files

Download the file for your platform.

Source Distribution

tglfnn_ukaea-0.1.0.tar.gz (59.5 MB)


Built Distribution


tglfnn_ukaea-0.1.0-py3-none-any.whl (59.5 MB)


File details

Details for the file tglfnn_ukaea-0.1.0.tar.gz.

File metadata

  • Download URL: tglfnn_ukaea-0.1.0.tar.gz
  • Upload date:
  • Size: 59.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for tglfnn_ukaea-0.1.0.tar.gz:

  • SHA256: 1253b135301012af18a8acf30fcd73ffdafd4a63efcf26b9cda1a53a0d066a52
  • MD5: be460f4ef209baac0abc6cc3d373f795
  • BLAKE2b-256: fdcacdc6e4fc4e05f9f8987c6aa7a6b313db8ed434dd83507bf2d54feaac1b14


File details

Details for the file tglfnn_ukaea-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: tglfnn_ukaea-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 59.5 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for tglfnn_ukaea-0.1.0-py3-none-any.whl:

  • SHA256: a11faf0180a5a1c4881e14cb7fa751052a66b9132b08fb7789e2448cef4e8606
  • MD5: c944e41e866c4ec1fcd73dac8d5694e9
  • BLAKE2b-256: f7c677e438181eeeeb409c81de7b6feeefb3da434bd79724a7bb4544a2620534

