
Automated and Parallel Library for Neural Operators Research


HyperNOs Documentation

Introduction

HyperNOs is a Python project focused on fully automatic, distributed, and parallel hyperparameter optimization for neural operators. The project provides a framework for training neural operator models using PyTorch, with Ray Tune for hyperparameter tuning. The library is designed to be highly flexible and easy to use with any kind of model and dataset. In the context of neural operators, where architecture design is still an active area of research, extensive hyperparameter optimization is crucial to obtain state-of-the-art results.

For a more detailed explanation of the library and its capabilities, please refer to our article: HyperNOs: Automated and Parallel Library for Neural Operators Research.

Supported Libraries

HyperNOs allows users to easily integrate and use models from popular neural-operator libraries or custom models. It is also very flexible and can be used with many different datasets.

I have already implemented usage examples for the following popular libraries:

  • NeuralOperator: Implements neural operator architectures such as FNO, SFNO, TFNO, UNO, UQNO, GINO, FNOGNO, LocalNO, RNO, CODANO, OTNO.
  • DeepXDE: Implements operator learning models such as DeepONet, MIONet, POD-DeepONet, POD-MIONet.

You can find examples of how to use these models in the hypernos/examples directory, which has two dedicated subdirectories: deepxde_lib and neuralop_lib. Examples are provided both for training a given architecture and for running hyperparameter optimization routines.

Visualization website

The project also includes a visualization website (https://hypernos.streamlit.app) that lets users explore the results obtained with the HyperNOs library.

Installation

To set up the HyperNOs project, follow these steps:

  1. Clone the repository:

    git clone --depth=1 https://github.com/MaxGhi8/HyperNOs.git
    cd HyperNOs
    
  2. Install the required dependencies. It is recommended to create a virtual environment first; I personally use pyenv (but others, like uv, are fine) with Python 3.12.7 for this purpose:

    pyenv install 3.12.7
    pyenv virtualenv 3.12.7 hypernos
    pyenv activate hypernos
    

    Then install the package in editable mode:

    pip install -e .

    Installing the package makes it available as `hypernos` across your system and sets up the `hypernos-run` command.
    
    > [!WARNING]
    > For PyTorch, more attention may be needed during installation. We highly recommend following the [official documentation](https://pytorch.org/get-started/locally/) to install the correct version for your system (e.g., matching your CUDA version).
    
    
  3. Download the dataset using the download_data.sh script:

    ./download_data.sh
    

    > [!WARNING]
    > On Windows, I recommend installing WSL. Then open a WSL terminal and navigate to where you cloned the HyperNOs library:

    cd /mnt/c/Users/<your_user>/<your_path_to_HyperNOs>
    

    and then run ./download_data.sh. If you get an error like /bin/bash^M: bad interpreter: No such file or directory, it is likely caused by Windows-style CRLF line endings. In that case, run the following command and then rerun the script:

    sed -i -e 's/\r$//' download_data.sh
    ./download_data.sh
    
  4. Download pre-trained models (optional):

     git clone --depth=1 https://github.com/MaxGhi8/tests
    

    The previous repository contains the TensorBoard support for every model, along with information about the training and the chosen architecture hyperparameters. You can then download a model by running the following script and selecting the one you want:

    ./download_trained_model.sh
    

    > [!WARNING]
    > As before, on Windows/WSL, if you get the error /bin/bash^M: bad interpreter: No such file or directory, run sed -i -e 's/\r$//' download_trained_model.sh and then rerun the script with ./download_trained_model.sh.

Usage

After installation, you can run the provided examples in the neural_operators/examples directory.

Interactive Tutorials

We provide interactive Jupyter Notebooks in the notebook/ directory to help you get started.

Basic Training

To train a model (e.g., FNO) on a single machine, simply run the corresponding python script:

cd neural_operators/examples/
python train_fno.py

Python API

You can now import HyperNOs directly in your own scripts:

import torch
from hypernos.examples.train_fno import train_fno

# Run a training session
train_fno("poisson", "best", "L2")

Hyperparameter Optimization with Ray Tune

You can use Ray Tune to optimize hyperparameters.

Local Machine

To run Ray Tune on your local machine, first start a Ray head node:

ray start --head

Then run the Ray script:

cd neural_operators/examples/
python ray_fno.py
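To illustrate the idea behind the Ray Tune workflow above, here is a minimal, stdlib-only random search sketch. This is not the HyperNOs or Ray Tune API; the search space and objective are hypothetical stand-ins for a real training run.

```python
import random

# Hypothetical search space: each key maps to the values a trial may sample.
search_space = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "width": [32, 64, 128],
    "n_layers": [2, 4, 6],
}

def objective(config):
    """Stand-in for a training run: returns a fake validation loss.
    In a real setup this would train a neural operator and return its error."""
    return config["learning_rate"] * 10 + 1.0 / config["width"] + 0.01 * config["n_layers"]

def random_search(space, objective, num_samples=20, seed=0):
    """Sample `num_samples` configurations and keep the one with the lowest loss."""
    rng = random.Random(seed)
    best_config, best_loss = None, float("inf")
    for _ in range(num_samples):
        config = {key: rng.choice(values) for key, values in space.items()}
        loss = objective(config)
        if loss < best_loss:
            best_config, best_loss = config, loss
    return best_config, best_loss

best_config, best_loss = random_search(search_space, objective)
print(best_config, best_loss)
```

Ray Tune generalizes this loop: it runs trials in parallel across CPUs/GPUs, supports smarter search algorithms and early-stopping schedulers, and logs every trial, which is what HyperNOs builds its optimization routines on.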

Cluster (Slurm)

For running on a cluster using Slurm, refer to SLURM_USAGE.md for instructions on using template.slurm.
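The actual template.slurm is not reproduced here, but a generic Slurm batch script for this kind of job typically combines #SBATCH directives with the commands from the local workflow above. All resource values below are illustrative placeholders to adapt to your cluster, not the library's shipped template:

```shell
#!/bin/bash
#SBATCH --job-name=hypernos-tune   # illustrative placeholders: adjust to your cluster
#SBATCH --nodes=1
#SBATCH --gpus-per-node=1
#SBATCH --cpus-per-task=8
#SBATCH --time=04:00:00

# Start a Ray head node on the allocated machine, run the tuning script,
# then shut Ray down, mirroring the local-machine steps above.
ray start --head
cd neural_operators/examples/
python ray_fno.py
ray stop
```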

Citation

If you use our library, please consider citing our paper:

@misc{ghiotto2025hypernosautomatedparallellibrary,
      title={HyperNOs: Automated and Parallel Library for Neural Operators Research},
      author={Massimiliano Ghiotto},
      year={2025},
      eprint={2503.18087},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2503.18087},
}

Download files


Source Distribution

hypernos-0.1.0.tar.gz (38.2 kB)


Built Distribution


hypernos-0.1.0-py3-none-any.whl (37.5 kB)


File details

Details for the file hypernos-0.1.0.tar.gz.

File metadata

  • Download URL: hypernos-0.1.0.tar.gz
  • Upload date:
  • Size: 38.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hypernos-0.1.0.tar.gz:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 9de9651388c3025537d2bf0f32936fc5cbf834fcec19e1fd14f08a8e4aee6f4f |
| MD5 | a757985f750acf9dd1402e95273dee8d |
| BLAKE2b-256 | 75d1cf36571e994e65b231ae68eff6b687afb8e8092bb588c36d433636f28c91 |

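The published digests can be checked locally, for example with Python's standard hashlib. The helper below is a generic sketch; the filename and digest in the commented example are the ones from this release's table above.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_hex):
    """Return True if the file's SHA256 digest matches the expected value."""
    return sha256_of(path) == expected_hex.lower()

# Example, using the digest published for this release:
# verify("hypernos-0.1.0.tar.gz",
#        "9de9651388c3025537d2bf0f32936fc5cbf834fcec19e1fd14f08a8e4aee6f4f")
```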

Provenance

The following attestation bundles were made for hypernos-0.1.0.tar.gz:

Publisher: publish.yml on MaxGhi8/HyperNOs

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file hypernos-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: hypernos-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 37.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for hypernos-0.1.0-py3-none-any.whl:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | d93b89163b5373a2346eea8446ad12fafa14411c001eaf27341120a513d2a4d6 |
| MD5 | 6aeb1feec59c7a9bd078762051316ad7 |
| BLAKE2b-256 | 13cdcb2e4f78f85f67cf19581292cb7290a9d28c263cee2599a76f25df1b296d |


Provenance

The following attestation bundles were made for hypernos-0.1.0-py3-none-any.whl:

Publisher: publish.yml on MaxGhi8/HyperNOs

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
