LIPS : Learning Industrial Physical Simulation benchmark suite


This repository implements the LIPS benchmarking platform.

Paper: https://openreview.net/pdf?id=ObD_o92z4p

This README is organized as follows: Introduction, What is LIPS, Associated results, Usage example, Installation, Codabench, Getting Started, Documentation, Contribution, FAQ, License information, and Citation.

Introduction

Physical simulations are at the core of many critical industrial systems. However, today's physical simulators suffer from limitations such as computation time, difficulties with missing or uncertain data, or even non-convergence for some feasible cases. Recently, the use of data-driven approaches to learn complex physical simulations has emerged as a promising way to address these issues. However, this often comes at the cost of some accuracy, which may hinder industrial use.

What is LIPS

To drive the above-mentioned research topic towards better real-world applicability, we propose a new benchmark suite, "Learning Industrial Physical Simulations" (LIPS), to meet the need for efficient, industrial-application-oriented augmented simulators. The proposed benchmark suite is a modular and configurable framework that can deal with different physical problems. To this end, as depicted in the scheme below, the LIPS platform is designed to be modular and includes the following modules:

  • Data: This module may be used to import the required datasets or to generate synthetic data using physical solvers (NB: depending on the use case, a physical solver may not yet be available);

  • Augmented Simulator: This module offers a list of already implemented data-driven models that can be used to augment or substitute the physical solvers. The datasets imported through the Data module may be used to train these models. Instructions are also provided in the Contribution section and in the related Jupyter notebooks (see Getting started for more details) for those who would like to implement their own augmented simulator and evaluate its performance using LIPS;

  • Benchmark configurator: This module takes a dataset related to a specific task and use case, an already trained augmented simulator (aka model), and a set of metrics, and calls the Evaluator module to assess the performance of the model;

  • Evaluator: This module is responsible for evaluating the performance of a selected benchmark. To define how such performance should be assessed, we propose four generic categories of criteria:

    • ML-related metrics: Among classical ML metrics, we focus on the trade-off between typical model accuracy metrics, such as Mean Absolute Error (MAE), and computation time (optimal ML inference time without batch-size consideration, as opposed to the application time considered later); a generic sketch of this trade-off measurement is given right after this list;

    • Physics compliance: Compliance with physical laws is decisive when simulation results are used to make consistent real-world decisions. Depending on the expected criticality level of the benchmark, this criterion determines the type and number of physical laws that should be satisfied;

    • Industrial readiness: When deploying a model in real-world applications, it should account for real data availability and scale up to large systems. We hence consider:

      1. Scalability: the computational complexity of a surrogate method should scale well with the problem size, e.g. the number of nodes in a power grid or the mesh refinement level in pneumatics;
      2. Application time: as we are looking for a model tailored to a specific application, we measure the computation time when the model is integrated in this application. To this end, we define a realistic application-dependent batch size, which may affect the speed-up.
    • Application-based out-of-distribution (OOD) generalization: For industrial physical simulation, there is always some expectation to extrapolate over minimal variations of the problem geometry, depending on the application. We hence consider OOD geometry evaluation, such as unseen power grid topologies or unseen pneumatic mesh variations.
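
To make the first category concrete, the accuracy/speed trade-off can be measured with a few lines of generic Python. The sketch below is standalone and uses synthetic arrays with a dummy predict function; it is an illustration of the idea, not the LIPS evaluator itself:

import time
import numpy as np

def predict(x):
    # Stand-in for the forward pass of a trained surrogate model (hypothetical mapping)
    return 2.0 * x

x = np.random.rand(1000, 10)
y_true = 2.0 * x + 0.01 * np.random.randn(1000, 10)  # synthetic reference solution

start = time.perf_counter()
y_pred = predict(x)
elapsed = time.perf_counter() - start  # speed side of the trade-off (inference time)

mae = np.mean(np.abs(y_true - y_pred))  # accuracy side of the trade-off
print(f"MAE = {mae:.4f}, inference time = {elapsed * 1e3:.3f} ms")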

Scheme

Associated results

To demonstrate this ability, we investigate in the paper three distinct use cases with different physical simulations, namely Power Grids, Pneumatics, and Airfoils. For each use case, several benchmarks (aka tasks or scenarios) may be described and assessed with existing models. In the figure below, we show an example of the results obtained for a specific task associated with each use case. To ease the reading of the numerical comparison table, the performances are reported using three colors computed on the basis of two thresholds. The meaning of the colors is described below:

  • $\color{green}Great$: Designates the best performance that can be obtained for a metric and an associated variable. The metric value should be lower than the first threshold;
  • $\color{orange}Acceptable$: Designates an acceptable performance that can be obtained for a metric and an associated variable. The metric value should lie between the first and the second thresholds;
  • $\color{red}Not\ acceptable$: Designates an unacceptable performance for a metric and an associated variable. The metric value should be higher than the second threshold.

The number of circles corresponds to the number of variables or laws that are evaluated.

As can be seen, none of the models performs well under all the expected criteria, which invites the community to develop new industry-applicable solutions and possibly showcase their performance publicly on the online LIPS instance hosted on Codabench.

Results

The final score is computed on the basis of the results obtained for each metric (more information concerning the scoring algorithm will be provided).

Usage example

Instantiate a benchmark

The paths should point correctly to the generated data (DATA_PATH) and to the benchmark-associated config file (CONFIG_PATH). The log path (LOG_PATH) can be set by the user.
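
For illustration only, these variables could be defined as follows (the values below are placeholders rather than files shipped with the package; adapt them to your local setup):

DATA_PATH = "reference_data/powergrid/benchmark1"  # directory containing the generated datasets
CONFIG_PATH = "configs/powergrid/benchmark1.ini"   # benchmark configuration file
LOG_PATH = "logs.log"                              # log file location, freely chosen by the user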

from lips.benchmark import PowerGridBenchmark

benchmark1 = PowerGridBenchmark(benchmark_name="Benchmark1",
                                benchmark_path=DATA_PATH,
                                load_data_set=True,
                                log_path=LOG_PATH,
                                config_path=CONFIG_PATH
                               )

Train a simulator

A simulator (based on TensorFlow) can easily be instantiated and, if required, trained as follows:

from lips.augmented_simulators.tensorflow_models import TfFullyConnected
from lips.dataset.scaler import StandardScaler

tf_fc = TfFullyConnected(name="tf_fc",
                         bench_config_name="Benchmark1",
                         scaler=StandardScaler,
                         log_path=LOG_PATH)

tf_fc.train(train_dataset=benchmark1.train_dataset,
            val_dataset=benchmark1.val_dataset,
            epochs=100
           )

For each architecture, a config file is attached; the config files for the power grid use case are available here.

Reproducibility and evaluation

The following script shows how to use the evaluation capacity of the platform to reproduce the results on all the datasets. A config file (see here for the power grid use case) is associated with this benchmark, and all the required evaluation criteria can be set in this configuration file.

tf_fc_metrics = benchmark1.evaluate_simulator(augmented_simulator=tf_fc,
                                              eval_batch_size=128,
                                              dataset="all",
                                              shuffle=False
                                             )
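
The returned object gathers the computed metrics for each evaluated dataset. A minimal inspection sketch, assuming the result is a nested dictionary keyed by dataset name:

# Print the metric categories computed for each evaluated dataset
for dataset_name, metrics in tf_fc_metrics.items():
    print(dataset_name, list(metrics.keys()))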

Installation

To be able to run the experiments in this repository, users should install the latest lips package from its GitHub repository. The following steps show how to install this package and its dependencies from source.

Requirements

  • Python >= 3.6

Set up a virtual environment (optional)

Create a Conda env (recommended)

conda create -n venv_lips python=3.10
conda activate venv_lips

Create a virtual environment

cd my-project-folder
pip3 install -U virtualenv
python3 -m virtualenv venv_lips

Enter virtual environment

source venv_lips/bin/activate

Install using Python Package Index (PyPI)

pip install "lips-benchmark[recommended]"

Install from source

git clone https://github.com/IRT-SystemX/LIPS.git
cd LIPS
pip3 install -U .[recommended]
cd ..
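
Whichever installation method you used, a quick smoke test confirms that the package is importable:

python -c "import lips"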

To contribute

pip3 install -e .[recommended]

Codabench

To see the leaderboard for the benchmarking tasks, refer to the Codabench page of the framework, accessible from this link.

Getting Started

Some Jupyter notebooks are provided as tutorials for the LIPS package. They are located in the getting_started directories.

Documentation

The documentation is accessible from here.

To generate locally the documentation:

pip install sphinx
pip install sphinx-rtd-theme
cd docs
make clean
make html

Contribution

  • Supplementary features can be requested using GitHub issues.
  • Other contributions are welcome and can be integrated using pull requests.

FAQ

Pytorch

To be able to use the torch library with a GPU, you should consider multiple factors:

  • If you have a compatible GPU, you can install the latest CUDA driver (11.6) and install torch using the following command:
pip install torch --pre --extra-index-url https://download.pytorch.org/whl/nightly/cu116

To take advantage of the GPU when training models, you should indicate it via the device parameter as follows:

from lips.augmented_simulators.torch_models.fully_connected import TorchFullyConnected
from lips.augmented_simulators.torch_simulator import TorchSimulator
from lips.dataset.scaler import StandardScaler

torch_sim = TorchSimulator(name="torch_fc",
                           model=TorchFullyConnected,
                           scaler=StandardScaler,
                           device="cuda:0",
                          )
  • Otherwise, if you want to use only the CPU for training augmented simulators, you can simply use the version installed following the requirements and set the device parameter to cpu when training, as follows:
torch_sim = TorchSimulator(name="torch_fc",
                           model=TorchFullyConnected,
                           scaler=StandardScaler,
                           device="cpu",
                          )
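
Training then proceeds as in the TensorFlow example above. A hedged sketch, assuming TorchSimulator exposes a train method analogous to the TensorFlow simulator's:

# Train the torch-based simulator on the benchmark datasets
torch_sim.train(train_dataset=benchmark1.train_dataset,
                val_dataset=benchmark1.val_dataset,
                epochs=100
               )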

Tensorflow

To be able to use TensorFlow with a GPU, you should install a CUDA version compatible with your TensorFlow package. From TensorFlow 2.4 onwards, CUDA version >= 11.0 is required. Once you have downloaded and installed the CUDA driver (we recommend version 11.5) from here, you should also get the corresponding cuDNN package from here and copy its contents into the corresponding folders of the CUDA installation directory. Finally, you should set some environment variables, which are discussed in this link for both Linux and Windows operating systems. On Windows, you can do the following on the command line:

SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\bin;%PATH%
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\extras\CUPTI\lib64;%PATH%
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\include;%PATH%
SET PATH=C:\tools\cuda\bin;%PATH%
SET LD_LIBRARY_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.5\lib\x64
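
On Linux, the equivalent (assuming the default installation prefix /usr/local/cuda-11.5) would be:

export PATH=/usr/local/cuda-11.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.5/lib64:$LD_LIBRARY_PATH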

However, if after setting these variables you still encounter *.dll not found errors when importing the TensorFlow library, you can indicate the path to the CUDA installation in your code before importing the TensorFlow package, as follows:

import os
os.add_dll_directory("C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.5/bin")

And then you can test your installation by running:

import tensorflow as tf
tf.config.list_physical_devices()

The GPU device should appear in the output as follows:

[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'),
 PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

License information

Copyright 2022-2023 IRT SystemX & RTE

IRT SystemX: https://www.irt-systemx.fr/
RTE: https://www.rte-france.com/

This Source Code is subject to the terms of the Mozilla Public License (MPL) v2, which is also available here.

Citation

@article{leyli2022lips,
    title={LIPS - Learning Industrial Physical Simulation benchmark suite},
    author={Leyli-Abadi, M. and Marot, A. and Picault, J. and Danan, D. and Yagoubi, M. and Donnot, B. and Attoui, S. and Dimitrov, P. and Farjallah, A. and Etienam, C.},
    journal={Advances in Neural Information Processing Systems},
    volume={35},
    pages={28095--28109},
    year={2022}
}
