
Export ML models represented as ONNX files to Functional Mock-up Units (FMUs)

Project description


mlfmu

MLFMU is a tool for developers who want to integrate machine learning models into simulation environments. It creates Functional Mock-up Units (FMUs), simulation models that adhere to the FMI standard (https://fmi-standard.org/), from trained machine learning models exported in the ONNX format (https://onnx.ai/). The mlfmu package streamlines the transformation of ONNX models into FMUs, facilitating their use in the wide range of simulation platforms that support the FMI standard, such as the Open Simulation Platform or DNV's Simulation Trust Center.

Features

  • Compile trained ML models into FMUs (Functional Mock-up Units).
  • Easy to integrate into build pipelines.
  • Declarative solution: just define what the inputs, outputs, and parameters of your co-simulation model should look like, and MLFMU takes care of the rest.
  • Support for FMU signal vectors in FMI 2.0.
  • Advanced customization: you can modify the generated C++ code of the FMU.

Installation

pip install mlfmu

Creating ML FMUs

Create your own ML model

Before using the mlfmu tool, create your machine learning (ML) model with whichever framework you prefer.

  1. Define the architecture of your ML model and prepare the model to receive its inputs following MLFMU's input format.

Note 1: This example subclasses a Keras model for demonstration purposes. However, the tool is flexible and can accommodate other frameworks such as PyTorch, TensorFlow, Scikit-learn, and more.

Note 2: We showcase a simple example here. For more detailed information on how you can prepare your model to be compatible with this tool, see MLMODEL.md

# Create your ML model
import tensorflow as tf

class MlModel(tf.keras.Model):
    def __init__(self, num_inputs=2):
        super().__init__()
        # 1 hidden layer, 1 output layer
        self.hidden_layer = tf.keras.layers.Dense(512, activation=tf.nn.relu)
        self.output_layer = tf.keras.layers.Dense(1, activation=None)

    ...

    def call(self, all_inputs): # model forward pass
        # unpack inputs
        inputs, *_ = all_inputs

        # Do something with the inputs
        # Here we have 1 hidden layer
        d1 = self.hidden_layer(inputs)
        outputs = self.output_layer(d1)

        return outputs
    ...
  2. Train your model, then save it as an ONNX file, e.g.:
import onnx
import tf2onnx

ml_model = MlModel()
# compile: configure model for training
ml_model.compile(optimizer=tf.optimizers.RMSprop(), loss='mse')
# fit: train your ML model for some number of epochs
ml_model.fit(training_dataset, epochs=nr_epochs)

# Save the trained model as ONNX at a specified path
# (tf2onnx.convert.from_keras returns the model proto plus storage metadata)
onnx_model, _ = tf2onnx.convert.from_keras(ml_model)
onnx.save(onnx_model, 'path/to/save')
  3. (Optional) You may want to check your ONNX file to make sure it produces the right output. You can do this by loading the ONNX file and, using the same test input, comparing the ONNX model's predictions to your original model's predictions. You can also inspect the model using Netron: https://netron.app/ or https://github.com/lutzroeder/netron
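The comparison described above can be sketched as follows. This is a minimal sketch assuming the onnxruntime package is installed; `check_onnx_model` and `max_abs_diff` are illustrative helper names, not part of mlfmu:

```python
import numpy as np

def max_abs_diff(a, b):
    """Largest element-wise absolute difference between two arrays."""
    return float(np.max(np.abs(np.asarray(a) - np.asarray(b))))

def check_onnx_model(onnx_path, original_model, test_input, tol=1e-5):
    """Run the same test input through the saved ONNX file and the original
    model, and verify that the two predictions agree within a tolerance."""
    import onnxruntime as ort  # pip install onnxruntime

    session = ort.InferenceSession(onnx_path)
    input_name = session.get_inputs()[0].name
    (onnx_pred,) = session.run(None, {input_name: test_input})
    original_pred = np.asarray(original_model(test_input))
    diff = max_abs_diff(onnx_pred, original_pred)
    assert diff < tol, f"ONNX and original predictions diverge (max diff {diff})"
```

Small numerical differences are expected from the conversion, so compare against a tolerance rather than for exact equality.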

Preparing for and using MLFMU

Given that you have an ML model, you now need to:

  1. Prepare the FMU interface specification (.json) to specify your FMU's inputs, parameters, and outputs, to map these to the ML model's inputs and outputs (agentInputIndexes, agentOutputIndexes), and to specify whether the model uses time (usesTime).
// Interface.json
{
    "name": "MyMLFMU",
    "description": "A Machine Learning based FMU",
    "usesTime": true,
    "inputs": [
        {
            "name": "input_1",
            "description": "My input signal to the model at position 0",
            "agentInputIndexes": ["0"]
        },
        {
            "name": "input_2",
            "description": "My input signal as a vector with four elements at position 1 to 5",
            "agentInputIndexes": ["1:5"],
            "type": "real",
            "isArray": true,
            "length": 4
        }
    ],
    "parameters": [
        {
            "name": "parameter_1",
            "description": "My input signal to the model at position 1",
            "agentInputIndexes": ["1"]
        }
    ],
    "outputs": [
        {
            "name": "prediction",
            "description": "The prediction generated by the ML model",
            "agentOutputIndexes": ["0"]
        }
    ]
}

More information about the interface.json schema can be found in mlfmu's docs/interface/schema.html.
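As a quick sanity check before building, you can verify that the declared array lengths in interface.json are consistent with the agentInputIndexes ranges. The sketch below assumes "a:b" spans are end-exclusive, matching the example above where "1:5" maps four elements; `index_count` and `check_interface` are illustrative helpers, not part of mlfmu:

```python
import json

def index_count(ranges):
    """Number of model positions covered by a list of agentInputIndexes.
    Assumes "a:b" spans are end-exclusive, e.g. "1:5" covers 1, 2, 3, 4."""
    total = 0
    for r in ranges:
        if ":" in r:
            start, end = map(int, r.split(":"))
            total += end - start
        else:
            total += 1
    return total

def check_interface(path):
    """Report array variables whose declared length does not match the
    number of model indexes they are mapped to."""
    with open(path) as f:
        spec = json.load(f)
    for var in spec.get("inputs", []) + spec.get("parameters", []):
        if var.get("isArray"):
            mapped = index_count(var["agentInputIndexes"])
            if mapped != var.get("length"):
                print(f"{var['name']}: length={var.get('length')} but {mapped} indexes mapped")
```

For example, `check_interface("interface.json")` run against the specification above would print nothing, since input_2 declares length 4 and "1:5" covers four positions.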

  2. Compile the FMU:
mlfmu build --interface-file interface.json --model-file model.onnx

or if the files are in your current working directory:

mlfmu build
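If you prefer to trigger the build from a Python-based pipeline, one option is to call the CLI via subprocess. This is a sketch assuming mlfmu is installed in the active environment; `build_command` and `build_fmu` are illustrative helpers, not part of mlfmu's API:

```python
import subprocess

def build_command(interface_file="interface.json", model_file="model.onnx", fmu_path=None):
    """Assemble the mlfmu CLI invocation."""
    cmd = ["mlfmu", "build", "--interface-file", interface_file, "--model-file", model_file]
    if fmu_path is not None:
        cmd += ["--fmu-path", fmu_path]
    return cmd

def build_fmu(**kwargs):
    """Run the build and fail the pipeline on a non-zero exit code."""
    subprocess.run(build_command(**kwargs), check=True)
```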

Extended documentation

For more explanation on the ONNX file structure and inputs/outputs for your model, please refer to mlfmu's MLMODEL.md.

For advanced usage options, e.g. editing the generated FMU source code, or using the tool via a Python class, please refer to mlfmu's ADVANCED.md.

Development Setup

1. Install uv

This project uses uv as its package manager. If you haven't already, install uv, preferably using its "Standalone installer" method:
..on Windows:

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

..on MacOS and Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

(see docs.astral.sh/uv for all / alternative installation methods.)

Once installed, you can update uv to its latest version, anytime, by running:

uv self update

2. Install Visual Studio Build Tools

We use conan for building the FMU. For the conan build to work later on, you need Visual Studio Build Tools 2022 installed. It is best to do this before installing conan (which is installed as part of the package dependencies, see step 4). You can download and install the Build Tools for VS 2022 (for free) from https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022.

3. Clone the repository

Clone the mlfmu repository into your local development directory:

git clone https://github.com/dnv-opensource/mlfmu path/to/your/dev/mlfmu
cd path/to/your/dev/mlfmu
git submodule update --init --recursive

4. Install dependencies

Run uv sync to create a virtual environment and install all project dependencies into it:

uv sync

Use the command line option -p to specify the Python version to resolve the dependencies against. For instance, use -p 3.12 to specify Python 3.12.

uv sync -p 3.12

Note: In case the specified Python version is not found on your machine, uv sync will automatically download and install it.

Optionally, use -U in addition to allow package upgrades. Especially in cases when you change to a newer Python version, adding -U can be useful.
It allows the dependency resolver to upgrade dependencies to newer versions, which might be necessary to support the (newer) Python version you specified.

uv sync -p 3.12 -U

Note: At this point, you should have conan installed. You will want to make sure it has the correct build profile. You can auto-detect and create the profile by running conan profile detect. After this, you can check the profile in C:\Users\<USRNAM>\.conan2\profiles\.default (replace <USRNAM> with your username). You want to have: compiler=msvc, compiler.cppstd=17, compiler.version=193 (for Windows).

5. (Optional) Activate the virtual environment

When using uv, most of the time there is no longer a need to manually activate the virtual environment.
Whenever you run a command via uv run inside your project folder structure, uv will find the .venv virtual environment in the working directory or any parent directory, and activate it on the fly:

uv run <command>

However, you can still manually activate the virtual environment if needed. While we did not face any issues using VS Code as our IDE, you might use an IDE that needs the .venv to be activated manually in order to work properly.
If this is the case, you can activate the virtual environment at any time using one of the "known" legacy commands:
..on Windows:

.venv\Scripts\activate.bat

..on Linux:

source .venv/bin/activate

6. Install pre-commit hooks

The .pre-commit-config.yaml file in the project root directory contains a configuration for pre-commit hooks. To install the pre-commit hooks defined therein in your local git repository, run:

uv run pre-commit install

All pre-commit hooks configured in .pre-commit-config.yaml will now run each time you commit changes.

7. Test that the installation works

To test that the installation works, run pytest in the project root folder:

uv run pytest

8. Run an example

cd .\examples\wind_generator\config\
uv run mlfmu build

As an alternative, you can run from the main directory:

uv run mlfmu build --interface-file .\examples\wind_generator\config\interface.json --model-file .\examples\wind_generator\config\example.onnx

Note: wherever you run the build command from is where the FMU file will be created, unless you specify otherwise with --fmu-path.

For more options, see uv run mlfmu --help or uv run mlfmu build --help.

9. Use your new ML FMU

The created FMU can be used for running (co-)simulations. We have tested the FMUs that we have created in the Simulation Trust Center, which uses the Open Simulation Platform software.
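Outside a full co-simulation platform, a rough smoke test is to load the generated FMU with a generic FMI tool such as fmpy (pip install fmpy). This is a minimal sketch; `run_fmu` and `last_sample` are illustrative helpers, and the variable name "prediction" follows the interface example above:

```python
def run_fmu(fmu_path, stop_time=10.0, start_values=None):
    """Simulate the generated FMU with fmpy (pip install fmpy).
    Returns a structured array with a 'time' column plus one column
    per FMU output variable."""
    from fmpy import simulate_fmu
    return simulate_fmu(fmu_path, stop_time=stop_time, start_values=start_values or {})

def last_sample(result, name):
    """Final value of a named column in an fmpy simulation result."""
    return result[name][-1]
```

For example, `last_sample(run_fmu("MyMLFMU.fmu"), "prediction")` would give the model's output at the end of the run.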

10. Compiling the documentation

This repository uses Sphinx, with .rst and .md files as well as Python docstrings, to document the code and its usage. To build the docs locally:

cd docs
make html

You can then open index.html for access to all docs (for Windows: start build\html\index.html).

Meta

All code in mlfmu is DNV intellectual property.

Copyright (c) 2024 DNV AS. All rights reserved.

Primary contributors:

Kristoffer Skare - @LinkedIn - kristoffer.skare@dnv.com

Jorge Luis Mendez - @LinkedIn - jorge.luis.mendez@dnv.com

Additional contributors (testing, docs, examples, etc.):

Melih Akdağ - @LinkedIn - melih.akdag@dnv.com

Stephanie Kemna - @LinkedIn

Hee Jong Park - @LinkedIn - hee.jong.park@dnv.com

Contributing

  1. Fork it (https://github.com/dnv-opensource/mlfmu/fork) (Note: this is currently disabled for this repo. For development, continue with the next step.)
  2. Create an issue in your GitHub repo
  3. Create your branch based on the issue number and type (git checkout -b issue-name)
  4. Evaluate and stage the changes you want to commit (git add -i)
  5. Commit your changes (git commit -am 'place a descriptive commit message here')
  6. Push to the branch (git push origin issue-name)
  7. Create a new Pull Request in GitHub

For your contribution, please make sure you follow the STYLEGUIDE before creating the Pull Request.

Errors & fixes

  • If you get an error similar to ..\fmu.cpp(4,10): error C1083: Cannot open include file: 'cppfmu_cs.hpp': No such file or directory, you are missing cppfmu. This is a submodule to this repository. Make sure that you do a git submodule update --init --recursive in the top level folder.

License & dependencies

This code is distributed under the BSD 3-Clause license. See LICENSE for more information.

It makes use of cppfmu, which is distributed under the MPL license at https://github.com/viproma/cppfmu.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mlfmu-1.0.1.tar.gz (193.4 kB)

Uploaded Source

Built Distribution

mlfmu-1.0.1-py3-none-any.whl (92.2 kB)

Uploaded Python 3

