A tool for wrapping ONNX networks in FMUs.
ONNX2FMU: Encapsulate ONNX models in Functional Mock-up Units (FMUs)
What does ONNX2FMU do? It wraps ONNX models into co-simulation FMUs.
🚀 Get started
- Python 3.10+
- CMake 3.22+
- A compiler for the host platform, which can be Linux, Windows, or macOS.
The default CMake generator on Windows is Visual Studio 2022.
To install ONNX2FMU, run
pip install onnx2fmu
in your shell. You do not need to install CMake separately, because it ships with the Python package, but you do need a C compiler (e.g., Visual Studio on Windows, gcc on Linux).
📝 ONNX model declaration
ONNX2FMU can handle models with multiple inputs, outputs, and local variables. These entries must be listed in the model description JSON file, and their names must match the name of a node in the ONNX model graph.
Model description file
A model description is declared in a JSON file and its schema includes the following global items:
"name"is the model name, which will also be the FMU archive name;"description"provides a generic description of the model;"FMIVersion"is the FMI standard version for generatign the FMU code and the FMU binaries, which can be either2.0or3.0;"inputs"and"outputs"are the lists of inputs and output nodes in the ONNX model;"locals"are mapping between an input and an output node. Their behavior is explained in A model with local variables.
Each entry of the inputs and outputs lists follows this schema:
"name"must match the name of one of the model nodes, whereas"labels"is the list of user-provided names for each of the node elements. The number of names in the"labels"list must match the number of elements of a given entry."description"allows the user to attach a description to each of the arrays.
The following is an example of a model description for a model with three input nodes and one output node.
{
  "name": "example1",
  "description": "The model defines a simple example model with a scalar input and two vector inputs, one with 'local' variability and one with 'continuous' variability.",
  "FMIVersion": "2.0",
  "inputs": [
    {
      "name": "scalar_input",
      "description": "A scalar input to the model."
    },
    {
      "name": "vector_input",
      "description": "A vector of input variables with variability discrete."
    },
    {
      "name": "vector_input_discrete",
      "description": "Inputs have variability discrete by default."
    }
  ],
  "outputs": [
    {
      "name": "output",
      "description": "The output array.",
      "labels": [
        "Class1",
        "Class2",
        ...
      ]
    }
  ]
}
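The schema rules above can also be checked programmatically before building an FMU. The following is a minimal sketch of such a check; the helper `validate_description` is hypothetical and not part of ONNX2FMU, and it only covers the global fields discussed above.

```python
import json

def validate_description(desc: dict) -> list[str]:
    """Return a list of problems found in a model description dict.

    Hypothetical helper, not part of ONNX2FMU: it checks only the
    global fields and the per-entry "name" requirement.
    """
    problems = []
    for field in ("name", "FMIVersion", "inputs", "outputs"):
        if field not in desc:
            problems.append(f"missing global field '{field}'")
    if desc.get("FMIVersion") not in ("2.0", "3.0"):
        problems.append("FMIVersion must be '2.0' or '3.0'")
    for section in ("inputs", "outputs"):
        for entry in desc.get(section, []):
            if "name" not in entry:
                problems.append(f"an entry in '{section}' has no 'name'")
    return problems

description = json.loads("""
{
  "name": "example1",
  "FMIVersion": "2.0",
  "inputs": [{"name": "scalar_input"}],
  "outputs": [{"name": "output", "labels": ["Class1", "Class2"]}]
}
""")
print(validate_description(description))  # → []
```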
Variability of model variables
Allowed variable types are input, output, and local.
Admissible variabilities are continuous and discrete;
the default is continuous if nothing is specified in the model
description.
Model declaration: A PyTorch example
ONNX2FMU works with any ONNX model, which can be generated by all the major ML/DL frameworks, e.g., PyTorch, TensorFlow, scikit-learn, etc. Here we use PyTorch to show how ONNX2FMU works.
The following model is used in tests/example1 to perform some basic vector
operations.
import torch
import torch.nn as nn

class ExampleModel(nn.Module):
    def __init__(self):
        super(ExampleModel, self).__init__()

    def forward(self, x1, x2, x3):
        # Input x1 is a scalar
        # Input x2 is a vector with causality 'local' and 4 elements
        # Input x3 is a vector with causality 'continuous' and 5 elements
        x4 = x2 + x3[:4]
        x5 = x2 - x3[:4]
        x6 = x1 * x3[-1]
        x7 = x1 / x3[-1]
        x = torch.cat([x4, x5, x6, x7])
        return x
All the basic array operations are allowed, which is useful to define not only deep learning models, but also generic, graph-based models.
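To make the expected behavior concrete, the same vector operations can be traced with plain Python lists, with no PyTorch required; the concrete numbers below are illustrative and not taken from the test suite.

```python
def forward(x1, x2, x3):
    """Plain-Python trace of ExampleModel.forward (illustrative only)."""
    x4 = [a + b for a, b in zip(x2, x3[:4])]  # element-wise sum
    x5 = [a - b for a, b in zip(x2, x3[:4])]  # element-wise difference
    x6 = [x1 * x3[-1]]                        # scalar product
    x7 = [x1 / x3[-1]]                        # scalar quotient
    return x4 + x5 + x6 + x7                  # concatenation, 10 elements

out = forward(2.0, [1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0, 5.0])
print(out)  # → [11.0, 22.0, 33.0, 44.0, -9.0, -18.0, -27.0, -36.0, 10.0, 0.4]
```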
A more complex example is explained in tests/example3, where an RNN model
is used to predict the temperature of a point on a metallic plate.
The model is declared as follows
import torch
import torch.nn as nn

class HeatRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers, norm_params):
        super(HeatRNN, self).__init__()
        x_min, x_max, y_min, y_max = norm_params
        self.register_buffer("x_min", torch.tensor(x_min))
        self.register_buffer("x_max", torch.tensor(x_max))
        self.register_buffer("y_min", torch.tensor(y_min))
        self.register_buffer("y_max", torch.tensor(y_max))
        self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x, h=None):
        x = (x - self.x_min) / (self.x_max - self.x_min)
        out, _ = self.rnn(x, h)
        out = self.fc(out)  # Predict next time step
        out = out * (self.y_max - self.y_min) + self.y_min
        return out
In this example, the normalization parameters, which do not have to be
optimized, are stored in the model using the register_buffer method, which
detaches them from the computational graph.
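The buffers implement standard min-max scaling: inputs are mapped to [0, 1] before the RNN, and predictions are mapped back to physical units afterwards. A stdlib sketch of the two transforms (the round trip is the identity):

```python
def normalize(x, x_min, x_max):
    """Map x from [x_min, x_max] to [0, 1], as in HeatRNN.forward."""
    return (x - x_min) / (x_max - x_min)

def denormalize(y, y_min, y_max):
    """Map y from [0, 1] back to [y_min, y_max]."""
    return y * (y_max - y_min) + y_min

# Round trip with an illustrative plate temperature in kelvin.
t = 350.0
scaled = normalize(t, 300.0, 400.0)
print(scaled)                             # → 0.5
print(denormalize(scaled, 300.0, 400.0))  # → 350.0
```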
A model with local variables
Recurrent neural network architectures may require feeding the model with its
own output in a feedback loop.
In ONNX2FMU, we realize this functionality through FMI local variables.
Mapping a model input to a model output is necessary, though.
In example4, we show how FMUs with local variables are
declared.
The model description file must present a section named locals like the
following:
{
  ...
  "locals": [
    {
      "nameIn": "X",
      "nameOut": "X1",
      "description": "The history of states from t-N to t."
    },
    {
      "nameIn": "U",
      "nameOut": "U1",
      "description": "The history of control variables from t-N to t-1."
    }
  ]
}
A local variable requires two names, i.e., nameIn and nameOut, which define
an input-output relationship: the nameIn input node is fed with the
output of the nameOut output node.
The user must take care to return the right output from the forward method.
In example4, the relationship between input U and output U1 is defined
as follows
import torch
import torch.nn as nn

class ExampleModel(nn.Module):
    def __init__(self):
        super(ExampleModel, self).__init__()

    def forward(self, u, U, X):
        U1 = torch.concat((U[1:, :], u))
        x = torch.stack([U1[-3, 0], U1[-2, 1], U1[-1, 2]]).unsqueeze(0)
        X1 = torch.concat((X[1:, :], x))
        return x, X1, U1
In the example above, the input U is updated with the content of u and
returned by the function under the name U1 (remember that node names cannot be
repeated in ONNX). During the next time step, the FMU takes care of passing
the output back to the model as the new, updated input.
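This sliding-window update can be mimicked with plain Python lists. The sketch below simulates how, at each co-simulation step, the previous step's output U1 would be fed back in as the local input U; it is a simplified illustration, not the actual FMU runtime.

```python
def step(u, U):
    """Shift the history window: drop the oldest row, append new input u."""
    return U[1:] + [u]

# History of the last 3 control inputs (rows), each with 2 elements.
U = [[0, 0], [1, 1], [2, 2]]
for t in range(3, 6):       # three co-simulation steps
    U1 = step([t, t], U)    # output node U1
    U = U1                  # feedback: U1 becomes the local input U
print(U)  # → [[3, 3], [4, 4], [5, 5]]
```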
🔨 ONNX model generation
ONNX2FMU provides two ways to build an FMU from an ONNX model.
CLI
ONNX2FMU is designed as a command-line application first. The build command
requires the ONNX model path and the model description path
onnx2fmu build <model.onnx> <modelDescription.json> [OPTIONS]
Built-in functions
FMUs of ONNX models can be built from a Python script by calling the build
function from app.py.
This is particularly useful when training a model and generating the FMU in
the same file.
Generation and compilation
ONNX2FMU can also separate the generation of the FMU source code
from its compilation.
This is achieved with the commands generate and compile, respectively.
To see the documentation of these commands, use
onnx2fmu [generate|compile] --help
Separating FMU code generation and compilation allows a user to customize the FMU source code.
Acknowledgements
The code in this repository is inspired by the Reference FMUs project.