Model-structured neural network framework for the modeling and control of physical systems
Model-structured neural networks (MSNNs) are a new neural network concept: networks whose structure is derived from the laws of mechanics and control theory.
The framework's goal is to let users quickly model and control a mechanical system such as an autonomous vehicle, an industrial robot, a walking robot, or a flying drone.
Below is the workflow that the framework follows.
Starting from a conceptual representation of your mechanical system, the framework generates a structured neural network model of the device under consideration. Given suitable experimental data, the framework trains the neural models effectively, choosing all the hyper-parameters appropriately. It then lets the user synthesize and train a structured neural network to be used as a control system, in a few simple steps and without performing new experiments. The resulting neural controller is exported as C code or ONNX, ready to use.
Getting Started
Installation
You can install the nnodely framework from PyPI via:
pip install nnodely
Prerequisites
You can install the dependencies of the nnodely framework from PyPI via:
pip install -r requirements.txt
Basic Functionalities
Build the structured neural model
The structured neural model is defined by a list of inputs, a list of outputs, and a list of relations that link the inputs to the outputs.
Let's assume we want to model one of the best-known linear mechanical systems, the mass-spring-damper system.
The system is defined as the following equation:
M \ddot x = - k x - c \dot x + F
Suppose we want to estimate the value of the future position of the mass given the initial position and the external force.
In the nnodely framework we can build an estimator in this form:
x = Input('x')
F = Input('F')
x_z_est = Output('x_z_est', Fir(x.tw(1))+Fir(F.last()))
First we define the input variables of the system.
Input variables are created using the Input function.
Our system has two inputs: the position of the mass, x, and the external force, F, exerted on the mass.
The Output function is used to define an output of our model.
The Output takes two arguments: the first is the name of the output and the second is the structure of the estimator.
Let's explain some of the functions used:
- The `tw(...)` function extracts a time window from a signal; here we extract a time window of 1 second.
- The `last()` function gets the last force applied to the mass.
- The `Fir(...)` function builds a FIR filter with tunable parameters on our input variable.
So we are creating an estimator for the variable x at the instant following the observation (the future position of the mass), by building an observer with the mathematical structure shown below:
x[1] = \sum_{k=0}^{N_x-1} x[-k]\cdot h_x[(N_x-1)-k] + F[0]\cdot h_F
Here the scalar $N_x$, the scalar $h_F$, and the values of the vector $h_x$ are still unknown. Regarding $N_x$, we know that the window lasts one second, but we do not know how many samples that corresponds to; this depends on the discretization interval. The formulation above is equivalent to the discrete-time response of the system if we choose $N_x = 3$, $h_x$ equal to the characteristic polynomial, and $h_F = T^2/M$ (with $T$ the sample time). Our formulation is more general and can account for noise in the measured variable by using a bigger time window. The estimator can also be seen as the composition of the force contributions due to the position and velocity of the mass, plus the contribution of the external forces.
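For concreteness, here is one way the discrete-time form can arise (a sketch, assuming an explicit discretization with a backward difference for the velocity; other schemes give different coefficients):

```latex
% Discretize M\ddot x = -kx - c\dot x + F with sample time T:
%   \ddot x[k] \approx \frac{x[k+1] - 2x[k] + x[k-1]}{T^2}, \quad
%   \dot x[k] \approx \frac{x[k] - x[k-1]}{T}
M\,\frac{x[k+1] - 2x[k] + x[k-1]}{T^2} = -k\,x[k] - c\,\frac{x[k] - x[k-1]}{T} + F[k]
\quad\Rightarrow\quad
x[k+1] = \Bigl(2 - \tfrac{kT^2}{M} - \tfrac{cT}{M}\Bigr)x[k]
       + \Bigl(\tfrac{cT}{M} - 1\Bigr)x[k-1]
       + \tfrac{T^2}{M}\,F[k]
```

Under this scheme the next position is a fixed linear combination of past positions plus $\tfrac{T^2}{M}F[k]$, which matches the FIR structure of the estimator, with $h_F = T^2/M$.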
Neuralize the structured neural model
Let's now try to train our observer using the data we have. We perform:
mass_spring_damper = Modely()
mass_spring_damper.addModel('x_z_est', x_z_est)
mass_spring_damper.addMinimize('next-pos', x.z(-1), x_z_est, 'mse')
mass_spring_damper.neuralizeModel(0.2)
Let's create a nnodely object and add one output to the network using the addModel function.
This function creates an output of the model. In this example it is not mandatory, because the same output is also referenced by the addMinimize function.
In order to train our model/estimator, the addMinimize function is used to add a loss function to the list of losses.
This function takes:
- The name of the error; this name is shown in the results and during training.
- The second and third inputs are the variables to be minimized; their order is not important.
- The minimization function used, in this case 'mse'.
The addMinimize call uses the z(-1) function. This function gets the future value of a variable from the dataset (in our case, the position of the mass at the next instant); following the Z-transform notation, z(-1) is equivalent to the next() function. The z(...) method can be used on an Input variable to get a time-shifted value.
The objective of the minimization is to reduce the error between x.z(-1), which represents one sample of the next position of the mass taken from the dataset, and x_z_est, which is one sample of the output of our estimator.
The mathematical formulation is as follows:
\frac{1}{n} \sum_{i=1}^{n} (x_{z_i} - x_{{z\_est}_i})^2
where $n$ represents the number of samples in the dataset.
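The loss above can be sketched in plain NumPy-free Python (a minimal illustration of the 'mse' metric, not nnodely's internal implementation):

```python
def mse(x_z, x_z_est):
    """Mean squared error between dataset targets and estimator outputs."""
    n = len(x_z)
    return sum((a - b) ** 2 for a, b in zip(x_z, x_z_est)) / n

# Example: two targets vs. two estimates
print(mse([1.0, 3.0], [2.0, 5.0]))  # -> 2.5, i.e. ((1)^2 + (2)^2) / 2
```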
Finally, the neuralizeModel function is used to perform the discretization. Its parameter is the sampling time, which should be chosen based on the data available.
Load the dataset
data_struct = ['time','x','dx','F']
data_folder = './tutorials/datasets/mass-spring-damper/data/'
mass_spring_damper.loadData(name='mass_spring_dataset', source=data_folder, format=data_struct, delimiter=';')
Finally, the dataset is loaded. nnodely loads all the files found in the source folder.
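If no experimental data is at hand, a synthetic dataset with the expected `time;x;dx;F` column layout can be generated by simulating the system (a sketch; the file name, physical parameters, and forcing signal are arbitrary choices, not part of nnodely):

```python
import csv
import math

# Arbitrary physical parameters and sample time (illustrative only)
M, k, c, T = 1.0, 0.5, 0.2, 0.2

def simulate(n_samples, force):
    """Explicit-Euler simulation of M*xdd = -k*x - c*xd + F.
    Returns rows in the [time, x, dx, F] layout."""
    rows, x, dx = [], 0.0, 0.0
    for i in range(n_samples):
        F = force(i * T)
        ddx = (-k * x - c * dx + F) / M
        rows.append([i * T, x, dx, F])
        x, dx = x + T * dx, dx + T * ddx   # advance the state
    return rows

# Write one data file with the 'time;x;dx;F' layout expected by loadData
with open('data.csv', 'w', newline='') as f:
    writer = csv.writer(f, delimiter=';')
    writer.writerows(simulate(500, lambda t: math.sin(t)))
```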
Train the structured neural network
Using the loaded dataset, the training is performed on the model.
mass_spring_damper.trainModel()
Test the structured neural model
In order to test the results, we need to create an input. In this case it is defined by:
- x with 5 samples, because the sample time is 0.2 s and the time window of x is 1 second;
- F with one sample, because only the last sample is needed.
sample = {'F':[0.5], 'x':[0.25, 0.26, 0.27, 0.28, 0.29]}
results = mass_spring_damper(sample)
print(results)
The result variable is structured as follows:
>> {'x_z_est':[0.4]}
The value represents the output of our estimator (the estimated next position of the mass) and should be as close as possible to x.next() taken from the dataset.
The network can also be tested using a bigger time window:
sample = {'F':[0.5, 0.6], 'x':[0.25, 0.26, 0.27, 0.28, 0.29, 0.30]}
results = mass_spring_damper(sample)
print(results)
The values of x are processed using a moving time window.
The result variable is structured as follows:
>> {'x_z_est':[0.4, 0.42]}
The same output can be generated by calling the network with the flag sampled=True:
sample = {'F':[[0.5],[0.6]], 'x':[[0.25, 0.26, 0.27, 0.28, 0.29],[0.26, 0.27, 0.28, 0.29, 0.30]]}
results = mass_spring_damper(sample,sampled=True)
print(results)
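The relation between the two calling conventions can be sketched as a moving window (an illustration of the input handling, not nnodely's implementation):

```python
def to_sampled(values, window):
    """Rebuild the sampled form: one window per output sample, shifted by one."""
    return [values[i:i + window] for i in range(len(values) - window + 1)]

x = [0.25, 0.26, 0.27, 0.28, 0.29, 0.30]
print(to_sampled(x, 5))  # two windows of 5 samples, shifted by one sample
```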
Structure of the Repository
nnodely folder
This folder contains all the nnodely library files, the main files are the following:
- activation.py this file contains all the activation functions.
- arithmetic.py this file contains the arithmetic functions: +, -, *, /, ^2.
- fir.py this file contains the finite impulse response filter function. It is a linear operation, without bias, on the second dimension.
- fuzzify.py contains the operation for the fuzzification of a variable, commonly used as the activation function of a local model.
- input.py contains the Input class used to create an input for the network.
- linear.py this file contains the linear function, the typical linear operation W*x + b applied on the third dimension.
- localmodel.py this file contains the logic to build a local model.
- output.py contains the Output class used to create an output for the network.
- parameter.py contains the logic to create generic parameters.
- parametricfunction.py contains the user custom functions. These functions can use the PyTorch syntax.
- part.py contains the operations used to select parts of the data.
- trigonometric.py this file contains all the trigonometric functions.
- nnodely.py the main file used to create the structured network.
- model.py contains the PyTorch template model for the structured network.
Tests Folder
This folder contains the unit tests of the library; each file tests a specific functionality.
Examples of usage Folder
The files in the examples folder are a collection of the functionalities of the library. Each file presents in depth a specific functionality or function of the framework. This folder is useful to understand the flexibility and the capabilities of the framework.
Overview on signal shape
This section explains the shapes of the inputs and outputs of the network.
Input and output shape from the structured neural model
The structured network can be called in two ways:
- Non-sampled inputs have shape [total time window size, dim]; they are split into samples internally, once the maximum time window size is known. dim represents the size of the input: if it is not 1, the input is a vector.
- Sampled inputs have shape [number of samples = batch, size of the time window for one sample, dim].
In the example presented before, in the first call the shapes are [1,5,1] for x and [1,1,1] for F; in the second call they are [2,5,1] for x and [2,1,1] for F. In both cases the last dimension can be ignored, as the inputs are scalar.
The output of the structured neural model
The outputs are defined as follows for the different cases:
- if the shape is [batch, 1, 1], the final two dimensions are collapsed; the result is [batch]
- if the shape is [batch, window, 1] the last dimension is collapsed result [batch, window]
- if the shape is [batch, window, dim] the output is equal to [batch, window, dim]
- if the shape is [batch, 1, dim] the output is equal to [batch, 1, dim]
In the example, x_z_est has shape [1] in the first call and [2] in the second, because the window and dim were equal to 1.
Shape of elementwise Arithmetic, Activation, Trigonometric
The shape and the time window remain unchanged; for binary operators the two input shapes must be equal.
input shape = [batch, window, dim] -> output shape = [batch, window, dim]
Shape of Fir input/output
The input must be scalar; the Fir compresses the time dimension (window), which goes to 1. A vector input is not allowed. The output dimension of the Fir is placed on the last dimension, creating a vector output.
input shape = [batch, window, 1] -> output shape = [batch, 1, output dimension of Fir = output_dimension]
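The shape rule can be sketched with NumPy (a minimal stand-in for the Fir operation, not nnodely's implementation):

```python
import numpy as np

def fir(x, weights):
    """Collapse the window dimension with a linear combination (no bias).
    x: [batch, window, 1], weights: [window, output_dimension]."""
    out = x[:, :, 0] @ weights    # [batch, window] @ [window, out] -> [batch, out]
    return out[:, None, :]        # -> [batch, 1, output_dimension]

x = np.ones((8, 5, 1))            # batch=8, window=5, scalar input
y = fir(x, np.ones((5, 3)))       # Fir with output_dimension=3
print(y.shape)  # -> (8, 1, 3)
```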
Shape of Linear input/output
The window remains unchanged and the output dimension is user defined.
input shape = [batch, window, dimension] -> output shape = [batch, window, output dimension of Linear = output_dimension]
Shape of Fuzzy input/output
The function fuzzifies the input and creates a vector output. The window remains unchanged; the input must be scalar. Vector inputs are not allowed.
input shape = [batch, window, 1] -> output shape = [batch, window, number of centers of Fuzzy = len(centers)]
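A possible membership computation looks like this (a NumPy sketch using triangular functions with evenly spaced, arbitrary centers; nnodely's actual fuzzification options may differ):

```python
import numpy as np

def fuzzify(x, centers):
    """Triangular membership of a scalar signal against a set of centers.
    x: [batch, window, 1] -> [batch, window, len(centers)]."""
    centers = np.asarray(centers, dtype=float)
    width = centers[1] - centers[0]     # assume evenly spaced centers
    dist = np.abs(x - centers)          # broadcast to [batch, window, n_centers]
    return np.clip(1.0 - dist / width, 0.0, 1.0)

x = np.full((2, 4, 1), 0.5)
memberships = fuzzify(x, [0.0, 1.0, 2.0])
print(memberships.shape)  # -> (2, 4, 3)
```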
Shape of Part and Select input/output
Part selects a slice of the vector input; the input must be a vector. With the Select operation the dimension becomes 1; the input must be a vector. For both operations, if there is a time component it remains unchanged.
Part input shape = [batch, window, dimension] -> output shape = [batch, window, selected dimension = [j-i]]
Select input shape = [batch, window, dimension] -> output shape = [batch, window, 1]
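Both rules reduce to a slice along the last dimension (a NumPy sketch; the indices are arbitrary):

```python
import numpy as np

x = np.arange(2 * 3 * 5).reshape(2, 3, 5)   # [batch=2, window=3, dim=5]

i, j = 1, 4
part = x[:, :, i:j]              # Part: keep components i..j-1, dim -> j-i
select = x[:, :, 2:3]            # Select: keep one component, dim -> 1

print(part.shape)    # -> (2, 3, 3)
print(select.shape)  # -> (2, 3, 1)
```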
Shape of TimePart, SamplePart, SampleSelect input/output
The TimePart selects a time window from the signal (it works like the time window tw([i,j]), but here i and j are absolute).
The SamplePart selects a range of samples from the signal (it works like the sample window sw([i,j]), but here i and j are absolute).
The SampleSelect selects a specific sample from the signal (it works like the zeta operation z(index), but here the index is absolute).
For all these operations, the batch and dim dimensions remain unchanged.
SamplePart input shape = [batch, window, dimension] -> output shape = [batch, selected sample window = [j-i], dimension]
SampleSelect input shape = [batch, window, dimension] -> output shape = [batch, 1, dimension]
TimePart input shape = [batch, window, dimension] -> output shape = [batch, selected time window = [j-i]/sample_time, dimension]
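The same idea applied along the window axis (a NumPy sketch; the indices and sample time are arbitrary):

```python
import numpy as np

sample_time = 0.2
x = np.zeros((2, 10, 3))                      # [batch, window=10, dim=3]

sample_part = x[:, 2:7, :]                    # SamplePart: absolute samples 2..6
sample_select = x[:, 4:5, :]                  # SampleSelect: one sample, window -> 1
i, j = 0.4, 1.0                               # TimePart bounds in seconds
time_part = x[:, round(i / sample_time):round(j / sample_time), :]

print(sample_part.shape)    # -> (2, 5, 3)
print(sample_select.shape)  # -> (2, 1, 3)
print(time_part.shape)      # -> (2, 3, 3)
```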
Shape of LocalModel input/output
The local model has two main inputs: the activation functions and the inputs. The activation functions have the same shape as the Fuzzy operation:
input shape = [batch, window, 1] -> output shape = [batch, window, number of centers of Fuzzy = len(centers)]
The inputs go through an input function and an output function.
The input shape of the input function can be anything, as long as the output shape of the input function has dimensions [batch, window, 1]; so, for example, the input function cannot be a Fir with an output_dimension different from 1.
The input shape of the output function is [batch, window, 1], while the shape of the output of the output function can be anything.
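The composition can be sketched as a weighted sum of local branches, gated by the fuzzified activation signal (a NumPy sketch; the membership function, centers, and per-branch gains are arbitrary placeholders, not nnodely's implementation):

```python
import numpy as np

def triangular(x, centers):
    """Fuzzify: [batch, window, 1] -> [batch, window, n_centers]."""
    centers = np.asarray(centers, dtype=float)
    width = centers[1] - centers[0]
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, 1.0)

def local_model(x_act, x_in, centers, input_fn, output_fns):
    """One branch per center, gated by the fuzzified activation signal."""
    act = triangular(x_act, centers)                  # [batch, window, n]
    u = input_fn(x_in)                                # must end up [batch, window, 1]
    branches = np.concatenate([f(u) for f in output_fns], axis=2)  # [batch, window, n]
    return np.sum(act * branches, axis=2, keepdims=True)           # [batch, window, 1]

x = np.full((2, 3, 1), 0.5)
gains = [lambda u, g=g: g * u for g in (1.0, 2.0, 3.0)]  # placeholder output functions
y = local_model(x, x, [0.0, 1.0, 2.0], lambda u: u, gains)
print(y.shape)  # -> (2, 3, 1)
```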
Shape of Parameters input/output
Parameter shapes are defined as [window = sw or tw/sample_time, dim]; the dimensions can be defined as a tuple, which is appended to the window.
When the time dimension is not defined, it defaults to 1.
Shape of Parametric Function input/output
Parametric functions take inputs and parameters as arguments.
Parameter dimensions are those defined by the parameters; if the dimensions are not defined, they default to [window = 1, dim = 1].
Inside the parametric function, the dimensions of the inputs are the same as those managed within the PyTorch framework, i.e. [batch, window, dim].
The output dimensions must follow the same convention: [batch, window, dim].
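A parametric function can be sketched as an ordinary function on [batch, window, dim] tensors (NumPy used here in place of PyTorch for illustration; the function body and parameter shape are arbitrary examples):

```python
import numpy as np

def my_parametric_fun(x, p):
    """Custom relation: x is [batch, window, dim], p is a learnable
    parameter with shape [window, dim], broadcast over the batch."""
    return np.tanh(x * p)        # output keeps the [batch, window, dim] convention

x = np.ones((4, 2, 1))           # batch=4, window=2, scalar input
p = np.full((2, 1), 0.5)         # parameter with default-style [window, dim] shape
y = my_parametric_fun(x, p)
print(y.shape)  # -> (4, 2, 1)
```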
License
This project is released under the MIT License.
Project details
Files for release nnodely-0.22.5:
- Source distribution: nnodely-0.22.5.tar.gz (107.3 kB)
  - SHA256: ba57821be8de2c01269e81346b4b8a873385093e7146a859d1401f1be848d217
  - MD5: 0a94b29f4254f1da12e8a3278acd9a3e
  - BLAKE2b-256: a3f6dd4e228bf8e60206e1ffdba52ffe89e00d214e5455063c40eee5c2a40f3d
  - Sigstore transparency entry: 153686817
- Built distribution: nnodely-0.22.5-py3-none-any.whl (71.6 kB, Python 3)
  - SHA256: 990ecc47ff4334b14fa9b70b62eee3a32397f3b75500c4cf96a9e232c6068f10
  - MD5: e84467eb302a2dc767dd4a39bbef7de7
  - BLAKE2b-256: ae8f5a4f40813661e4d502977df89ccd23022588eab87edb04a2cd63f019b2a6
  - Sigstore transparency entry: 153686819
Both files were uploaded via Trusted Publishing (twine/5.1.1, CPython/3.12.7) and carry Sigstore provenance attestations (statement type https://in-toto.io/Statement/v1, predicate type https://docs.pypi.org/attestations/publish/v1) produced by the create-release.yml workflow on tonegas/nnodely (permalink tonegas/nnodely@b3e9ae43538568677917483587903bbc1a844dcf, branch refs/heads/release/v0.22.5, github-hosted runner, trigger event: push, public access).