freq-statespace

Frequency-domain (nonlinear) state-space identification using JAX.

A flexible JAX-based package for nonlinear state-space identification using frequency-domain optimization techniques.

The package is built around the nonlinear Linear Fractional Representation (NL-LFR) model structure, a flexible block-oriented framework in which an LTI system is interconnected with a static nonlinearity through an internal feedback loop. This feedback structure is key to capturing the complex behaviors found in many real-world systems.

Basic usage

The package works with input–output data sequences $u(n)$ and $y(n)$ for $n = 0, \ldots, N-1$, assuming the system is excited by a periodic input and that an integer number of steady-state output periods has been recorded. The specific NL-LFR structure is defined as:

  \begin{align*}
    x(n+1) &= A x(n) + B_u u(n) + B_w w(n),\\
    y(n) &= C_y x(n) + D_{yu} u(n) + D_{yw} w(n),\\
    z(n) &= C_z x(n) + D_{zu} u(n),\\ 
    w(n) &= f\big(z(n)\big),
  \end{align*}

consisting of linear state-space matrices and a static nonlinear function approximator $f(\cdot)$.
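
To make the recursion concrete, here is a minimal JAX sketch of how these equations can be simulated for a given input sequence. The function and argument names are illustrative only and do not reflect the package's internal API.

import jax


def simulate_nllfr(A, Bu, Bw, Cy, Dyu, Dyw, Cz, Dzu, f, u, x0):
    """Simulate the NL-LFR output y(n) for an input sequence u of shape (N, nu)."""

    def step(x, u_n):
        z = Cz @ x + Dzu @ u_n              # input to the static nonlinearity
        w = f(z)                            # static feedback signal w(n) = f(z(n))
        y = Cy @ x + Dyu @ u_n + Dyw @ w    # output equation
        x_next = A @ x + Bu @ u_n + Bw @ w  # state update
        return x_next, y

    _, y = jax.lax.scan(step, x0, u)
    return y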

A typical step-wise identification procedure is as follows:

  1. Best Linear Approximation (BLA) estimation. Initializes the matrices $A$, $B_u$, $C_y$ and $D_{yu}$ using the frequency-domain subspace method, and refines these estimates through iterative optimization. If you're only interested in linear state-space models, you can stop the identification process here.

  2. NL-LFR initialization. Applies the frequency-domain inference and learning method to efficiently initialize the remaining model parameters while keeping the BLA parameters fixed. This step requires $f(\cdot)$ to be a linear-in-the-parameters model, for example one based on polynomial basis functions.

  3. NL-LFR optimization. Performs iterative refinement of all model parameters using time-domain simulations. This is the most computationally demanding step, mainly due to the sequential nature of the forward simulations. Fortunately, the previous steps should have provided an initialization that is already close to a good local minimum.

It is also possible to skip the inference and learning step and go straight to nonlinear optimization. An advantage of this approach is that it puts no restriction on the structure of $f(\cdot)$, i.e., it does not require a model that is linear in the parameters.

Features

  • Provides two workflows for identifying nonlinear LFR state-space models, both of which primarily exploit a frequency-domain formulation that enables inherent parallelism.
  • Leverages JAX for automatic differentiation, JIT compilation, and GPU/TPU acceleration.
  • Supports Optimistix solvers (Levenberg–Marquardt, BFGS, ...) for structured system identification problems.
  • Supports Optax optimizers (Adam, SGD, ...) for large-scale or stochastic optimization (a configuration sketch follows this list).
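
As a rough sketch of how such solvers and optimizers are constructed (the Optimistix and Optax calls below are real; whether and how they are passed to the package's optimize functions is an assumption, not confirmed API):

import optax
import optimistix as optx

# Gauss-Newton-type solver for nonlinear least-squares problems
solver = optx.LevenbergMarquardt(rtol=1e-8, atol=1e-8)

# First-order optimizer suited to large-scale or stochastic problems
optimizer = optax.adam(learning_rate=1e-3)

# Hypothetical usage (keyword name is an assumption):
# nllfr = fss.nonlin.optimize(nllfr, data, solver=solver)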

Installation

pip install freq-statespace

If JAX isn't already installed in your environment, this will install the CPU-only version. For GPU/TPU support (strongly recommended, often many times faster for mid-size to large problems), follow the JAX installation guide.
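
For example, at the time of writing, CUDA 12 wheels can typically be installed as follows (check the JAX installation guide for the command matching your hardware and CUDA version):

pip install -U "jax[cuda12]"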

Quick example

We walk through an example identification pipeline on the Silverbox benchmark dataset, which contains input–output measurements from an electronic circuit that mimics a mass-spring-damper system with a cubic spring nonlinearity.

We first estimate the BLA:

import freq_statespace as fss

data = fss.load_and_preprocess_silverbox_data()  # 8192 x 6 samples

# Step 1: BLA estimation
nx = 2  # state dimension
q = nx + 1  # subspace dimensioning parameter
bla = fss.lin.subspace_id(data, nx, q)  # NRMSE 18.36%, non-iterative
bla = fss.lin.optimize(bla, data)  # NRMSE 13.17%, 6 iters, 1.97ms/iter

Next, we proceed with inference and learning, followed by full nonlinear optimization:

# Step 2: Inference and learning
phi = fss.f_static.basis.Polynomial(nz=1, degree=3)
nllfr = fss.nonlin.inference_and_learning(
    bla, data, phi=phi, nw=1, lambda_w=1e-2, fixed_point_iters=5
)  # NRMSE 1.11%, 42 iters, 13.2ms/iter

# Step 3: Nonlinear optimization
nllfr = fss.nonlin.optimize(nllfr, data)  # NRMSE 0.44%, 100 iters, 387ms/iter

Alternatively, we can skip inference and learning and jump straight to nonlinear optimization. In this example we use a neural network for $f(\cdot)$:

import jax

# Step 2: Nonlinear optimization
neural_net = fss.f_static.NeuralNetwork(
    nw=1, nz=1, num_layers=1, num_neurons_per_layer=10, activation=jax.nn.relu
)
nllfr = fss.nonlin.construct(bla, neural_net)
nllfr = fss.nonlin.optimize(nllfr, data)  # NRMSE 0.54%, 100 iters, 356ms/iter

Note: Iteration timings were measured on an NVIDIA T600 Laptop GPU.

Citation

If you use this code in your work, please cite it as (arXiv link):

@article{floren2025inference,
  title={Inference and Learning of Nonlinear LFR State-Space Models},
  author={Floren, Merijn and No{\"e}l, Jean-Philippe and Swevers, Jan},
  journal={IEEE Control Systems Letters},
  year={2025},
  publisher={IEEE}
}
