
The official dataloader for http://www.nonlinearbenchmark.org/

Project description

nonlinear_benchmarks

The official dataloader of nonlinearbenchmark.org. This toolbox simplifies the process of downloading, loading, and splitting various datasets available on the website. It also provides basic instructions on submitting benchmark results.

Usage Example

For example, the Wiener-Hammerstein benchmark (https://www.nonlinearbenchmark.org/benchmarks/wiener-hammerstein) is loaded as:

import nonlinear_benchmarks
train_val, test = nonlinear_benchmarks.WienerHammerBenchMark()

print(train_val) 
# prints : Input_output_data "train WH" u.shape=(100000,) y.shape=(100000,)
#          sampling_time=1.953e-05
print(test)
# prints: Input_output_data "test WH" u.shape=(78800,) y.shape=(78800,) 
#         sampling_time=1.953e-05 state_initialization_window_length=50

sampling_time = train_val.sampling_time # in seconds
u_train, y_train = train_val            # to unpack or use train_val.u, train_val.y
u_test, y_test   = test                 # to unpack or use test.u,      test.y
print(test.state_initialization_window_length) 
#state_initialization_window_length = The number of samples that can be used at the 
#                                     start of the test set to initialize the model state.

print(train_val[:100])                  # creates a slice of the train_val data from 0 to 100
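To make the role of state_initialization_window_length concrete, here is a self-contained NumPy sketch with a hypothetical toy model (not part of the toolbox): the first n_init measured outputs seed the model "state", after which the model runs in free-run simulation and only the remaining samples count towards the error.

```python
import numpy as np

def simulate_with_init(u, y, n_init, a=0.5, b=0.3):
    # Hypothetical toy model: y[k] = a*y[k-1] + b*u[k-1].
    # The first n_init measured outputs initialize the model state;
    # afterwards the model runs on its own past predictions.
    y_model = np.empty_like(y)
    y_model[:n_init] = y[:n_init]           # state initialization window
    for k in range(n_init, len(u)):
        y_model[k] = a * y_model[k - 1] + b * u[k - 1]
    return y_model

# toy "true" system with the same dynamics, so the error should be ~0
u = np.random.randn(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.5 * y[k - 1] + 0.3 * u[k - 1]

y_model = simulate_with_init(u, y, n_init=50)
# only samples after the initialization window count towards the reported error
err = np.sqrt(np.mean((y[50:] - y_model[50:]) ** 2))
```

The benchmark test sets are evaluated the same way: the first state_initialization_window_length samples may be consumed for initialization, and the error metric is computed on the rest.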

Useful Options

When using the WienerHammerBenchMark (or any other benchmark function), you can customize the behavior with the following options:

  • data_file_locations=True : Returns the raw data file locations.
  • train_test_split=False : Retrieves the entire dataset without splitting.
  • force_download=True : Forces (re-)downloading of benchmark files.
  • url= : Allows manual override of the download link (contact maintainers if the default link is broken).
  • atleast_2d=True: Converts input/output arrays to at least 2D shape (e.g. u.shape = (250,) becomes u.shape = (250, 1)).
  • always_return_tuples_of_datasets=True: Even if there is only a single training or test set, a list is still returned (i.e. it applies [train] if not isinstance(train, list) else train).
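As an illustration of the reshaping that atleast_2d=True performs, here is the equivalent operation in plain NumPy (independent of the toolbox):

```python
import numpy as np

u = np.random.randn(250)      # a 1D signal, as returned by default
u_2d = u.reshape(-1, 1)       # the (250,) -> (250, 1) conversion done by atleast_2d=True

print(u.shape)     # (250,)
print(u_2d.shape)  # (250, 1)
```

The 2D form is convenient for libraries that expect (samples, channels) arrays.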

Install

pip install nonlinear-benchmarks

Datasets

The datasets listed below are implemented with an official train/test split.

(N.B. datasets without an official train/test split can be found in nonlinear_benchmarks.not_splitted_benchmarks)

Electro-Mechanical Positioning System (EMPS)


train_val, test = nonlinear_benchmarks.EMPS()
print(test.state_initialization_window_length) # = 20
train_val_u, train_val_y = train_val
test_u, test_y = test

Benchmark Results Submission template: submission_examples/EMPS.py (report accuracy in [ticks/s])

Coupled Electric Drives (CED)


train_val, test = nonlinear_benchmarks.CED()
print(test[0].state_initialization_window_length) # = 10
(train_val_u_1, train_val_y_1), (train_val_u_2, train_val_y_2) = train_val
(test_u_1, test_y_1), (test_u_2, test_y_2) = test

This dataset consists of two time series where the first has a low input amplitude (train_val_1 and test_1) and the second a high input amplitude (train_val_2 and test_2).

You can use both training sets in your training, and please report the RMSE values on both test sets separately.
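Reporting both CED test errors separately might look like the following sketch (RMSE written out in plain NumPy; the arrays here are placeholder data standing in for the real test signals and model outputs):

```python
import numpy as np

def rmse(y_true, y_model):
    # root mean square error between measured and simulated output
    return np.sqrt(np.mean((y_true - y_model) ** 2))

# placeholder data standing in for the two CED test sets and model outputs
test_y_1 = np.random.randn(300)
test_y_2 = np.random.randn(300)
y_model_1 = test_y_1 + 0.01 * np.random.randn(300)
y_model_2 = test_y_2 + 0.01 * np.random.randn(300)

n = 10  # state_initialization_window_length for CED
print(f"RMSE test 1: {rmse(test_y_1[n:], y_model_1[n:])} mm")
print(f"RMSE test 2: {rmse(test_y_2[n:], y_model_2[n:])} mm")
```

Both numbers are reported, rather than their average, so that low- and high-amplitude behaviour can be judged separately.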

Benchmark Results Submission template: submission_examples/CED.py (report accuracy in [mm])

Cascaded Tanks with Overflow (Cascaded_Tanks)


train_val, test = nonlinear_benchmarks.Cascaded_Tanks()
print(test.state_initialization_window_length) # = 50
train_val_u, train_val_y = train_val
test_u, test_y = test

Benchmark Results Submission template: submission_examples/Cascaded_Tanks.py (report accuracy in [V])

Wiener-Hammerstein System (WienerHammerBenchMark)


train_val, test = nonlinear_benchmarks.WienerHammerBenchMark()
print(test.state_initialization_window_length) # = 50
train_val_u, train_val_y = train_val
test_u, test_y = test

Benchmark Results Submission template: submission_examples/WienerHammerBenchMark.py (report accuracy in [mV])

Silverbox


train_val, test = nonlinear_benchmarks.Silverbox()
multisine_train_val = train_val
print(test[0].state_initialization_window_length) # = 50 (for all test sets)
test_multisine, test_arrow_full, test_arrow_no_extrapolation = test

Benchmark Results Submission template: submission_examples/silverbox.py (report accuracy in [mV])

Note that test_arrow_no_extrapolation is a subset of test_arrow_full.

F-16 Ground Vibration Test


train_val, test = nonlinear_benchmarks.F16()
train_val  # 8 datasets with lengths 73728 and 108477
test       # 6 datasets with lengths 73728 and 108477

Benchmark Results Submission template: submission_examples/F16.py

Parallel Wiener-Hammerstein System


train_val, test = nonlinear_benchmarks.ParWH()
train_val  # 100 datasets, each of length 32768 (2 periods), generated from multisine
           # inputs with 5 different amplitudes and 20 different phase realizations
test       # 5 datasets, each of length 32768 (2 periods), generated from multisine
           # inputs with 5 different amplitudes

Benchmark Results Submission template: submission_examples/ParallelWH.py (report accuracy in [mV])

Error Metrics

We also provide error metrics in nonlinear_benchmarks.error_metrics.

import numpy as np
from nonlinear_benchmarks.error_metrics import RMSE, NRMSE, R_squared, MAE, fit_index

# generate example output data and a model prediction
y_true = np.random.randn(100)
y_model = y_true + np.random.randn(100)/100

print(f"RMSE: {RMSE(y_true, y_model)} (Root Mean Square Error)")
print(f"NRMSE: {NRMSE(y_true, y_model)} (Normalized Root Mean Square Error)")
print(f"R-squared: {R_squared(y_true, y_model)} (coefficient of determination R^2)")
print(f"MAE: {MAE(y_true, y_model)} (Mean Absolute Error)")
print(f"fit index: {fit_index(y_true, y_model)} (https://arxiv.org/pdf/1902.00683.pdf page 31)")
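For reference, the standard definitions behind these metrics can be written in a few lines of NumPy. This is a sketch from the usual textbook formulas, not the toolbox source; in particular, NRMSE normalization conventions vary, and the standard deviation of y_true is assumed here.

```python
import numpy as np

def RMSE(y_true, y_model):
    # root mean square error
    return np.sqrt(np.mean((y_true - y_model) ** 2))

def NRMSE(y_true, y_model):
    # RMSE normalized by the standard deviation of the true output (assumed convention)
    return RMSE(y_true, y_model) / np.std(y_true)

def R_squared(y_true, y_model):
    # coefficient of determination R^2
    residual = np.sum((y_true - y_model) ** 2)
    total = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - residual / total

def MAE(y_true, y_model):
    # mean absolute error
    return np.mean(np.abs(y_true - y_model))

def fit_index(y_true, y_model):
    # 100 * (1 - ||y_true - y_model|| / ||y_true - mean(y_true)||)
    return 100 * (1 - np.linalg.norm(y_true - y_model)
                  / np.linalg.norm(y_true - np.mean(y_true)))
```

A perfect model gives RMSE = 0, NRMSE = 0, R^2 = 1, MAE = 0, and a fit index of 100.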

Benchmark Result Submission

If you would like to submit a benchmark result, you can do so through this Google form. When reporting benchmark results, please use the toolbox as follows:

train_val, test = nonlinear_benchmarks.WienerHammerBenchMark()
n = test.state_initialization_window_length

# y_model = your model output, computed using only test.u and test.y[:n]

RMSE_result = RMSE(test.y[n:], y_model[n:])  # skip the first n samples
print(RMSE_result)  # report this number

For details specific to each benchmark see the submission template: submission_examples/
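Putting the evaluation convention together with a toy stand-in for the model output (placeholder data; a real submission would use the actual benchmark signals and your trained model):

```python
import numpy as np

def RMSE(y_true, y_model):
    return np.sqrt(np.mean((y_true - y_model) ** 2))

n = 50                                    # state_initialization_window_length
test_y = np.random.randn(1000)            # placeholder standing in for test.y
y_model = test_y + 0.005 * np.random.randn(1000)  # placeholder model output
y_model[:n] = test_y[:n]                  # first n samples only initialize the state

RMSE_result = RMSE(test_y[n:], y_model[n:])  # error evaluated only after the window
```

The key point is that the first n samples are excluded from the reported error on every benchmark, since they may be consumed to initialize the model state.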
