Benchmark problems for OpenTURNS
otbenchmark
What is it?
The goal of this project is to provide benchmark classes for OpenTURNS. It provides a framework to create use cases which are associated with reference values. Such a benchmark problem may be used to check that a given algorithm works as expected and to measure its performance in terms of accuracy and speed.
Two categories of benchmark classes are currently provided:
 reliability problems, i.e. estimating the probability that the output of a function is less than a threshold,
 sensitivity problems, i.e. estimating sensitivity indices, for example Sobol' indices.
Most of the reliability problems were adapted from the RPRepo :
https://rprepo.readthedocs.io/en/latest/
This module allows you to create a problem, run an algorithm and compare the computed probability with a reference probability:
import openturns as ot
import otbenchmark as otb

problem = otb.RminusSReliability()
event = problem.getEvent()
pfReference = problem.getProbability()  # exact probability
# Create the Monte Carlo algorithm
algoProb = ot.ProbabilitySimulationAlgorithm(event)
algoProb.setMaximumOuterSampling(1000)
algoProb.setMaximumCoefficientOfVariation(0.01)
algoProb.run()
resultAlgo = algoProb.getResult()
pf = resultAlgo.getProbabilityEstimate()
absoluteError = abs(pf - pfReference)
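The snippet above relies on the library to supply the reference value. For the R-S case, that reference has a classic closed form: if R and S are independent Gaussians, then R - S is Gaussian as well, so the failure probability can be computed exactly. The following pure-Python sketch illustrates the comparison without requiring OpenTURNS; the distributions R ~ N(4, 1) and S ~ N(2, 1) are assumptions made here for illustration and may differ from the library's actual parameters.

```python
import random
from math import sqrt
from statistics import NormalDist

# Illustrative sketch, independent of otbenchmark: estimate the failure
# probability P(R - S < 0) by Monte Carlo and compare it with the exact
# value.  The distributions R ~ N(4, 1) and S ~ N(2, 1) are assumptions
# made here for illustration.
random.seed(0)
n = 100_000
failures = sum(
    1 for _ in range(n)
    if random.gauss(4.0, 1.0) - random.gauss(2.0, 1.0) < 0.0
)
pf = failures / n

# Exact value: R - S ~ N(2, sqrt(2)), so pf = Phi(-2 / sqrt(2)).
pfReference = NormalDist().cdf(-2.0 / sqrt(2.0))
absoluteError = abs(pf - pfReference)
print(pf, pfReference, absoluteError)
```

With these assumed parameters the exact failure probability is about 0.0786, and the Monte Carlo estimate should agree to two or three decimal places.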
Moreover, we can loop over all the problems and run several methods on each of them.
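As a sketch of what such a loop looks like, the pure-Python code below iterates over two toy stand-ins for reliability problems (each exposing an exact probability and a sampler) and gathers the errors in a results table; the real benchmark would iterate over the otbenchmark problem classes instead. The stand-in problems are assumptions made for illustration.

```python
import random
from math import sqrt
from statistics import NormalDist

# Toy stand-ins for the library's reliability problems: each "problem"
# is a pair (exact failure probability, sampler of the failure event).
def make_rs_problem(mu_r, mu_s):
    """Failure event R - S < 0 with R ~ N(mu_r, 1) and S ~ N(mu_s, 1)."""
    exact = NormalDist().cdf(-(mu_r - mu_s) / sqrt(2.0))
    def sample():
        return random.gauss(mu_r, 1.0) - random.gauss(mu_s, 1.0) < 0.0
    return exact, sample

problems = {
    "RS-easy": make_rs_problem(4.0, 2.0),
    "RS-hard": make_rs_problem(5.0, 2.0),
}

# Benchmarking loop: run the same crude Monte Carlo method on every
# problem and record the absolute error against the reference value.
random.seed(0)
n = 50_000
results = {}
for name, (pf_exact, sample) in problems.items():
    pf_hat = sum(sample() for _ in range(n)) / n
    results[name] = (pf_exact, pf_hat, abs(pf_hat - pf_exact))

for name, (pf_exact, pf_hat, err) in results.items():
    print(f"{name}: exact={pf_exact:.5f} estimate={pf_hat:.5f} error={err:.5f}")
```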
Authors
 Michaël Baudin
 Youssef Jebroun
 Elias Fekhari
 Vincent Chabridon
Installation
The module is available on PyPI and can be installed with pip:
pip install otbenchmark
Alternatively, to install from the sources, use the "git clone" command:
git clone https://github.com/mbaudin47/otbenchmark.git
cd otbenchmark
python setup.py install
Getting help
The code has docstrings, so Python's built-in "help" function is a convenient way to get information on any class or method. Another way of getting help is to read the examples, which are presented in the next section.
Overview of the benchmark problems
The simplest use case of the library is in Analysis of the RS case, which shows how to use this problem with two variables to estimate its failure probability. In the Benchmark the GSobol test function problem, we show how to estimate sensitivity indices on the GSobol' test function. When using a reliability problem, it is convenient to create a given algorithm, e.g. subset sampling, based on a given problem: the Reliability factories example shows how to do this for Monte Carlo, FORM, SORM, subset sampling and FORM importance sampling.
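The GSobol' function is a popular sensitivity benchmark precisely because its Sobol' indices are known in closed form: for g(x) = prod_i (|4 x_i - 2| + a_i) / (1 + a_i) with independent inputs uniform on [0, 1], the partial variances are V_i = 1 / (3 (1 + a_i)^2) and the total variance is V = prod_i (1 + V_i) - 1. The sketch below computes the exact first-order indices; the coefficients a = (0, 9, 99) are a common choice assumed here for illustration and may differ from the values used by the library.

```python
from math import prod

# Exact first-order Sobol' indices of the GSobol' test function
# g(x) = prod_i (|4*x_i - 2| + a_i) / (1 + a_i), x_i ~ U(0, 1).
# The coefficients a = (0, 9, 99) are an assumption made here.
a = [0.0, 9.0, 99.0]
partial = [1.0 / (3.0 * (1.0 + ai) ** 2) for ai in a]  # V_i
variance = prod(1.0 + vi for vi in partial) - 1.0       # total variance V
first_order = [vi / variance for vi in partial]         # S_i = V_i / V
print(first_order)
```

With this choice of coefficients the first input dominates (S_1 is close to 0.99), which makes the function a good test of whether an estimator correctly ranks the inputs.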
The library currently has:
 26 reliability problems,
 4 sensitivity problems.
One of the most useful features of the library is the ability to perform a benchmark, that is, to loop over the problems. In Benchmark on a given set of problems, we run several algorithms on all the problems. The associated statistics are gathered in a table, presented in Benchmark the reliability solvers on the problems. In Check reference probabilities with Monte Carlo, we compare the exact (reference) probability with a Monte Carlo estimate based on a large sample.
It is often useful to draw a sensitivity or reliability problem. Since many of these problems have dimensions larger than two, this raises a number of practical issues.
 Event: Draw events shows how to draw a multidimensional event,
 Function: Draw cross cut of functions shows how to draw cross cuts of functions,
 Distribution: Draw cross cuts of distributions shows how to draw cross cuts of distributions and Draw conditional distributions plots conditional distributions.
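A cross cut fixes all inputs of a function at a reference point except two, then evaluates the function on a two-dimensional grid of the remaining pair; the resulting matrix can be fed to any contour-plotting routine. The sketch below builds such a grid in pure Python; the toy four-dimensional function and the reference point are illustrative assumptions, not the library's API.

```python
# Sketch of a cross cut: for a function of dimension d > 2, fix every
# input at a reference point except two, then evaluate on a 2-D grid.
def g(x):
    return sum(xi ** 2 for xi in x)  # toy 4-D function (assumption)

reference = [0.5, 0.5, 0.5, 0.5]  # point at which the other inputs are fixed
i, j = 0, 2                       # the two free coordinates
n = 21                            # grid resolution per axis
lo, hi = -1.0, 1.0                # bounds of the cross cut

grid = []
for p in range(n):
    u = lo + (hi - lo) * p / (n - 1)
    row = []
    for q in range(n):
        v = lo + (hi - lo) * q / (n - 1)
        x = list(reference)
        x[i], x[j] = u, v
        row.append(g(x))
    grid.append(row)

# grid[p][q] now holds the function values over the (x_i, x_j) plane.
print(len(grid), len(grid[0]), grid[10][10])
```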
The "DrawEvent" class that the module provides typically plots the following figure for the RP57 problem.
The following figure plots cross cuts of the limit state function for the RP8 problem.
The Convergence of Monte-Carlo to estimate the probability in a reliability problem example might interest those who want to plot convergence graphics.
The BBRC example shows how to use the "evaluate" function that the module provides to evaluate a function which is available on the remote BBRC server.
The Examples directory has many other examples: please read the notebooks and see if one of the examples fits your needs.
TODO list

The FORM algorithm does not perform correctly on RP75, RP111 and the Four-branch serial system. An explanation would be required for this.

The computeCDF() method does not perform correctly on many problems. An explanation would be required for this.
Hashes for otbenchmark-0.1.1-py3-none-any.whl:
 SHA256: a4dc7f257f0e74317b2dc581f3fd90a2a7411791bb2a7184c7f0f6e942be6135
 MD5: 9153176028b6e11777017d29c800d7c1
 BLAKE2b-256: c876e55bbb7be27781af68a85563fd241d17391a313da424d3facb7c234063ad