Scientific software often relies on stochasticity, e.g. for Monte Carlo integration or simulating the Ising model. Testing such non-deterministic code is difficult because its outputs vary from run to run. This package offers a bootstrap test to validate stochastic algorithms.
For example, suppose we want to implement the expected value of the log-normal distribution with location parameter :math:`\mu` and scale parameter :math:`\sigma`.
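For reference, this expectation follows from the moment generating function of the normal distribution: if :math:`Y \sim \mathrm{N}(\mu, \sigma^2)` and :math:`X = e^Y`, then

.. math::

   E[X] = E\left[e^Y\right] = \exp\left(\mu + \frac{\sigma^2}{2}\right).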
>>> import numpy as np
>>>
>>> def lognormal_expectation(mu, sigma):
...     return np.exp(mu + sigma ** 2 / 2)
>>>
>>> def lognormal_expectation_wrong(mu, sigma):
...     return np.exp(mu + sigma ** 2)
We can validate our implementation by simulating from a log-normal distribution and comparing with the bootstrapped mean.
>>> from pytest_bootstrap import bootstrap_test
>>>
>>> mu = -1
>>> sigma = 1
>>> reference = lognormal_expectation(mu, sigma)
>>> x = np.exp(np.random.normal(mu, sigma, 1000))
>>> result = bootstrap_test(x, np.mean, reference)
The test returns a summary, including the bootstrapped statistics and the bounds of the bootstrap confidence interval.
>>> result.keys()
dict_keys(['alpha', 'reference', 'lower', 'upper', 'z_score', 'median', 'iqr', 'statistics'])
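To build intuition, a bootstrap test of this kind can be sketched as follows. This is a minimal illustration, not pytest-bootstrap's actual implementation; the function name :code:`bootstrap_summary` and its internals are assumptions, and only the summary keys mirror the result above.

```python
import numpy as np

def bootstrap_summary(x, statistic, reference, num_samples=1000, alpha=0.01):
    """Bootstrap `statistic` over resamples of `x` and summarise it
    against a theoretical `reference` value."""
    rng = np.random.default_rng(0)
    # Evaluate the statistic on resamples drawn with replacement.
    stats = np.array([statistic(rng.choice(x, size=len(x), replace=True))
                      for _ in range(num_samples)])
    # Central 1 - alpha interval of the bootstrapped statistics.
    lower, upper = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    # Standardised distance of the reference from the bootstrap distribution.
    z_score = (reference - stats.mean()) / stats.std()
    return {"alpha": alpha, "reference": reference, "lower": lower,
            "upper": upper, "z_score": z_score, "statistics": stats}

mu, sigma = -1, 1
x = np.exp(np.random.default_rng(1).normal(mu, sigma, 1000))
correct = bootstrap_summary(x, np.mean, np.exp(mu + sigma ** 2 / 2))
wrong = bootstrap_summary(x, np.mean, np.exp(mu + sigma ** 2))
# The wrong reference ends up much further from the bootstrap
# distribution (a much larger absolute z-score) than the correct one.
```

A test that raises on failure, as :code:`bootstrap_test` does, would simply check whether the reference falls inside the interval :code:`[lower, upper]`.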
Comparing with our incorrect implementation reveals the bug.
>>> reference_wrong = lognormal_expectation_wrong(mu, sigma)
>>> result = bootstrap_test(x, np.mean, reference_wrong)
Traceback (most recent call last):
...
pytest_bootstrap.BootstrapTestError: {'alpha': 0.01, 'reference': 1.0, ...
Visualising the bootstrapped distribution can help identify discrepancies between the bootstrapped statistics and the theoretical reference value.
See :code:`examples/lognormal.py`.
Interface
---------
.. automodule:: pytest_bootstrap
   :members: