
A ``py.test`` fixture for benchmarking code.

Project description


A py.test fixture for benchmarking code. It groups the tests into rounds that are calibrated to the chosen timer. See the calibration and FAQ pages of the documentation.

  • Free software: BSD license

Installation

pip install pytest-benchmark

Documentation

Available at: pytest-benchmark.readthedocs.org.

Examples

This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.

Example:

import time

def something(duration=0.000001):
    """
    Function that needs some serious benchmarking.
    """
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123

def test_my_stuff(benchmark):
    # benchmark something
    result = benchmark(something)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result, fast functions
    # are no good if they return incorrect results :-)
    assert result == 123

You can also pass extra arguments:

def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.02)

Or even keyword arguments:

def test_my_stuff(benchmark):
    # keyword arguments are forwarded to the benchmarked function
    benchmark(something, duration=0.02)

Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be convenient:

def test_my_stuff(benchmark):
    @benchmark
    def something():  # unnecessary function call
        time.sleep(0.000001)

A better way is to just benchmark the final function:

def test_my_stuff(benchmark):
    benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine-grained control over how the benchmark is run (like a setup function, or exact control of iterations and rounds), there's a special mode - pedantic:

def my_special_setup():
    ...

def test_with_setup(benchmark):
    benchmark.pedantic(something, setup=my_special_setup,
                       args=(1, 2, 3), kwargs={'foo': 'bar'},
                       iterations=10, rounds=100)
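
For instance, here is a minimal sketch of a pedantic run with a per-round setup (the fib function, the helper and the numbers are made up for illustration; setup is kept with the default of one iteration per round):

def fib(n):
    # Stand-in for the code under test.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def expensive_preparation():
    # Hypothetical helper: work that should not end up in the timings.
    pass

def test_fib_with_setup(benchmark):
    # setup runs before every round and is excluded from the measurement;
    # rounds pins down exactly how many times fib(20) is timed.
    result = benchmark.pedantic(fib, args=(20,), setup=expensive_preparation, rounds=50)
    # like benchmark(...), pedantic passes back the target's return value
    assert result == 6765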

Screenshots

Normal run:

Screenshot of py.test summary

Compare mode (--benchmark-compare):

Screenshot of py.test summary in compare mode
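
One way to produce such a comparison (the run id 0001 below is only an example; --benchmark-autosave prints the id it actually assigned when it saves a run):

py.test tests/ --benchmark-autosave
py.test tests/ --benchmark-compare=0001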

Histogram (--benchmark-histogram):

Histogram sample

Also, it has nice tooltips.

Development

To run all the tests run:

tox

Credits

Changelog

3.0.0 (2015-08-11)

  • Improved --help text for --benchmark-histogram, --benchmark-save and --benchmark-autosave.

  • Benchmarks that raised exceptions during the test now have special highlighting in the result table (red background).

  • Benchmarks that raised exceptions are not included in the saved data anymore (you can still get the old behavior back by implementing pytest_benchmark_generate_json in your conftest.py).

  • The plugin will use pytest’s warning system for warnings. There are 2 categories: WBENCHMARK-C (compare mode issues) and WBENCHMARK-U (usage issues).

  • The red warnings are only shown if --benchmark-verbose is used. They will still always be shown in the pytest-warnings section.

  • Using the benchmark fixture more than once is disallowed (it will raise an exception).

  • Not using the benchmark fixture (but requiring it) will issue a warning (WBENCHMARK-U1).

3.0.0rc1 (2015-10-25)

  • Changed --benchmark-warmup to take an optional value and to activate automatically on PyPy (the default value is auto). MAY BE BACKWARDS INCOMPATIBLE

  • Removed the version check in compare mode (previously there was a warning if the current version was lower than the one in the file).

3.0.0b3 (2015-10-22)

  • Changed how the comparison is displayed in the result table. Previous runs are now shown as normal runs and names get a special suffix indicating the origin, e.g. “test_foobar (NOW)” or “test_foobar (0123)”.

  • Fixed sorting in the result table. Now rows are sorted by the sort column, and then by name.

  • Show the plugin version in the header section.

  • Moved the display of default options to the header section.

3.0.0b2 (2015-10-17)

  • Add a --benchmark-disable option. It's automatically activated when xdist is on.

  • When xdist is on or statistics can’t be imported then --benchmark-disable is automatically activated (instead of --benchmark-skip). BACKWARDS INCOMPATIBLE

  • Replace the deprecated __multicall__ with the new hookwrapper system.

  • Improved description for --benchmark-max-time.

3.0.0b1 (2015-10-13)

  • Tests are sorted alphabetically in the results table.

  • Failing to import statistics doesn’t create hard failures anymore. Benchmarks are automatically skipped if import failure occurs. This would happen on Python 3.2 (or earlier Python 3).

3.0.0a4 (2015-10-08)

  • Changed how failures to get commit info are handled: now they are soft failures. Previously it made the whole test suite fail, just because you didn’t have git/hg installed.

3.0.0a3 (2015-10-02)

  • Added progress indication when computing stats.

3.0.0a2 (2015-09-30)

  • Fixed accidental output capturing caused by capturemanager misuse.

3.0.0a1 (2015-09-13)

  • Added JSON report saving (the --benchmark-json command line argument).

  • Added benchmark data storage (the --benchmark-save and --benchmark-autosave command line arguments).

  • Added comparison to previous runs (the --benchmark-compare command line argument).

  • Added performance regression checks (the --benchmark-compare-fail command line argument).

  • Added the possibility to group by various parts of the test name (the --benchmark-group-by command line argument).

  • Added historical plotting (the --benchmark-histogram command line argument).

  • Added option to fine tune the calibration (the --benchmark-calibration-precision command line argument and calibration_precision marker option).

  • Changed benchmark_weave to no longer be a context manager. Cleanup is performed automatically. BACKWARDS INCOMPATIBLE

  • Added benchmark.weave method (alternative to benchmark_weave fixture).

  • Added new hooks to allow customization (a minimal conftest.py sketch follows this changelog entry):

    • pytest_benchmark_generate_machine_info(config)

    • pytest_benchmark_update_machine_info(config, info)

    • pytest_benchmark_generate_commit_info(config)

    • pytest_benchmark_update_commit_info(config, info)

    • pytest_benchmark_group_stats(config, benchmarks, group_by)

    • pytest_benchmark_generate_json(config, benchmarks, include_data)

    • pytest_benchmark_update_json(config, benchmarks, output_json)

    • pytest_benchmark_compare_machine_info(config, benchmarksession, machine_info, compared_benchmark)

  • Changed the timing code:

    • Tracers are now automatically disabled when running the test function (like coverage tracers).

    • Fixed an issue with the calibration code getting stuck.

  • Added pedantic mode via benchmark.pedantic(). This mode disables calibration and allows a setup function.
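
As a minimal sketch of the hook mechanism (the hook name and signature are taken from the list above; the field added here is purely illustrative), a conftest.py could post-process the saved JSON like this:

# conftest.py
def pytest_benchmark_update_json(config, benchmarks, output_json):
    # Attach extra metadata to the report before it is written to disk.
    output_json["build_info"] = {"note": "example annotation"}  # hypothetical field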

2.5.0 (2015-06-20)

  • Improved test suite a bit (not using cram anymore).

  • Improved help text on the --benchmark-warmup option.

  • Made warmup_iterations available as a marker argument (eg: @pytest.mark.benchmark(warmup_iterations=1234)).

  • Fixed --benchmark-verbose’s printouts to work properly with output capturing.

  • Changed how warmup iterations are computed (the total number of iterations is now used, instead of just the rounds).

  • Fixed a bug where calibration would run forever.

  • Disabled red/green coloring (it was kinda random) when there’s a single test in the results table.

2.4.1 (2015-03-16)

  • Fix regression, plugin was raising ValueError: no option named 'dist' when xdist wasn’t installed.

2.4.0 (2015-03-12)

  • Add a benchmark_weave experimental fixture.

  • Fix internal failures when xdist plugin is active.

  • Automatically disable benchmarks if xdist is active.

2.3.0 (2014-12-27)

  • Moved the warmup into the calibration phase. This solves issues with benchmarking on PyPy.

    Added a --benchmark-warmup-iterations option to fine-tune that.

2.2.0 (2014-12-26)

  • Make the default rounds smaller (so that variance is more accurate).

  • Show the defaults in the --help section.

2.1.0 (2014-12-20)

  • Simplify the calibration code so that the round is smaller.

  • Add diagnostic output for calibration code (--benchmark-verbose).

2.0.0 (2014-12-19)

  • Replace the context-manager based API with a simple callback interface. BACKWARDS INCOMPATIBLE

  • Implement timer calibration for precise measurements.

1.0.0 (2014-12-15)

  • Use a precise default timer for PyPy.

? (?)

  • Readme and styling fixes (contributed by Marc Abramowitz).

  • Lots of wild changes.

