
Python module to run and analyze benchmarks

Project description


The Python pyperf module is a toolkit to write, run and analyze benchmarks.

Features

  • Simple API to run reliable benchmarks

  • Automatically calibrate a benchmark for a time budget.

  • Spawn multiple worker processes.

  • Compute the mean and standard deviation.

  • Detect if a benchmark result seems unstable.

  • JSON format to store benchmark results.

  • Support multiple units: seconds, bytes and integer.

Usage

To run a benchmark, use the pyperf timeit command (the result is written into bench.json):

$ python3 -m pyperf timeit '[1,2]*1000' -o bench.json
.....................
Mean +- std dev: 4.22 us +- 0.08 us

Or write a benchmark script bench.py:

#!/usr/bin/env python3
import pyperf

runner = pyperf.Runner()
runner.timeit(name="sort a sorted list",
              stmt="sorted(s, key=f)",
              setup="f = lambda x: x; s = list(range(1000))")

See the API docs for full details on the timeit function and the Runner class. To run the script and dump the results into a file named bench.json:

$ python3 bench.py -o bench.json
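
The stored results can also be inspected from Python. The following is a minimal sketch, assuming bench.json was produced by the command above and using the loading and accessor methods of pyperf's Python API (see the API docs):

import pyperf

# Load the result file written by "bench.py -o bench.json".
suite = pyperf.BenchmarkSuite.load("bench.json")
for bench in suite.get_benchmarks():
    # Values of timing benchmarks are stored in seconds.
    print("%s: mean %.2f us +- %.2f us"
          % (bench.get_name(), bench.mean() * 1e6, bench.stdev() * 1e6))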

To analyze benchmark results, use the pyperf stats command:

$ python3 -m pyperf stats telco.json
Total duration: 29.2 sec
Start date: 2016-10-21 03:14:19
End date: 2016-10-21 03:14:53
Raw value minimum: 177 ms
Raw value maximum: 183 ms

Number of calibration run: 1
Number of run with values: 40
Total number of run: 41

Number of warmup per run: 1
Number of value per run: 3
Loop iterations per value: 8
Total number of values: 120

Minimum:         22.1 ms
Median +- MAD:   22.5 ms +- 0.1 ms
Mean +- std dev: 22.5 ms +- 0.2 ms
Maximum:         22.9 ms

  0th percentile: 22.1 ms (-2% of the mean) -- minimum
  5th percentile: 22.3 ms (-1% of the mean)
 25th percentile: 22.4 ms (-1% of the mean) -- Q1
 50th percentile: 22.5 ms (-0% of the mean) -- median
 75th percentile: 22.7 ms (+1% of the mean) -- Q3
 95th percentile: 22.9 ms (+2% of the mean)
100th percentile: 22.9 ms (+2% of the mean) -- maximum

Number of outlier (out of 22.0 ms..23.0 ms): 0
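
As an illustration of what these figures are (not of how pyperf computes them internally), the headline statistics can be recomputed from the stored values with Python's standard statistics module. A sketch, assuming telco.json is the result file analyzed above:

import statistics
import pyperf

# Assumes telco.json is a pyperf result file, as in the example above.
bench = pyperf.BenchmarkSuite.load("telco.json").get_benchmarks()[0]
values = bench.get_values()  # all measured values, in seconds

print("Minimum:         %.1f ms" % (min(values) * 1e3))
print("Mean +- std dev: %.1f ms +- %.1f ms"
      % (statistics.mean(values) * 1e3, statistics.stdev(values) * 1e3))
print("Median:          %.1f ms" % (statistics.median(values) * 1e3))
print("Maximum:         %.1f ms" % (max(values) * 1e3))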

There’s also:

  • The pyperf compare_to command tests whether a difference is significant. It supports comparing multiple benchmark suites (each made of multiple benchmarks):

    $ python3 -m pyperf compare_to --table mult_list_py36.json mult_list_py37.json mult_list_py38.json
    +----------------+----------------+-----------------------+-----------------------+
    | Benchmark      | mult_list_py36 | mult_list_py37        | mult_list_py38        |
    +================+================+=======================+=======================+
    | [1]*1000       | 2.13 us        | 2.09 us: 1.02x faster | not significant       |
    +----------------+----------------+-----------------------+-----------------------+
    | [1,2]*1000     | 3.70 us        | 5.28 us: 1.42x slower | 3.18 us: 1.16x faster |
    +----------------+----------------+-----------------------+-----------------------+
    | [1,2,3]*1000   | 4.61 us        | 6.05 us: 1.31x slower | 4.17 us: 1.11x faster |
    +----------------+----------------+-----------------------+-----------------------+
    | Geometric mean | (ref)          | 1.22x slower          | 1.09x faster          |
    +----------------+----------------+-----------------------+-----------------------+
  • pyperf system tune command to tune your system to run stable benchmarks.

  • Automatically collect metadata on the computer and the benchmark: use the pyperf metadata command to display them, or the pyperf collect_metadata command to collect them manually. The stored metadata can also be read back from a result file, as sketched after this list.

  • --track-memory and --tracemalloc options to track the memory usage of a benchmark.
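
For the metadata item above: the metadata collected with a benchmark is stored in the result file and can be read back from Python. A minimal sketch, assuming bench.json exists and using the get_metadata() accessor from the API docs:

import pyperf

# Assumes bench.json was produced by one of the commands above.
bench = pyperf.BenchmarkSuite.load("bench.json").get_benchmarks()[0]

# Metadata common to all runs: CPU model, Python version, hostname, ...
for key, value in sorted(bench.get_metadata().items()):
    print("%s: %s" % (key, value))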



Download files

Download the file for your platform.

Source Distribution

pyperf-2.3.1.tar.gz (202.8 kB)

Uploaded: Source

Built Distribution


pyperf-2.3.1-py2.py3-none-any.whl (88.9 kB)

Uploaded: Python 2, Python 3

File details

Details for the file pyperf-2.3.1.tar.gz.

File metadata

  • Download URL: pyperf-2.3.1.tar.gz
  • Upload date:
  • Size: 202.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/32.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.62.3 importlib-metadata/4.10.1 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.10

File hashes

Hashes for pyperf-2.3.1.tar.gz:

  • SHA256: 4ac2e2cf724ab9b525227c3b4a69f16ab5c714ea5bad61c974e0c5d5787238a1
  • MD5: 687e9aa808a7876cfa32ee087a53397a
  • BLAKE2b-256: d64da1bc52b347c6bc63adcfe162ebb27948d36dc754e330c4032122a94b215d


File details

Details for the file pyperf-2.3.1-py2.py3-none-any.whl.

File metadata

  • Download URL: pyperf-2.3.1-py2.py3-none-any.whl
  • Upload date:
  • Size: 88.9 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/32.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.62.3 importlib-metadata/4.10.1 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.10

File hashes

Hashes for pyperf-2.3.1-py2.py3-none-any.whl:

  • SHA256: d4063342373b18fcfa4d28b8f9fd1443012eb8db04b5cb06c1936c0489e2456e
  • MD5: 228cd2ace0f6da1e7da6a4389da40c7d
  • BLAKE2b-256: 793144baf3271db7300a1eb2bd5b6dc9c5e04a4ea6c1bfd244878864d15238c3

