
A pytest-like framework for benchmarking


pybench


What is it?

pybench is a simple benchmarking framework that mimics pytest syntax. Simply create files whose names begin with "bench_", and pybench will discover those files and benchmark every function whose name starts with "bench_". Internally, pybench relies on Python's standard timeit library to produce benchmark statistics. These statistics, along with metadata such as your platform, available CPUs, RAM, project version, commit id, and more, are stored in a parquet file for further analysis. That way, you have access to the raw data needed to track performance over time and across commits, identify regressions, and run any other analysis you may want.
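For example, you can load the raw results with polars (already a dependency). A minimal sketch, assuming the default output locations described under Quickstart below; the exact column names are not documented here, so inspect the schema before relying on any of them:

import polars as pl

# Most recent run (written to benchmarks/results by default).
latest = pl.read_parquet("benchmarks/results/results.parquet")
print(latest.schema)  # check the actual column names first

# All historical runs; the folder is Hive-partitioned (by commit by default).
history = pl.scan_parquet(
    "benchmarks/results/historical/**/*.parquet",
    hive_partitioning=True,
).collect()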

Usage

Dependencies

  • polars
  • toml
  • tqdm

Installing

The easiest way to install cli-pybench is from PyPI using pip:

pip install cli-pybench

Quickstart

Installing the library exposes a pybench command in your terminal. Although the benchmark directory is configurable, the convention is to create a folder called "benchmarks" in your project root. Then, create a file whose name is prefixed with "bench_". In that file, write a function whose name starts with "bench_".

def bench_my_sum():
    1 + 1

Then, simply run pybench from your terminal! It should look something like this:

starting benchmark session ...
default config: Config(benchpath='benchmarks', repeat=30, number=1, warmups=0, garbage_collection=False)
running on Linux-5.15.123.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 with x86_64, available cpus: 16, RAM: 10.05GB

/home/cangyuanli/Documents/Projects/cli-pybench/benchmarks/bench_pybench.py
100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 184.31it/s]

Then, a "results.parquet" file representing your most recent run will appear in your "benchmarks/results" folder. In addition, a Hive-partitioned folder will be created (by default partitioned by commit) in "benchmarks/results/historical". If you want to change your configuration, you can do it globally through your "pyproject.toml" file like so:

[tool.pybench]
repeat = 100
number = 10
warmups = 1

To learn more about the repeat and number parameters, see the documentation for timeit.repeat here: https://docs.python.org/3/library/timeit.html.
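In short, number is how many times the benchmarked statement executes inside a single timing loop, and repeat is how many timing loops are measured, each producing one total. A rough sketch of the equivalent direct timeit call (an illustration of the semantics, not pybench's internals):

import timeit

# repeat=100 timing loops, each executing the statement number=10 times;
# timeit.repeat returns one total (in seconds) per loop.
totals = timeit.repeat("1 + 1", repeat=100, number=10)

# Divide by `number` for per-execution time; the minimum is usually
# the most stable estimate.
print(min(totals) / 10)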

You can also change your configuration for a specific function through the pybench.config decorator. Here's an example:

import pybench

@pybench.config(repeat=1_000, number=100)
def bench_my_sum():
    1 + 1

pybench provides three other decorators. The first is pybench.skipif, which skips the function if its argument evaluates to True. This is useful, for example, when you have a long-running benchmark that you do not want to run frequently. Of course, all decorators can be combined.

import pybench

@pybench.config(repeat=1_000, number=100)
@pybench.skipif(True)
def bench_my_sum():
    1 + 1
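In practice, the skipif condition is usually computed rather than hard-coded. A small sketch gating a slow benchmark behind an environment variable (the RUN_SLOW name is just an illustration, not a pybench convention):

import os

import pybench

# Runs only when RUN_SLOW is set in the environment.
@pybench.skipif("RUN_SLOW" not in os.environ)
def bench_expensive_sum():
    sum(range(1_000_000))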

Another is pybench.metadata. This attaches arbitrary per-function metadata.

import pybench

@pybench.metadata(group="add")
def bench_my_sum():
    1 + 1

@pybench.metadata(group="add")
def bench_other_sum():
    1 + 1
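Because the results land in a parquet file, attached metadata is handy for slicing the data afterwards. A hedged sketch, assuming the key above surfaces as a "group" column in the results file (verify against your own file's schema):

import polars as pl

results = pl.read_parquet("benchmarks/results/results.parquet")

# Pull out just the benchmarks tagged with group="add" (column name assumed).
print(results.filter(pl.col("group") == "add"))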

The final decorator is pybench.parametrize, which benchmarks your function for each combination in a given set of inputs. There are two syntaxes for this. The first is the dictionary syntax.

import pybench

@pybench.parametrize({"a": [1, 2], "b": [5, 8, 9]})
def bench_my_sum(a, b):
    a + b

This will benchmark bench_my_sum for the Cartesian product of "a" and "b", i.e. every (a, b) pair. Users of pytest may be more familiar with the second syntax.

import pybench

@pybench.parametrize(("a", "b"), [(1, 2), (3, 4)])
def bench_my_sum(a, b):
    a + b
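To make the difference concrete: the dictionary syntax expands to the Cartesian product of the value lists, while the tuple syntax runs exactly the pairs you list. A quick standard-library illustration of what the dictionary example above expands to:

from itertools import product

params = {"a": [1, 2], "b": [5, 8, 9]}

# The dictionary syntax yields 2 * 3 = 6 combinations.
for a, b in product(params["a"], params["b"]):
    print((a, b))  # (1, 5), (1, 8), (1, 9), (2, 5), (2, 8), (2, 9)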
