
A very simple and basic benchmarking tool for timing functions.

Project description

basicbenchmark


basicbenchmark is a simple benchmarking tool for timing Python callables.

It's so basic that you probably don't need it. But if it saves you a moment, here it is.

It runs a callable a number of times and returns the average time it took. Optionally, you can also get some basic statistics: fastest and slowest run time, and the standard deviation.

No dependencies, just a simple wrapper around Python standard library timeit.Timer and time.perf_counter.
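As a rough illustration of what such a wrapper looks like, the sketch below uses only the standard library's timeit.Timer (whose default timer is time.perf_counter). The function and names here are our own, not the package's actual implementation:

```python
import statistics
import timeit

def time_callable(func, n_runs=1000):
    # timeit.Timer measures with time.perf_counter by default.
    # repeat(repeat=n_runs, number=1) yields one measurement per run.
    times = timeit.Timer(func).repeat(repeat=n_runs, number=1)
    return {
        "mean": statistics.mean(times),
        "stdev": statistics.stdev(times),
        "min": min(times),
        "max": max(times),
    }

stats = time_callable(lambda: 2 ** 10, n_runs=100)
```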

Installation

pip install basicbenchmark

Usage

The package provides a decorator function, @basicbenchmark, that can be used to time functions each time they are called.

In addition, two functions can be called directly: benchmark and benchmark_stats. These are useful for keeping benchmarks separate from your source code and for storing the timing results in variables.

Decorator

Add the @basicbenchmark decorator to a function to time the execution and write the results to the console.

from basicbenchmark import basicbenchmark

@basicbenchmark
def my_function(x, y=2):
    return x**y

result = my_function(x=2) # result is 4

my_function: 1 runs, mean time per run: 2.50 µs.

You can specify the number of runs by passing the n_runs argument to the decorator. It will then also print the fastest and slowest run times, along with the standard deviation.

@basicbenchmark(n_runs=100_000)
def my_function(x, y=2):
    return x**y

result = my_function(x=2) # result is 4
my_function: 100,000 runs, mean time per run: 385.44±700.90 ns.
	Fastest run: 200.00 ns. Slowest run: 123700.00 ns.

You can also pass pre_run=True to the decorator, which runs the function once before the benchmark starts. This is useful for correctly timing JIT-compiled functions, for example.
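The idea behind pre_run can be sketched in plain Python. The helper below is hypothetical, not the package's internals: call the function once untimed, then run the benchmark.

```python
import timeit

def benchmark_with_warmup(func, n_runs=100, pre_run=False):
    # pre_run: one untimed warm-up call, so one-off costs
    # (JIT compilation, caching) don't skew the measurements.
    if pre_run:
        func()
    times = timeit.Timer(func).repeat(repeat=n_runs, number=1)
    return sum(times) / n_runs  # mean time per run, in seconds

avg = benchmark_with_warmup(lambda: sum(range(1000)), pre_run=True)
```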

Benchmarking functions

The two functions that can be called directly, benchmark and benchmark_stats, are used to time a callable and return the timing results.

The benchmark function takes a callable, optional arguments and keyword arguments, runs the callable a number of times, and returns only the average run time in seconds. Because it is a wrapper around timeit.Timer, it supports auto-ranging the number of runs to get an accurate result, or you can specify the number of runs yourself with the n_runs argument.

The benchmark_stats function takes the same input arguments as benchmark, but instead returns a dictionary with the timing results and the function's return value. The timing results include the mean time, the standard deviation, and the fastest and slowest times, all in seconds. This is useful if you want to know more about the distribution of run times, not just the average; in some cases the fastest time is of more interest than the mean.

Both functions print the results to the console in more readable time units; this can be disabled by setting the print_result argument to False. Both also support a pre_run argument, which runs the callable once before the benchmark. This is useful, for example, when timing JIT-compiled functions.

from basicbenchmark import benchmark, benchmark_stats

def my_function(x, y=2):
    return x**y

The most basic way to time a function is benchmark, which uses auto-ranging to decide the number of runs (when the n_runs argument is not passed) and accepts arguments and keyword arguments to forward to the callable. The example below times my_function with x=2 and y=2 and assigns the average time to avg_time:

avg_time = benchmark(my_function, args=(2,))

The result is printed to the console:

my_function: 5,000,000 runs, mean time per run: 72.37 ns.

To also get some basic statistics, use the benchmark_stats function. It does not support automatically determining the number of runs, so you need to specify n_runs yourself. The example below passes keyword arguments to the callable:

benchmark_result = benchmark_stats(my_function, kwargs={'x': 100, 'y': 3}, n_runs=100_000)

It returns the following dictionary:

benchmark_result
{'return_value': 1000000,
 'mean': 2.2017200006757777e-07,
 'stdev': 1.883448346591418e-05,
 'min': 9.999985195463523e-08,
 'max': 0.005955899999662506}

And prints to the console:

my_function: 100,000 runs, mean time per run: 220.17±18834.48 ns.
	Fastest run: 100.00 ns. Slowest run: 5955900.00 ns.
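Since all timing values in the returned dictionary are in seconds, converting them to readable units is straightforward. The helper below is our own, not part of the package:

```python
def timings_to_ns(result):
    # Scale every timing entry from seconds to nanoseconds,
    # dropping the callable's return value.
    return {k: v * 1e9 for k, v in result.items() if k != "return_value"}

# Using the example result from above:
benchmark_result = {"return_value": 1000000,
                    "mean": 2.2017200006757777e-07,
                    "stdev": 1.883448346591418e-05,
                    "min": 9.999985195463523e-08,
                    "max": 0.005955899999662506}
ns = timings_to_ns(benchmark_result)
```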

Project details


Download files

Download the file for your platform.

Source Distribution

basicbenchmark-0.2.0.tar.gz (10.8 kB)

Uploaded Source

Built Distribution

basicbenchmark-0.2.0-py3-none-any.whl (7.2 kB)

Uploaded Python 3

File details

Details for the file basicbenchmark-0.2.0.tar.gz.

File metadata

  • Download URL: basicbenchmark-0.2.0.tar.gz
  • Upload date:
  • Size: 10.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for basicbenchmark-0.2.0.tar.gz
Algorithm Hash digest
SHA256 a1dfde90d1b5e555532aaffb8214b2b8c8e56b71be3c380c817a307e58bd4018
MD5 318af5317eb893cc5965595cbc183570
BLAKE2b-256 0db6e2920a26b9e943e586a595c0e55e0d9f2ce3044d91e84f1453be6ccae4b7


Provenance

The following attestation bundles were made for basicbenchmark-0.2.0.tar.gz:

Publisher: publish_pypi.yml on rhkarls/basicbenchmark

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file basicbenchmark-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: basicbenchmark-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 7.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for basicbenchmark-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 659a3e77d13649f9cb51561cb2e8eb700608c7e9018e42315b7cb8d1cb937caf
MD5 d6fd796cc44c1c31aa539114ab3851ba
BLAKE2b-256 56af4264378380e36f52ab110f4a4ff37ac91856fb88a2d1f0762099060b35e3


Provenance

The following attestation bundles were made for basicbenchmark-0.2.0-py3-none-any.whl:

Publisher: publish_pypi.yml on rhkarls/basicbenchmark

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
