Benchmark utils
Utilities for benchmarking: a wrapper over Python's timeit.
Tested on Python 3.7 - 3.12.
Install
Install from PyPI:
pip install benchmark_utils
Or install from the GitHub repo:
pip install git+https://github.com/ayasyrev/benchmark_utils.git
Basic use
Let's benchmark some (dummy) functions.
from time import sleep

def func_to_test_1(sleep_time: float = 0.1, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)

def func_to_test_2(sleep_time: float = 0.11, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)
Let's create a benchmark.
from benchmark_utils import Benchmark

bench = Benchmark(
    [func_to_test_1, func_to_test_2],
)
bench
output
Benchmark(func_to_test_1, func_to_test_2)
Now we can benchmark those functions. In the report, the percentage column shows each function's difference from the best (fastest) result.
# we can run bench.run() or just:
bench()
output
Func name | Sec / run
func_to_test_1: 0.10 0.0%
func_to_test_2: 0.11 -8.9%
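Benchmark is a wrapper over the standard library's timeit, so a rough stdlib equivalent of the run above looks like this (a hand-rolled sketch for illustration, not benchmark_utils API; the repeat count of 5 is arbitrary):

import timeit

# time each function and report seconds per run, like the table above
for func in (func_to_test_1, func_to_test_2):
    sec_per_run = timeit.timeit(func, number=5) / 5
    print(f"{func.__name__}: {sec_per_run:.2f}")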
We can run it again: all functions or only some of them, excluding some if needed, and change the number of repeats.
bench.run(num_repeats=10)
output
Func name | Sec / run
func_to_test_1: 0.10 0.0%
func_to_test_2: 0.11 -9.3%
After a run, we can print the results: sorted or not, reversed, and compared with the best result or not.
bench.print_results(reverse=True)
output
Func name | Sec / run
func_to_test_2: 0.11 0.0%
func_to_test_1: 0.10 10.2%
We can add functions to a benchmark as a list of functions (or partials) or as a dictionary: {"name": function}.
from functools import partial

bench = Benchmark(
    [
        func_to_test_1,
        partial(func_to_test_1, 0.12),
        partial(func_to_test_1, sleep_time=0.11),
    ]
)
bench
output
Benchmark(func_to_test_1, func_to_test_1(0.12), func_to_test_1(sleep_time=0.11))
bench.run()
output
Func name | Sec / run
func_to_test_1: 0.10 0.0%
func_to_test_1(sleep_time=0.11): 0.11 -8.8%
func_to_test_1(0.12): 0.12 -16.4%
bench = Benchmark(
    {
        "func_1": func_to_test_1,
        "func_2": func_to_test_2,
    }
)
bench
output
Benchmark(func_1, func_2)
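Dictionary values can be any callable, so names combine naturally with partial for labeling parameter sweeps. A small sketch (the names "fast" and "slow" are illustrative, and a dict of partials is assumed to behave like a dict of plain functions):

bench = Benchmark(
    {
        "fast": partial(func_to_test_1, 0.05),
        "slow": partial(func_to_test_1, 0.2),
    }
)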
When we run a benchmark script in the terminal, we get a pretty progress display thanks to rich. See example_1.py in the example folder.
BenchmarkIter
With BenchmarkIter we can benchmark functions over iterables, for example reading a list of files or running functions with different arguments.
def func_to_test_1(x: int) -> None:
    """simple 'sleep' func for test"""
    sleep(0.01)

def func_to_test_2(x: int) -> None:
    """simple 'sleep' func for test"""
    sleep(0.015)
dummy_params = list(range(10))
from benchmark_utils import BenchmarkIter
bench = BenchmarkIter(
    func=[func_to_test_1, func_to_test_2],
    item_list=dummy_params,
)
bench()
output
Func name | Items/sec
As with Benchmark, we can run it again: all functions or only some of them, excluding some if needed, and change the number of repeats.
And we can limit the number of items with the num_samples argument:
bench.run(num_samples=5)
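Roughly, the items/sec number is just items processed divided by elapsed time. A minimal stdlib sketch of that idea (for illustration only, not BenchmarkIter's actual internals):

from time import perf_counter

# hand-rolled items/sec over the first 5 items, as num_samples=5 would limit
start = perf_counter()
for item in dummy_params[:5]:
    func_to_test_1(item)
elapsed = perf_counter() - start
print(f"items/sec: {5 / elapsed:.1f}")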
Multiprocessing
By default we run functions in one thread.
But we can use multiprocessing with the multiprocessing=True argument:
bench.run(multiprocessing=True)
It will use all available CPU cores.
And we can use the num_workers argument to limit the number of CPU cores used:
bench.run(multiprocessing=True, num_workers=2)
output
Func name | Items/sec
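For comparison, the same fan-out with the standard library looks roughly like this (a stdlib sketch of the idea, not what benchmark_utils does internally):

from multiprocessing import Pool

# map the function over all items using 2 worker processes
if __name__ == "__main__":
    with Pool(processes=2) as pool:
        pool.map(func_to_test_1, dummy_params)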