
Benchmark utils

Utils for benchmarking - a wrapper over Python's timeit.


Tested on Python 3.10 - 3.14.

Install

Install from pypi:

```bash
pip install benchmark_utils
```

Or with uv:

```bash
uv pip install benchmark_utils
```

Or install from the github repo:

```bash
pip install git+https://github.com/ayasyrev/benchmark_utils.git
```

For development, use uv:

```bash
uv pip install -e .
```
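
A quick sanity check that the install worked (a minimal sketch, nothing more than importing the two classes used below):

```python
# verify the package is importable
from benchmark_utils import Benchmark, BenchmarkIter
```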

Basic use

Let's benchmark some (dummy) functions:

```python
from time import sleep


def func_to_test_1(sleep_time: float = 0.1, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)


def func_to_test_2(sleep_time: float = 0.11, mult: int = 1) -> None:
    """simple 'sleep' func for test"""
    sleep(sleep_time * mult)
```

Let's create a benchmark:

```python
from benchmark_utils import Benchmark

bench = Benchmark(
    [func_to_test_1, func_to_test_2],
)
```

```python
bench
```

output:

```
Benchmark(func_to_test_1, func_to_test_2)
```

Now we can benchmark these functions.

```python
# we can run bench.run() or just:
bench()
```

output:

```
 Func name  | Sec / run
func_to_test_1:   0.10 0.0%
func_to_test_2:   0.11 -9.1%
```

We can run it again: all functions, only some of them, or excluding some, and we can change the number of repeats (see the sketch after the example below).

```python
bench.run(num_repeats=10)
```

output:

```
 Func name  | Sec / run
func_to_test_1:   0.10 0.0%
func_to_test_2:   0.11 -9.1%
```
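
Running only some functions or excluding some looks roughly like this; the keyword names below are assumptions for illustration, not confirmed benchmark_utils API:

```python
# NOTE: the argument names below are assumed for illustration --
# check the benchmark_utils documentation for the exact signature.
# bench.run(func_name="func_to_test_1")   # assumed: run a single function
# bench.run(exclude=["func_to_test_2"])   # assumed: skip some functions
```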

After a run, we can print the results: sorted or not, reversed, and compared with the best result or not.

```python
bench.print_results(reverse=True)
```

output:

```
 Func name  | Sec / run
func_to_test_2:   0.11 0.0%
func_to_test_1:   0.10 10.0%
```

We can pass functions to a benchmark as a list of functions (or `functools.partial` objects) or as a dictionary: `{"name": function}`.

```python
from functools import partial

bench = Benchmark(
    [
        func_to_test_1,
        partial(func_to_test_1, 0.12),
        partial(func_to_test_1, sleep_time=0.11),
    ]
)
```

```python
bench
```

output:

```
Benchmark(func_to_test_1, func_to_test_1(0.12), func_to_test_1(sleep_time=0.11))
```
```python
bench.run()
```

output:

```
 Func name  | Sec / run
func_to_test_1:   0.10 0.0%
func_to_test_1(sleep_time=0.11):   0.11 -9.1%
func_to_test_1(0.12):   0.12 -16.7%
```
```python
bench = Benchmark(
    {
        "func_1": func_to_test_1,
        "func_2": func_to_test_2,
    }
)
```

```python
bench
```

output:

```
Benchmark(func_1, func_2)
```

When we run a benchmark script in a terminal, we get pretty progress output thanks to rich. Let's run example_1.py from the example folder:

[screenshot: example_1 run]

BenchmarkIter

With BenchmarkIter we can benchmark functions over an iterable, for example reading a list of files or running a function with different arguments.
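
For instance, a minimal sketch of comparing two file readers over a list of paths (the `data/*.txt` paths and the reader functions are hypothetical, for illustration only; the `func` and `item_list` arguments match the example below):

```python
from pathlib import Path

from benchmark_utils import BenchmarkIter


def read_bytes(path: Path) -> bytes:
    """read a file as raw bytes"""
    return path.read_bytes()


def read_text(path: Path) -> str:
    """read a file as text"""
    return path.read_text()


# hypothetical file list -- any list of items works
file_list = list(Path("data").glob("*.txt"))

file_bench = BenchmarkIter(
    func=[read_bytes, read_text],
    item_list=file_list,
)
file_bench()
```

The dummy example below shows the same pattern with sleep-based functions: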

```python
def func_to_test_1(x: int) -> None:
    """simple 'sleep' func for test"""
    sleep(0.01)


def func_to_test_2(x: int) -> None:
    """simple 'sleep' func for test"""
    sleep(0.015)


dummy_params = list(range(10))
```

```python
from benchmark_utils import BenchmarkIter

bench = BenchmarkIter(
    func=[func_to_test_1, func_to_test_2],
    item_list=dummy_params,
)
```

```python
bench()
```

output:

```
 Func name  | Items/sec
func_to_test_1:  97.93
func_to_test_2:  65.25
```

As with Benchmark, we can run it again: all functions, only some of them, or excluding some, and we can change the number of repeats. We can also limit the number of items with the `num_samples` argument:
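
```python
# limit the benchmark to 5 items from item_list
bench.run(num_samples=5)
```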

Multiprocessing

By default we run functions in one thread, but we can use multiprocessing with the `multiprocessing=True` argument, which will use all available CPU cores:
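
```python
# use all available CPU cores
bench.run(multiprocessing=True)
```

And we can limit the number of CPU cores used with the `num_workers` argument: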

```python
bench.run(multiprocessing=True, num_workers=2)
```

output:

```
 Func name  | Items/sec
func_to_test_1: 173.20
func_to_test_2: 120.80
```
