Python benchmark tool
Project description
bma-benchmark-py
Python benchmark library
Installation
```
pip3 install bma_benchmark
```
Usage
```python
from bma_benchmark import benchmark


@benchmark(base=True)
def benchmark1():
    # do some job
    1 + 1


@benchmark(name='benchmark2')
def my_function2():
    # do some job
    1 * 1


benchmark.run()
# equal to:
# benchmark.run(
#     number=100, precision=3, units='s', print_result=True, sort_result='desc')
```
Decorator arguments
- name: override the benchmark name (default: the function name)
- base: use as the base benchmark to measure the difference from (True/False)
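For illustration, both arguments can be combined on one decorator; the snippet below is a minimal sketch based only on the arguments listed above (the function name and body are placeholders):

```python
from bma_benchmark import benchmark

# register under a custom name and mark it as the comparison base
@benchmark(name='baseline', base=True)
def reference_case():
    sum(range(100))  # placeholder workload
```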
Benchmark.run() arguments
- number: number of times each function is executed
- precision: number of digits after the decimal point
- units: units for min/max/avg values (s, ms, us or ns)
- print_result: automatically print the results (True/False)
- sort_result: sort results in ascending (asc/a) or descending (desc/d) order; None to keep them unsorted
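Putting these together, a call that overrides the defaults might look like the following sketch (the values are purely illustrative):

```python
# run each benchmark 1000 times, report min/max/avg in microseconds,
# keep two digits after the decimal point and sort ascending
benchmark.run(number=1000, precision=2, units='us', sort_result='asc')
```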
Calling sub-processes
The module can launch sub-processes to measure the difference between different library versions installed in different virtual environments, or between different versions of the Python interpreter.
Primary script
```python
from bma_benchmark import benchmark

# define some local benchmarks if required
@benchmark
def bench1():
    pass

benchmark.append_sub('./sub1.py')
benchmark.run()
```
Secondary scripts
Make sure the scripts have execute permission (chmod +x):
```python
#!/path/to/some/other/python
from bma_benchmark import benchmark

# define benchmarks
@benchmark
def bench2():
    pass

benchmark.sub()
```
The secondary scripts can use the "base" argument in a decorator as well. The "sub()" method writes the benchmark results to stdout as JSON so the primary script can collect them, which means secondary benchmarks must not print anything else.
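As a sketch, a secondary script that marks its own benchmark as the base could look like this (the shebang path and function body are placeholders):

```python
#!/path/to/venv/bin/python
from bma_benchmark import benchmark

# this benchmark becomes the base the others are compared against
@benchmark(base=True)
def bench_reference():
    pass

# results go to stdout as JSON for the primary script to collect
benchmark.sub()
```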
Multiple benchmarks in the same script
"benchmark" is the default benchmark object. Custom objects can be created from the "Benchmark class".
```python
from bma_benchmark import Benchmark

my_bench = Benchmark()

@my_bench
def bench():
    pass
```
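Each object keeps its own set of registered benchmarks. Assuming a custom object exposes the same run() method as the default "benchmark" object, the script would finish with:

```python
# run only the benchmarks registered on this object
my_bench.run()
```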
Storing results from multiple benchmarks
Run a benchmark script with the "OUT" OS environment variable set:
```
OUT=b1.json python script.py
```
The benchmark result will be saved into the "b1.json" file. Afterwards, results from multiple benchmarks can be combined into a single table with the "bma-benchmark" console tool:
```
bma-benchmark b1.json b2.json
```
The tool automatically prefixes benchmark names with "FILE_NAME.". It also has options to specify precision, units, etc. (run "bma-benchmark -h" to get the list of all options).
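For example, to compare two interpreters, the same script could be run under each of them with a different output file and the results merged afterwards (the interpreter names below are illustrative only):

```
OUT=b1.json python3.9 script.py
OUT=b2.json python3.10 script.py
bma-benchmark b1.json b2.json
```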