Benchmarking utilities
This library is intended to make it easy to write small benchmarks and view the results.
Usage
See examples/basic.py for a full working example.
    from pyarkbench import Benchmark, Timer, default_args

    class Basic(Benchmark):
        def benchmark(self):
            with Timer() as m1:
                # Do some stuff
                pass
            with Timer() as m2:
                # Do some other stuff
                pass
            return {
                "Metric 1 (ms)": m1.ms_duration,
                "Metric 2 (ms)": m2.ms_duration,
            }

    if __name__ == '__main__':
        # Initialize the benchmark and use the default command line args
        bench = Basic(*default_args.bench())

        # Run the benchmark (runs your `benchmark` method many times: some warmup
        # runs, then some runs where the timer results are saved)
        results = bench.run()

        # View the raw results
        bench.print_results(results)

        # See aggregate statistics about the results
        bench.print_stats(results, stats=default_args.stats())

        # Save the results to a JSON file named after the benchmark class
        bench.save_results(results, out_dir=default_args.save())
API
Benchmark
Benchmark(self, num_runs: int = 10, warmup_runs: int = 1, quiet: bool = False, commit: pybench.benchmarking_utils.Commit = None)
Benchmarks should extend this class and implement the benchmark method.
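For example, a subclass can be constructed directly with explicit settings rather than the command line defaults. A minimal sketch; the time.sleep workload is just a stand-in:

    import time

    from pyarkbench import Benchmark, Timer

    class Sleepy(Benchmark):
        def benchmark(self):
            with Timer() as t:
                time.sleep(0.01)
            return {"sleep (ms)": t.ms_duration}

    # 5 recorded runs after 2 warmup runs, with quiet output
    bench = Sleepy(num_runs=5, warmup_runs=2, quiet=True)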
benchmark
Benchmark.benchmark(self) -> Dict[str, float]
This method must be implemented in your subclass and return a dictionary mapping each metric name to the time captured for that metric.
run
Benchmark.run(self) -> Dict[str, Any]
This is the entry point into your benchmark. It first runs benchmark() self.warmup_runs times without recording the resulting timings, then runs benchmark() self.num_runs times and returns those timings.
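For example, continuing with the Basic class from the usage example above:

    # benchmark() is called 3 times for warmup (timings discarded), then 10
    # times with the timings recorded and returned
    bench = Basic(num_runs=10, warmup_runs=3)
    results = bench.run()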
print_results
Benchmark.print_results(self, results)
Pretty-print the raw results by dumping them as JSON.
print_stats
Benchmark.print_stats(self, results, stats=('mean', 'median', 'variance'))
Collects and prints statistics over the results.
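For example, with the results returned by run() above, a subset of the default statistics can be requested (this sketch assumes the accepted names are the ones shown in the default tuple):

    # Report only the mean and median over the recorded runs
    bench.print_stats(results, stats=("mean", "median"))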
save_results
Benchmark.save_results(self, results, out_dir, filename=None)
Save the results gathered from benchmarking, along with metadata about the commit, to a JSON file named after the type of self.
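For example, continuing from the usage example (the directory and file names here are placeholders):

    # Writes a JSON file named after the benchmark class into ./benchmark_results
    bench.save_results(results, out_dir="benchmark_results")

    # Or choose the file name explicitly
    bench.save_results(results, out_dir="benchmark_results", filename="basic_run.json")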
cleanup
cleanup()
Churn through a bunch of data, run the garbage collector, and sleep for a second to "reset" the Python interpreter.
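cleanup is a module-level function. Assuming it is importable from the top-level package like the other utilities, it can be called between unrelated measurements; the workload here is a hypothetical stand-in:

    from pyarkbench import Timer, cleanup

    def workload():
        # Hypothetical workload: build and sum a large list of strings
        return sum(len(str(i)) for i in range(100_000))

    with Timer() as first:
        workload()

    # Churn data, run the GC, and sleep for a second before the next measurement
    cleanup()

    with Timer() as second:
        workload()

    print(first.ms_duration, second.ms_duration)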
default_args
default_args(self, /, *args, **kwargs)
Adds a bunch of default command line arguments to make orchestrating benchmark runs more convenient. To see all the options, call default_args.init() and run the script with the --help option.
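A minimal sketch of surfacing those options (how init() interacts with the helpers below is not specified here; this only mirrors the description above):

    from pyarkbench import default_args

    # Register the default command line arguments; running the script with
    # --help afterwards lists all of them
    default_args.init()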
bench
default_args.bench()
Default arguments to be passed to a Benchmark object.
stats
default_args.stats()
Default arguments to be passed to the Benchmark.print_stats method.
save
default_args.save()
Default arguments to be passed to the Benchmark.save_results method.
Timer
Timer(self, /, *args, **kwargs)
Context manager object that times the execution of the statements it manages.
- self.start: start time
- self.end: end time
- self.ms_duration: (end - start) / 1000 / 1000, i.e. the elapsed time in milliseconds
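For example, used on its own outside of a Benchmark subclass:

    from pyarkbench import Timer

    with Timer() as t:
        total = sum(i * i for i in range(1_000_000))

    # Populated once the with-block exits
    print("start:", t.start)
    print("end:", t.end)
    print("elapsed (ms):", t.ms_duration)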
Commit
Commit(self, time, pr, hash)
Wrapper around a git commit
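A Commit can be passed to a Benchmark so that save_results can record which commit produced the numbers. A sketch with placeholder values, assuming Commit is importable from the top-level package like the other classes (the expected formats of time, pr, and hash are not documented here):

    from pyarkbench import Commit

    # Placeholder metadata; real values would come from your git history or CI
    commit = Commit(time="2019-12-01T12:00:00", pr="1234", hash="abcdef0")

    # Basic is the subclass from the usage example above
    bench = Basic(commit=commit)
    results = bench.run()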
Developer Notes
To build this package locally, check it out and run

    python setup.py develop

To rebuild these docs, run

    pip install pydoc-markdown
    pydocmd simple pybench.Benchmark+ pybench.cleanup pybench.default_args+ pybench.Timer pybench.Commit