
Benchmarking utilities

Project description

This library is intended to make it easy to write small benchmarks and view the results.


See examples/ for a full working example.

from pyarkbench import Benchmark, Timer, default_args

class Basic(Benchmark):
    def benchmark(self):
        with Timer() as m1:
            # Do some stuff
            pass

        with Timer() as m2:
            # Do some other stuff
            pass

        return {
            "Metric 1 (ms)": m1.ms_duration,
            "Metric 2 (ms)": m2.ms_duration,
        }

if __name__ == '__main__':
    # Initialize the benchmark and use the default command line args
    bench = Basic(*default_args.bench())

    # Run the benchmark (runs your `benchmark` method several times: first
    # some warmup runs, then the runs whose timer results are saved)
    results = bench.run()

    # View the raw results
    bench.print_results(results)

    # See aggregate statistics about the results
    bench.print_stats(results, stats=default_args.stats())

    # Save the results to a JSON file named after the benchmark class
    bench.save_results(results, out_dir="benchmark_results")



Benchmark(self, num_runs: int = 10, warmup_runs: int = 1, quiet: bool = False, commit: pybench.benchmarking_utils.Commit = None)

Benchmarks should extend this class and implement the benchmark method.


Benchmark.benchmark(self) -> Dict[str, float]

This method must be implemented in your subclass; it returns a dictionary mapping each metric name to the time captured for that metric.

Benchmark.run(self) -> Dict[str, Any]

This is the entry point into your benchmark. It will first run benchmark() self.warmup_runs times without using the resulting timings, then it will run benchmark() self.num_runs times and return the resulting timings.
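The warmup-then-measure flow can be sketched as follows (the function and names below are illustrative, not pyarkbench internals):

```python
# Illustrative sketch of run()'s warmup-then-measure loop; the real
# implementation lives in pyarkbench and may differ in detail.
def run_loop(benchmark, warmup_runs=1, num_runs=10):
    for _ in range(warmup_runs):
        benchmark()  # warmup: timings are discarded
    # measured runs: each call returns a dict of metric name -> time
    return [benchmark() for _ in range(num_runs)]

timings = run_loop(lambda: {"Metric 1 (ms)": 1.0}, warmup_runs=2, num_runs=3)
print(len(timings))  # 3 measured result dicts
```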


Benchmark.print_results(self, results)

Pretty print the raw results by JSON dumping them.


Benchmark.print_stats(self, results, stats=('mean', 'median', 'variance'))

Collects and prints statistics over the results.
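Conceptually, this groups the per-run metric dictionaries by metric name and applies each requested statistic; a hedged sketch using the standard-library statistics module (the helper name and output layout are assumptions, not pyarkbench's actual output):

```python
# Sketch of aggregating per-run metric dicts into mean/median/variance,
# roughly what print_stats does; the real output format may differ.
import statistics

def collect_stats(results, stats=("mean", "median", "variance")):
    metrics = {}
    for run in results:
        for name, value in run.items():
            metrics.setdefault(name, []).append(value)
    return {
        name: {s: getattr(statistics, s)(values) for s in stats}
        for name, values in metrics.items()
    }

runs = [{"Metric 1 (ms)": 1.0}, {"Metric 1 (ms)": 3.0}]
print(collect_stats(runs))
# {'Metric 1 (ms)': {'mean': 2.0, 'median': 2.0, 'variance': 2.0}}
```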


Benchmark.save_results(self, results, out_dir, filename=None)

Save the results gathered from benchmarking and metadata about the commit to a JSON file named after the type of self.
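A minimal sketch of that "JSON file named after the type of self" behavior, assuming a plain json.dump of the results (the helper below is hypothetical; pyarkbench's file schema and metadata handling may differ):

```python
# Hypothetical sketch of saving results to "<ClassName>.json" in out_dir.
import json
import os
import tempfile

def save_results(results, out_dir, class_name, filename=None):
    filename = filename or f"{class_name}.json"
    path = os.path.join(out_dir, filename)
    with open(path, "w") as f:
        json.dump(results, f, indent=2)
    return path

out = tempfile.mkdtemp()
path = save_results([{"Metric 1 (ms)": 1.2}], out, "Basic")
print(path)  # ends with Basic.json
```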



cleanup()

Churn through a bunch of data, run the garbage collector, and sleep for a second to "reset" the Python interpreter.
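A sketch of what such a reset step can look like, assuming it amounts to a garbage-collection pass plus a short sleep (the function below is illustrative, not pyarkbench's exact implementation):

```python
# Illustrative cleanup step: force a GC pass and pause briefly so
# background work settles before the next timed run.
import gc
import time

def cleanup(sleep_seconds=1.0):
    gc.collect()
    time.sleep(sleep_seconds)

cleanup(sleep_seconds=0.01)  # shortened sleep for demonstration
```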


default_args(self, /, *args, **kwargs)

Adds a bunch of default command line arguments to make orchestrating benchmark runs more convenient. To see all the options, call default_args.init() and run the script with the --help option.



default_args.bench()

Default arguments to be passed to a Benchmark object.



default_args.stats()

Default arguments to be passed to the Benchmark.print_stats method.


Default arguments to be passed to the Benchmark.save_results method


Timer(self, /, *args, **kwargs)

Context manager object that will time the execution of the statements it manages.

self.start - start time
self.end - end time
self.ms_duration - (self.end - self.start) / 1000 / 1000
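One way such a timer can be built, assuming nanosecond timestamps from time.perf_counter_ns (a minimal sketch, not pyarkbench's actual implementation):

```python
# Minimal Timer-style context manager sketch: records start/end in
# nanoseconds and converts the difference to milliseconds.
import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter_ns()
        return self

    def __exit__(self, *exc):
        self.end = time.perf_counter_ns()
        self.ms_duration = (self.end - self.start) / 1000 / 1000
        return False

with Timer() as t:
    sum(range(100_000))  # work being timed

print(f"took {t.ms_duration:.3f} ms")
```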


Commit(self, time, pr, hash)

Wrapper around a git commit
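A minimal way to model such a wrapper (a hypothetical stand-in, not pyarkbench's class) is a named tuple carrying the commit timestamp, PR number, and hash:

```python
# Hypothetical stand-in for the Commit wrapper described above.
from collections import namedtuple

Commit = namedtuple("Commit", ["time", "pr", "hash"])

c = Commit(time="2019-11-01T12:00:00", pr="1234", hash="abc123")
print(c.hash)  # abc123
```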

Developer Notes

To build this package locally, check it out and run

python setup.py develop

To rebuild these docs, run

pip install pydoc-markdown
pydocmd simple pybench.Benchmark+ pybench.cleanup pybench.default_args+ pybench.Timer pybench.Commit

Download files

Files for pyarkbench, version 1.0.1:

pyarkbench-1.0.1.tar.gz, 6.3 kB, source distribution
