Advanced benchmarking tool
An advanced benchmarking tool written in python3 that supports binary randomization and the generation of visually appealing reports.
It runs on sufficiently new Linux systems and, in a rudimentary way, on Apple's OS X.
The development started as part of my bachelor thesis in October 2015. The bachelor thesis (written in German) can be found here.
temci allows you to easily measure the execution time (and other properties) of programs and compare them against each other, resulting in a pretty HTML5-based report. Furthermore, it sets up the environment to ensure benchmarking results with low variance and uses a form of assembly randomization to reduce the effect of caching.
Installing temci on Linux systems should be possible by just installing it via pip3:
pip3 install temci
To simplify using temci, enable tab completion for your favorite shell (bash and zsh are supported) by adding the following line to your bash or zsh configuration file:
source `temci_completion [bash|zsh]`
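For zsh, for example, this would typically mean adding the following line to your .zshrc:

source `temci_completion zsh`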
If you can't install temci via pip3, you can still use it to benchmark programs by calling temci/scripts/run instead of temci (execute this file directly with your favorite Python 3 interpreter if that interpreter isn't located at /usr/bin/python3).
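As a rough sketch (assuming you are in the repository root and use the same subcommand as in the examples below), such an invocation could look like this:

./temci/scripts/run short exec -wd "ls" --runs 100 --out out.yaml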
Side note: This tool needs root privileges for some benchmarking features. If you’re not root, it will not fail, but it will warn you and disable the features.
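To make use of these features, run temci as root, for example by prefixing a command with sudo (assuming temci is on root's PATH):

sudo temci short exec -wd "ls" --runs 100 --out out.yaml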
There are currently two good ways to explore the features of temci:
1. Play around with temci using the provided tab completion for zsh (preferred) and bash
2. Look into the annotated settings file (it can be generated via temci init settings)
A user guide is planned. Until it's finished, consider reading the code documentation.
Documentation of all command line commands and options is given in the documentation for the cli module.
Documentation of all available run drivers, runners and run driver plugins is given in the documentation for the run module.
The status of the documentation is given in the section Status of the documentation.
Or: How to benchmark a simple program called ls (a program here is any valid shell code that is executable by /bin/sh)
There are two ways to benchmark a program: a short one and a long one.
The short one first: Just type:
temci short exec -wd "ls" --runs 100 --out out.yaml
The long one now: just type:
temci init run_config
This lets you create a temci run config file using a textual interface (if you don't want to create it entirely by hand). To actually run the configuration, type:
temci exec [file you stored the run config in] --out out.yaml
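For orientation, a run config for ls created this way might look roughly like the following (a sketch only; check the generated file or the annotated settings file for the exact keys):

    - attributes:
        description: ls
      run_config:
        run_cmd: ls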
Now you have a YAML result file that has the following structure:
    - attributes:
        description: ls
      data:
        …
        task-clock:
          - [first measurement for property task-clock]
          - …
        …
You can either create a report by parsing the YAML file yourself or by using the temci report tool. To use the latter, type:
temci report out.yaml --reporter html2 --html2_out ls_report
Now you have a report on the performance of ls.
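If you'd rather parse the result file yourself, a minimal sketch in Python could look like the following (assuming the structure shown above; the measured properties depend on your configuration):

    import statistics

    import yaml  # PyYAML

    # Load the temci result file produced above
    with open("out.yaml") as f:
        results = yaml.safe_load(f)

    # Print the mean of every measured property for each benchmarked program
    for entry in results:
        description = entry["attributes"]["description"]
        for prop, measurements in entry["data"].items():
            print(f"{description} / {prop}: mean = {statistics.mean(measurements):.3f}")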
The problem in naming programs is that most good program names are already taken. A good program or project name has (in my opinion) the following properties:
- it shouldn't be used on the relevant platforms (in this case: GitHub and PyPI)
- it should be short (no one wants to type long program names)
- it should be pronounceable
- it should have at least something to do with the program

temci is such a name. It's Lojban for time (i.e. the time duration between two moments or events).