A utility for Automated BEnchmark Distribution

Project description

Abed is an automated system for benchmarking machine learning algorithms. It is designed for experiments that run multiple methods on multiple datasets with multiple parameter settings, and it automatically processes the resulting result files into result tables. Abed was built for the Dutch LISA supercomputer, but it should work on any Torque compute cluster.

Abed was created to automate the tedious work involved in setting up proper benchmarking experiments. It also removes much of the hassle by collecting the entire experimental setup in a single configuration file. A core feature of Abed is that it is agnostic to the language in which the tested methods are written.
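To illustrate the idea of a single configuration file driving a full benchmark grid, here is a minimal sketch. The variable names (METHODS, DATASETS, PARAMS) and the expansion logic are hypothetical, not Abed's actual settings format:

```python
# Hypothetical sketch: expanding a single configuration into one task per
# (method, dataset, parameter setting) combination. The names below are
# illustrative only, not Abed's real configuration variables.
from itertools import product

METHODS = ["svm", "random_forest"]
DATASETS = ["iris", "wine"]
PARAMS = {
    "svm": [{"C": 1.0}, {"C": 10.0}],
    "random_forest": [{"n_trees": 100}],
}

def expand_tasks(methods, datasets, params):
    """Build one task dict for every method/dataset/parameter combination."""
    tasks = []
    for method, dataset in product(methods, datasets):
        for p in params[method]:
            tasks.append({"method": method, "dataset": dataset, "params": p})
    return tasks

tasks = expand_tasks(METHODS, DATASETS, PARAMS)
# svm: 2 param sets x 2 datasets = 4 tasks; random_forest: 1 x 2 = 2 tasks
print(len(tasks))  # prints 6
```

Each resulting task is independent, which is what makes it natural to distribute the grid over a compute cluster.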

Abed can create output tables either as simple plain-text files or as HTML pages using the excellent DataTables plugin. To support offline operation, the necessary DataTables files are packaged with Abed.
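The plain-text output might look like a small grid of scores, one row per method and one column per dataset. The following is only an illustration of that kind of table, not Abed's actual output code; the result values are made up:

```python
# Illustrative sketch (not Abed's real implementation): render benchmark
# results as a simple plain-text table.
results = {
    "svm": {"iris": 0.97, "wine": 0.93},
    "random_forest": {"iris": 0.95, "wine": 0.96},
}

def text_table(results):
    """Format a {method: {dataset: score}} mapping as an aligned text table."""
    datasets = sorted(next(iter(results.values())))
    header = "method".ljust(16) + "".join(d.ljust(10) for d in datasets)
    rows = [header, "-" * len(header)]
    for method, scores in sorted(results.items()):
        rows.append(method.ljust(16) +
                    "".join(f"{scores[d]:<10.2f}" for d in datasets))
    return "\n".join(rows)

print(text_table(results))
```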

Documentation

Abed's documentation is available online.

Screenshots

Rank plots in Abed
Result tables in Abed
Result tables in Abed (time)

Notes

The current version of Abed is usable, but it is still considered beta software: it is not yet fully documented and some robustness improvements are planned. For a similar, more mature project that works with R, see BatchExperiments.

Download files

Download the file for your platform.

Filename (size)             File type   Python version
abed-0.0.3.tar.gz (1.1 MB)  Source      None
