
Generate a benchmark website from a set of benchmark tests


GenBenchSite

What is GenBenchSite ▶️

GenBenchSite is a platform designed to compare the performance of different libraries and frameworks. It provides a detailed report of each library's speed, precision, and other parameters, making it easier for developers to choose the best library for their project.

You can also access this information, with extra details, at https://white-on.github.io/BenchSite/

GenBenchSite Briefly explained 📰

GenBenchSite is designed to automate the process of comparing and testing different libraries. To achieve this, the tests are written in configuration files, which are then given to GenBenchSite. The tests are executed in a controlled environment, and the results are analyzed and compiled into easy-to-read reports. These reports are output as structured HTML files and published on dedicated GitHub Pages, where users can see how the different libraries perform in a variety of scenarios. By automating the testing process, GenBenchSite saves developers time and effort when evaluating libraries, and helps them make informed decisions.

[Diagram: structure of the directory needed to create a benchmark]

First steps 👣

First of all, you'll need to get the project on your computer. To do so, you can either download the project directly from GitHub, or clone it using the following command:

git clone https://github.com/White-On/BenchSite

Once you have the project on your computer, you can start creating your own benchmark. To do so, you'll need to create a new directory with a specific structure. The directory should contain 3 subdirectories: targets, themes, and site. The targets directory contains the configuration files for the libraries you want to test. The themes directory contains the configuration files for the tests you want to run. The site directory contains the configuration files for the website.

For more information on the structure of the directory, see the diagram above.
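The directory skeleton described above can be created with a few lines of Python. This is only an illustrative sketch: the root name `my_benchmark` is a placeholder, and the configuration files that go inside each subdirectory are documented in the project itself.

```python
from pathlib import Path

# Create the three subdirectories GenBenchSite expects:
# targets (libraries to test), themes (tests to run), site (website config).
root = Path("my_benchmark")  # placeholder name for your benchmark root
for sub in ("targets", "themes", "site"):
    (root / sub).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in root.iterdir()))  # prints ['site', 'targets', 'themes']
```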

Setup and Launch 🚀

Once you've installed the project and created your benchmark, you'll need to install all the required libraries. We recommend using a virtual environment.

To ease installation, a Makefile is provided. To see the available commands, run:

make help

To install all the required libraries, run:

make install

You're now ready to launch your benchmark. Depending on whether your benchmark is located locally or online in a GitHub repository, and on whether you want to publish the results on a GitHub page, you'll need to run a different command. To see the available commands, run:

python main.py --help

How we compare the targets 🤔

For the time being, we decided to compare results based on the Lexicographic Maximal Ordering algorithm (LexMax). Each ranking is based on the number of wins, ties, and losses of each target. The target with the highest number of wins is ranked first, followed by the target with the second-highest number of wins, and so on. In the case of a tie, both targets are ranked equally. The algorithm takes into account only the number of wins and losses, not their magnitude.
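The win-counting step described above can be sketched in a few lines of Python. This is a simplified illustration, not the project's actual implementation: `rank_by_wins`, the library names, and the timing values are hypothetical, and a lower measurement is assumed to win a comparison.

```python
def rank_by_wins(results):
    """Rank libraries by pairwise win count (lower value wins, ties count for neither)."""
    libs = list(results)
    wins = {lib: 0 for lib in libs}
    for i, a in enumerate(libs):
        for b in libs[i + 1:]:
            # compare the two libraries argument by argument
            for ra, rb in zip(results[a], results[b]):
                if ra < rb:
                    wins[a] += 1
                elif rb < ra:
                    wins[b] += 1
                # a tie adds no wins to either side
    # sort by win count, highest first; equal win counts share a rank
    ordered = sorted(libs, key=lambda lib: -wins[lib])
    ranks, rank = {}, 0
    for idx, lib in enumerate(ordered):
        if idx > 0 and wins[lib] < wins[ordered[idx - 1]]:
            rank = idx
        ranks[lib] = rank
    return ranks

scores = {
    "lib_a": [1.0, 2.0, 3.0],  # fastest on every argument
    "lib_b": [1.5, 2.5, 3.5],
    "lib_c": [1.5, 2.5, 3.5],  # ties with lib_b everywhere
}
print(rank_by_wins(scores))  # prints {'lib_a': 0, 'lib_b': 1, 'lib_c': 1}
```

Note how `lib_b` and `lib_c` end up with the same rank: they tie on every argument, so neither collects a win over the other.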

We use it to compare all the data generated by the benchmarking process. For example, we run a task on a set of libraries and collect the results. Each result is compared to the other results obtained with the same argument, giving a score per argument. Over the entire task, this yields a vector of scores for each library. We apply the LexMax algorithm to these score vectors to obtain a ranking of the libraries for that task, and we repeat the process for each task, for each theme, and for the global ranking.
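The comparison of score vectors described above can be sketched as a standard leximax ordering: sort each library's score vector in decreasing order, then compare the sorted vectors lexicographically. This is an assumption about how LexMax is applied here, and `lexmax_order` and the sample data are hypothetical.

```python
def lexmax_order(score_vectors):
    """Return library names ordered best to worst under leximax:
    each vector is sorted in decreasing order, then vectors are
    compared lexicographically (higher is better)."""
    return sorted(
        score_vectors,
        key=lambda lib: sorted(score_vectors[lib], reverse=True),
        reverse=True,
    )

task_scores = {
    "lib_a": [3, 1, 2],  # sorted descending: [3, 2, 1]
    "lib_b": [2, 2, 2],  # sorted descending: [2, 2, 2]
}
print(lexmax_order(task_scores))  # prints ['lib_a', 'lib_b']
```

Here `lib_a` ranks first because its best score (3) beats `lib_b`'s best score (2), which decides the lexicographic comparison before the remaining entries are consulted.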

How to contribute ✍️

The benchmark website is an open-source project, and contributions from the community are welcome. To contribute, users can fork the project on GitHub, make changes to the code, and submit a pull request. Users can also contribute by reporting bugs, suggesting improvements, or sharing their benchmarking results.
