Tools for measuring sensitivity and diversity of multi-task benchmarks.

Project description

BenchBench is a Python package that provides a suite of tools for evaluating multi-task benchmarks, with a focus on task diversity and sensitivity to irrelevant changes.

Our research shows that every multi-task benchmark faces a trade-off between task diversity and sensitivity: the more diverse a benchmark, the more sensitive its ranking is to irrelevant changes, such as introducing weak models or changing the metric in ways that should not matter.
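
To see how adding a clearly weaker model can flip a ranking even though it should not matter, here is a toy illustration in plain pandas. All scores are invented, and mean-rank aggregation is chosen only for simplicity; this is not BenchBench's actual measure:

import pandas as pd

# Two models, five tasks; all scores are invented for illustration.
scores = pd.DataFrame(
    {"t1": [0.6, 0.9], "t2": [0.6, 0.9], "t3": [0.6, 0.9],
     "t4": [0.9, 0.6], "t5": [0.9, 0.6]},
    index=["A", "B"],
)
# Aggregate by mean rank across tasks (lower is better): B wins, 1.4 vs 1.6.
print(scores.rank(ascending=False).mean(axis=1))

# Add model C, which is worse than A and B overall but slots between
# them on t4 and t5. The A-vs-B ranking flips: A now wins, 1.6 vs 1.8.
scores.loc["C"] = [0.1, 0.1, 0.1, 0.7, 0.7]
print(scores.rank(ascending=False).mean(axis=1))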

Based on BenchBench, we're maintaining a living benchmark of multi-task benchmarks. Visit the project page to see the results or contribute your own benchmark.

Please see our paper for all relevant background and scientific results. Cite as:

@inproceedings{zhang2024inherent,
  title={Inherent Trade-Offs between Diversity and Stability in Multi-Task Benchmarks},
  author={Guanhua Zhang and Moritz Hardt},
  booktitle={International Conference on Machine Learning},
  year={2024}
}

Quick Start

To install the package, run:

pip install benchbench

Example Usage

To evaluate a cardinal benchmark (one whose overall score aggregates raw numeric task scores, as GLUE does), you can use the following code:

from benchbench.data import load_cardinal_benchmark
from benchbench.measures.cardinal import get_diversity, get_sensitivity

# Load the benchmark as a score DataFrame plus the list of task columns.
data, cols = load_cardinal_benchmark('GLUE')
diversity = get_diversity(data, cols)
sensitivity = get_sensitivity(data, cols)

To evaluate an ordinal benchmark (one whose overall ranking aggregates per-task rankings rather than raw scores), you can use the following code:

from benchbench.data import load_ordinal_benchmark
from benchbench.measures.ordinal import get_diversity, get_sensitivity

# Same interface as the cardinal case, using the ordinal variants
# of the two measures.
data, cols = load_ordinal_benchmark('HELM-accuracy')
diversity = get_diversity(data, cols)
sensitivity = get_sensitivity(data, cols)

To use your own benchmark, provide a pandas DataFrame and a list of the columns that hold the task scores, as sketched below. Check the documentation for more details.
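
For example, a minimal sketch: the task columns and scores below are invented, and the exact expected layout (e.g., whether models live in the index or in a column) is an assumption to verify against the documentation.

import pandas as pd
from benchbench.measures.cardinal import get_diversity, get_sensitivity

# One row per model, one column per task; all numbers are made up,
# and the model-names-as-index layout is an assumption.
data = pd.DataFrame(
    {"task1": [0.81, 0.74, 0.66], "task2": [0.58, 0.69, 0.45]},
    index=["model-a", "model-b", "model-c"],
)
cols = ["task1", "task2"]  # the columns that hold task scores

diversity = get_diversity(data, cols)
sensitivity = get_sensitivity(data, cols)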

Reproduce the results from our paper

You can reproduce the figures from our paper using the following Colabs:

Download files

Download the file for your platform.

Source Distribution

benchbench-1.0.1.tar.gz (219.7 kB)

Built Distribution


benchbench-1.0.1-py3-none-any.whl (258.0 kB)

File details

Details for the file benchbench-1.0.1.tar.gz.

File metadata

  • Download URL: benchbench-1.0.1.tar.gz
  • Size: 219.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.6

File hashes

Hashes for benchbench-1.0.1.tar.gz

  • SHA256: d99300bdad2fde1266f7f23e3fc83be9bde1b6d732592e7b7f68b585afd5c58f
  • MD5: f0d957bb7b0241c8add422b1a187926f
  • BLAKE2b-256: d7cf6010578bf82477171fcd1bcb45937467e73a80c0fc06edc18ffd90cd295f
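
As a concrete way to use these digests, here is a small sketch that checks a downloaded sdist against the published SHA256 using Python's standard hashlib; the local file path is an assumption.

import hashlib

# Compute the SHA256 of the downloaded archive and compare it to the
# digest published above; the path assumes the file is in the current
# directory.
EXPECTED = "d99300bdad2fde1266f7f23e3fc83be9bde1b6d732592e7b7f68b585afd5c58f"
with open("benchbench-1.0.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == EXPECTED, "hash mismatch: the download may be corrupted"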

File details

Details for the file benchbench-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: benchbench-1.0.1-py3-none-any.whl
  • Size: 258.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.6

File hashes

Hashes for benchbench-1.0.1-py3-none-any.whl

  • SHA256: 1d35a18e139b19b4e980b174b4d06f20ed48c396ce6c87ad4e9b0a47e48eceb8
  • MD5: b8d1070d437d8768a38fdb195fc55dae
  • BLAKE2b-256: 310d6bd2697294635d80216543e4caaa931ebb7e74e4bfe7942c30e5c62ac2a0
