
Estimators of mutual information and distributions used to benchmark them.


Benchmarking Mutual Information

BMI is a Python package for estimating mutual information between continuous random variables and for testing new estimators.

Getting started

While we recommend taking a look at the documentation to learn about the full package capabilities, below we present its main features. (Note that BMI can also be used to test non-Python mutual information estimators.)

You can install the package using:

$ pip install benchmark-mi

Alternatively, you can use the development version from source using:

$ pip install "bmi @ git+https://github.com/cbg-ethz/bmi"

Note: BMI uses JAX and installs its CPU version by default. If you have a CUDA-capable device, you can install the CUDA version of JAX instead.
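For example, the JAX installation guide currently suggests a command along these lines for CUDA 12 (the exact extra name depends on your CUDA version and may change, so check the JAX documentation before running it):

```shell
$ pip install --upgrade "jax[cuda12]"
```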

Now let's take one of the predefined distributions included in the benchmark (named "tasks") and sample 1,000 data points. Then, we will run two estimators on this task.

import bmi

task = bmi.benchmark.BENCHMARK_TASKS['1v1-normal-0.75']
print(f"Task {task.name} with dimensions {task.dim_x} and {task.dim_y}")
print(f"Ground truth mutual information: {task.mutual_information:.2f}")

X, Y = task.sample(1000, seed=42)

cca = bmi.estimators.CCAMutualInformationEstimator()
print(f"Estimate by CCA: {cca.estimate(X, Y):.2f}")

ksg = bmi.estimators.KSGEnsembleFirstEstimator(neighborhoods=(5,))
print(f"Estimate by KSG: {ksg.estimate(X, Y):.2f}")
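As a sanity check, the ground truth printed above can be compared with the closed-form mutual information of a bivariate normal distribution, I(X; Y) = -½ log(1 − ρ²), assuming the 0.75 in the task name denotes the correlation coefficient:

```python
import math

# Closed-form MI of a bivariate normal with correlation rho.
rho = 0.75
mi = -0.5 * math.log(1 - rho**2)
print(f"{mi:.2f}")  # prints 0.41
```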

Evaluating a new estimator

The above code snippet is convenient for estimating mutual information on a given data set or for developing a new estimator. For extensive benchmarking, however, it may be easier to use one of the benchmark suites available in the workflows/benchmark/ subdirectory.

For example, you can install Snakemake and run a small benchmark suite on several estimators using:

$ snakemake -c4 -s workflows/benchmark/demo/run.smk

In about a minute it should generate minibenchmark results in the generated/benchmark/demo directory. Note that the configuration file, workflows/benchmark/demo/config.py, explicitly defines the estimators and tasks used, as well as the number of samples.

Hence, it is easy to benchmark a custom estimator by importing it and including it in the configuration dictionary. More information is available in the documentation, where we cover evaluating both Python and non-Python estimators.
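As an illustrative sketch of what such a custom estimator might look like (the exact protocol BMI expects is described in its documentation; here we only mirror the estimate(X, Y) interface used in the snippet above), consider a simple one-dimensional Gaussian estimator:

```python
import math
import random


class GaussianMIEstimator:
    """Toy estimator assuming (X, Y) are one-dimensional and jointly Gaussian,
    so that I(X; Y) = -0.5 * log(1 - rho^2) with rho the Pearson correlation."""

    def estimate(self, x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        rho = sxy / math.sqrt(sxx * syy)
        return -0.5 * math.log(1.0 - rho**2)


# Usage on synthetic Gaussian data with correlation 0.75:
rng = random.Random(42)
rho = 0.75
xs, ys = [], []
for _ in range(5000):
    a, b = rng.gauss(0, 1), rng.gauss(0, 1)
    xs.append(a)
    ys.append(rho * a + math.sqrt(1 - rho**2) * b)

print(f"{GaussianMIEstimator().estimate(xs, ys):.2f}")  # roughly 0.41
```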

Similarly, it is easy to change the number of samples or adjust the tasks included in the benchmark. We have defined several benchmark suites that share this structure.

List of implemented estimators

(Your estimator can be here too! Please reach out to us if you would like to contribute.)

Citing

✨ New! ✨ On the properties and estimation of pointwise mutual information profiles


In this manuscript we discuss the pointwise mutual information profile, an invariant which can be used to diagnose limitations of the previous mutual information benchmark, and Bend and Mix Models, a flexible family of distributions. These distributions can be used to create more expressive benchmark tasks and to provide model-based Bayesian estimates of mutual information.
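For context, the pointwise mutual information of an outcome pair, and its relation to mutual information, can be written as (a standard definition; the manuscript gives the precise construction of the profile):

```latex
\operatorname{PMI}(x, y) = \log \frac{p_{XY}(x, y)}{p_X(x)\, p_Y(y)},
\qquad
I(X; Y) = \mathbb{E}_{(x, y) \sim p_{XY}}\big[\operatorname{PMI}(x, y)\big].
```

The profile is then the distribution of PMI(X, Y) when (X, Y) is drawn from the joint distribution.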


@article{pmi-profiles-2023,
  title = {On the properties and estimation of pointwise mutual information profiles},
  author = {Czy\.{z}, Pawe{\l} and Grabowski, Frederic and Vogt, Julia and Beerenwinkel, Niko and Marx, Alexander},
  journal = {arXiv preprint arXiv:2310.10240},
  year = {2023}
}

Beyond normal: On the evaluation of mutual information estimators


In this manuscript we discuss a benchmark for mutual information estimators.


@inproceedings{beyond-normal-2023,
 title = {Beyond Normal: On the Evaluation of Mutual Information Estimators},
 author = {Czy\.{z}, Pawe{\l} and Grabowski, Frederic and Vogt, Julia and Beerenwinkel, Niko and Marx, Alexander},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {A. Oh and T. Neumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
 pages = {16957--16990},
 publisher = {Curran Associates, Inc.},
 url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/36b80eae70ff629d667f210e13497edf-Paper-Conference.pdf},
 volume = {36},
 year = {2023}
}
