
clustering-mi

Mutual information between clusterings

Maximilian Jerdee, Alec Kirkley, and Mark Newman

A Python package for computing the mutual information between two clusterings of the same set of objects. This implementation includes multiple variations and normalizations of the mutual information.

The package implements the reduced mutual information (RMI) as described in Jerdee, Kirkley, and Newman (2024), which corrects the standard measure's bias towards labelings with too many groups. The asymmetric normalization of Jerdee, Kirkley, and Newman (2023) is also included to remove the biases of symmetric normalizations. Data used to generate the figures in those papers is available in the examples/data directory of the repository.
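As background, the traditional (unreduced) mutual information that these corrections build on can be computed directly from a contingency table. The sketch below is purely illustrative and uses none of the package's code; the function name is our own:

```python
import math

def traditional_mi_bits(table):
    """Traditional (unreduced) mutual information of a contingency table, in bits."""
    n = sum(sum(row) for row in table)            # total number of objects
    row_sums = [sum(row) for row in table]        # group sizes of one labeling
    col_sums = [sum(col) for col in zip(*table)]  # group sizes of the other
    mi = 0.0
    for i, row in enumerate(table):
        for j, n_ij in enumerate(row):
            if n_ij > 0:
                # p_ij * log2(p_ij / (p_i * p_j)), written in counts
                mi += (n_ij / n) * math.log2(n_ij * n / (row_sums[i] * col_sums[j]))
    return mi

# The contingency table from the usage example below:
print(traditional_mi_bits([[3, 1, 0], [0, 2, 2]]))  # ≈ 0.656 bits
```

This is the quantity whose bias toward many-group labelings the reduced mutual information corrects.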

Installation

clustering-mi can be installed through pip:

pip install clustering-mi

or installed from source by cloning this repository and running

pip install .

in the base directory.

Typical usage

Once installed, the package can be imported as

import clustering_mi as cmi

Note that the import name uses an underscore: import clustering_mi, not import clustering-mi (hyphens are not valid in Python module names).

Two clusterings (or "labelings") can be loaded in several ways; the names of the groups are irrelevant:

# As arrays:
labels1 = ["red", "red", "red", "blue", "blue", "blue", "green", "green"]
labels2 = [1, 1, 1, 1, 2, 2, 2, 2]

# As a contingency table, i.e., a matrix that counts label co-occurrences.
# Columns are the first labeling, rows are the second labeling:
contingency_table = [[3, 1, 0], [0, 2, 2]]

# Or as a space-separated file:
"""
red 1
red 1
red 1
blue 1
blue 2
blue 2
green 2
green 2
"""
filename = "data/example.txt"
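For reference, a contingency table like the one above can be built from two label lists in a few lines of plain Python. This helper is illustrative and not part of the package; group order follows first appearance:

```python
def contingency_table(labels1, labels2):
    """Count label co-occurrences: columns index labels1, rows index labels2."""
    cols = list(dict.fromkeys(labels1))  # groups of the first labeling, in order of appearance
    rows = list(dict.fromkeys(labels2))  # groups of the second labeling
    table = [[0] * len(cols) for _ in rows]
    for a, b in zip(labels1, labels2):
        table[rows.index(b)][cols.index(a)] += 1
    return table

labels1 = ["red", "red", "red", "blue", "blue", "blue", "green", "green"]
labels2 = [1, 1, 1, 1, 2, 2, 2, 2]
print(contingency_table(labels1, labels2))  # [[3, 1, 0], [0, 2, 2]]
```

Applied to the example labelings, this reproduces the table shown above, which is why the two input formats give the same result.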

The package can then compute the mutual information (in bits) between the two labelings from any of these formats:

# Defaults to the reduced mutual information (RMI)
mutual_information = cmi.mutual_information(labels1, labels2)  # From lists
mutual_information = cmi.mutual_information(contingency_table)  # From contingency table
mutual_information = cmi.mutual_information(filename)  # Reads filename

print(f"Mutual Information: {mutual_information:.3f} (bits)")

# Compute other variants using the "variation" parameter.
# Correcting for chance (random permutations)
adjusted_mutual_information = cmi.mutual_information(labels1, labels2, variation="adjusted")  
# Traditional mutual information
traditional_mutual_information = cmi.mutual_information(labels1, labels2, variation="traditional")

The package can also compute the normalized mutual information (NMI) between the two labelings, a measure bounded above by 1, which it attains when the two labelings are identical. Depending on the application, a symmetric or an asymmetric normalization may be appropriate.

# Symmetric normalization
normalized_mutual_information = cmi.normalized_mutual_information(labels1, labels2, normalization="mean")
# "Normalized Mutual Information" most commonly refers to the Stirling-approximated mutual information
# divided by the mean of the entropies of the two labelings, although this is not our preferred measure.
normalized_stirling_mutual_information = cmi.normalized_mutual_information(labels1, labels2, variation="stirling", normalization="mean")

print(f"(symmetric) Normalized Mutual Information (labels1 <-> labels2): {normalized_mutual_information:.3f}")

# Asymmetric normalization measures how much the first labeling tells us about the second,
# as a fraction of all there is to know about the second labeling.
# This form is appropriate when the second labeling is a "ground truth" and the first is a prediction.
asymmetric_normalized_mutual_information_1_2 = cmi.normalized_mutual_information(labels1, labels2, normalization="second")
# Or when the first labeling is the ground truth and the second is a prediction.
asymmetric_normalized_mutual_information_2_1 = cmi.normalized_mutual_information(labels1, labels2, normalization="first")

print(f"(asymmetric) Normalized Mutual Information (labels1 -> labels2): {asymmetric_normalized_mutual_information_1_2:.3f}")
print(f"(asymmetric) Normalized Mutual Information (labels2 -> labels1): {asymmetric_normalized_mutual_information_2_1:.3f}")
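To make the normalizations concrete, here is how they combine a mutual information value with the labeling entropies, written out with the traditional quantities. This is an illustrative sketch only (the package applies the same normalizations to its reduced measure by default), and the MI value is hard-coded from the example pair:

```python
import math
from collections import Counter

def entropy_bits(labels):
    """Shannon entropy of a labeling, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

labels1 = ["red", "red", "red", "blue", "blue", "blue", "green", "green"]
labels2 = [1, 1, 1, 1, 2, 2, 2, 2]
mi = 0.6556  # traditional MI of this example pair, in bits (precomputed)

h1, h2 = entropy_bits(labels1), entropy_bits(labels2)
symmetric = mi / ((h1 + h2) / 2)  # "mean" normalization
asym_1_2 = mi / h2                # fraction of labels2's entropy explained ("second")
asym_2_1 = mi / h1                # fraction of labels1's entropy explained ("first")

print(f"mean: {symmetric:.3f}, second: {asym_1_2:.3f}, first: {asym_2_1:.3f}")
```

The asymmetry is visible here: labels1 (three groups, higher entropy) tells us relatively more about labels2 than the reverse.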

Further usage examples can be found in the examples directory of the repository and the package documentation.
