Semantic F1 Score for Multi-label Classification
Semantic F1 grants partial credit by matching predictions and gold labels in both directions.
Semantic F1 (semantic_f1_score) is a drop-in replacement for sklearn's conventional f1_score in subjective or fuzzy multi-label classification. It keeps the familiar precision-recall framing while using a domain similarity matrix to acknowledge when "wrong" labels are still semantically close. The package is the reference implementation accompanying the paper: Semantic F1 Scores: Fair Evaluation Under Fuzzy Class Boundaries.
Installation
pip install semantic-f1-score
The library depends only on pandas and scipy. For development extras, install with pip install semantic_f1_score[test] and run pytest.
Highlights
- Two-step semantic precision/recall penalizes both over-prediction and under-coverage, avoiding the forced matches that plague single-pass or Hungarian-style alignment metrics.
- When the similarity matrix is the identity, every variant (pointwise, samples, micro, macro) reduces exactly to the standard F1, so existing evaluation pipelines stay compatible.
- Operates on metric and non-metric label spaces, and even continuous embeddings, making it suitable for emotions, moral foundations, negotiation strategies, and other fuzzy domains.
- Empirically monotonic with error rate and magnitude, robust to partially misspecified similarity matrices, and better aligned with downstream outcomes such as donation success in negotiation datasets (see paper for details).
- Lightweight pandas-based API with helpers for pointwise inspection and scikit-learn compatible averaging schemes.
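The identity-matrix reduction above can be checked by hand: with S = I, semantic matching grants credit only for exact label matches, so the pointwise score is the ordinary set-overlap F1. A minimal sketch of that baseline (pure Python, for illustration only, not a package function):

```python
def exact_f1(pred, true):
    """Set-overlap F1 for one multi-label example: what semantic F1
    reduces to when the similarity matrix is the identity."""
    pred, true = set(pred), set(true)
    if not pred or not true:
        return 0.0
    precision = len(pred & true) / len(pred)
    recall = len(pred & true) / len(true)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# One of two gold labels predicted: P = 1.0, R = 0.5, F1 = 2/3
print(exact_f1(["anger"], ["anger", "disgust"]))
```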
Quick Start
import pandas as pd
from semantic_f1_score import semantic_f1_score, pointwise_semantic_f1_score
labels = ["anger", "disgust", "joy"]
S = pd.DataFrame(
[
[1.0, 0.7, 0.1],
[0.7, 1.0, 0.2],
[0.1, 0.2, 1.0],
],
index=labels,
columns=labels,
)
# Multi-label examples
y_true = [["anger", "disgust"], ["joy"], ["disgust"]]
y_pred = [["anger"], ["joy"], ["anger"]]
# also supports one-hot encoding with the same order as the similarity matrix S
print("Semantic micro F1", semantic_f1_score(y_true, y_pred, S, average="micro"))
print("Semantic macro F1", semantic_f1_score(y_true, y_pred, S, average="macro"))
print("Semantic samples F1", semantic_f1_score(y_true, y_pred, S, average="samples"))
# Inspect a single example
components = pointwise_semantic_f1_score(
y_pred[0],
y_true[0],
S,
return_components=True,
)
print("Pointwise components", components)
By design, using an identity matrix will give you the exact same scores as scikit-learn's F1 implementations. One-hot encoded inputs are detected automatically, and you can supply numeric labels via a mapping callback.
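For one-hot inputs, the column order must match the similarity matrix. One way to build such an encoding from label lists (the helper below is an illustrative sketch, not part of the package API):

```python
import pandas as pd

labels = ["anger", "disgust", "joy"]

def to_one_hot(label_lists, labels):
    """Binary indicator matrix whose columns follow the same order as
    the similarity matrix S (hypothetical helper, not a package function)."""
    return pd.DataFrame(
        [[int(lab in row) for lab in labels] for row in label_lists],
        columns=labels,
    )

y_true = [["anger", "disgust"], ["joy"], ["disgust"]]
print(to_one_hot(y_true, labels))
```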
Metric Variants
- pointwise_semantic_f1_score - semantic precision/recall plus their harmonic mean for a single example, optionally returning the matched pairs.
- semantic_f1_score(..., average="samples") - mean of pointwise scores across a batch (the default behaviour).
- semantic_f1_score(..., average="micro"|"macro"|"weighted") - scikit-learn style aggregations that treat partial credit as soft counts.
- semantic_f1_score(..., average=None) - per-class semantic F1 values, ordered by the similarity matrix labels.
- extended_hungarian_match / hungarian_score - reproduce the Hungarian-style baseline analysed in the paper, for comparison purposes.
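Assuming the scikit-learn convention holds here (macro is the unweighted mean of per-class scores, weighted weights each class by its gold-label support), the aggregations can be sketched with made-up per-class values:

```python
# Hypothetical per-class semantic F1 values and gold-label supports,
# ordered like the similarity matrix labels (illustrative numbers only).
per_class = {"anger": 0.8, "disgust": 0.5, "joy": 1.0}
support = {"anger": 2, "disgust": 1, "joy": 1}

# macro: unweighted mean over classes
macro = sum(per_class.values()) / len(per_class)
# weighted: support-weighted mean over classes
weighted = sum(per_class[c] * support[c] for c in per_class) / sum(support.values())

print(f"macro={macro:.3f} weighted={weighted:.3f}")
```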
Crafting a Similarity Matrix
Semantic F1 only assumes a symmetric square matrix with values in [0, 1]. In practice you can:
- Derive similarities from theory-driven structures (e.g. Plutchik's wheel of emotions, moral foundation clusters).
- Estimate them from data, such as normalized label co-occurrence or correlation matrices.
- Project labels into shared embeddings (e.g. sentence-level or concept-level encoders) and convert distances to similarities.
- Start with the identity matrix when no partial credit is desired; scores remain exact F1 while the API stays consistent.
Section B of the paper discusses best practices, including keeping on-diagonal values at 1, capping cross-cluster credit in non-metric spaces, and stress-testing metrics against perturbed matrices.
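The embedding route above can be sketched as follows: compute cosine similarities between label embeddings, clip to [0, 1], and force the diagonal to 1. The embedding vectors here are made up for illustration; in practice they would come from a sentence- or concept-level encoder.

```python
import numpy as np
import pandas as pd

labels = ["anger", "disgust", "joy"]
# Hypothetical label embeddings; one row per label.
E = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.3, 0.1],
    [0.0, 0.2, 0.9],
])

# Cosine similarity, clipped to [0, 1], with the diagonal pinned to 1
# (a recommended property of the similarity matrix).
unit = E / np.linalg.norm(E, axis=1, keepdims=True)
sim = np.clip(unit @ unit.T, 0.0, 1.0)
np.fill_diagonal(sim, 1.0)

S = pd.DataFrame(sim, index=labels, columns=labels)
print(S.round(2))
```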
Development
# Clone and install in editable mode
pip install -e .[dev,test]
# Run the regression tests
pytest -q
Pull requests and issues are welcome on GitHub.
Citation
If you find this work useful or you use the metric, please cite:
@article{chochlakis2025semanticf1score,
  title={Semantic F1 Scores: Fair Evaluation Under Fuzzy Class Boundaries},
  author={Georgios Chochlakis and Jackson Trager and Vedant Jhaveri and Nikhil Ravichandran and Alexandros Potamianos and Shrikanth Narayanan},
  journal={arXiv preprint arXiv:2509.21633},
  year={2025},
  eprint={2509.21633},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2509.21633}
}
License
Released under the MIT License. See LICENSE for details.