A collection of common recommendation system metrics
Recommender metrics
This is a collection of commonly used recommendation system (RS) metrics. As fairness in RS becomes increasingly important, the library also provides functions that ease computing performance differences between user groups, e.g., by gender.
The following metrics are supported, all with a cut-off threshold k:
- DCG
- nDCG
- Precision
- Recall
- F-score
- Average Precision (AP)*
- Reciprocal Rank (RR)*
- Hitrate
- Coverage
This library focuses on efficient metric implementations for PyTorch tensors, NumPy arrays and sparse arrays.
Notes:
* Averaging average precision and reciprocal rank of multiple samples
leads to mean average precision (MAP) and mean reciprocal rank (MRR), respectively,
which are often used in research.
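To make the note above concrete, AP@k and RR@k for a single user can be sketched in plain NumPy (an illustration of the definitions, not rmet's implementation); averaging these values over users yields MAP and MRR:

```python
import numpy as np

# Illustrative AP@k and RR@k for one user; scores are predicted item
# relevances, targets are binary known interactions.
def ap_at_k(scores, targets, k):
    order = np.argsort(-scores)[:k]                  # top-k items by score
    rel = targets[order].astype(float)               # 1 where a relevant item was ranked
    prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)  # precision at each rank
    denom = min(int(targets.sum()), k)
    return float((prec * rel).sum() / denom) if denom else 0.0

def rr_at_k(scores, targets, k):
    order = np.argsort(-scores)[:k]
    hit_positions = np.nonzero(targets[order])[0]    # ranks (0-based) of relevant items
    return 1.0 / (hit_positions[0] + 1) if hit_positions.size else 0.0

scores = np.array([0.2, 0.9, 0.5])
targets = np.array([1, 0, 1])
print(ap_at_k(scores, targets, k=3))   # 7/12: relevant items at ranks 2 and 3
print(rr_at_k(scores, targets, k=3))   # 0.5: first relevant item at rank 2
```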
Installation
Install via pip:
python -m pip install rmet
Or from source:
python -m pip install .
Usage: computing metrics
To compute the metrics, simply call them with your model's output, the
true (known) interactions and some cut-off value k:
from rmet import ndcg
ndcg(model_output, targets, k=10)
Note: Coverage does not require the targets argument.
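For intuition about the returned value, nDCG@k for a single user can be sketched in plain NumPy (an illustrative re-implementation, not the library's code):

```python
import numpy as np

# Illustrative nDCG@k for one user: scores are the model's predicted item
# relevances, targets are binary known interactions.
def ndcg_at_k(scores, targets, k):
    order = np.argsort(-scores)[:k]                 # top-k items by predicted score
    gains = targets[order].astype(float)            # 1 where a known interaction was hit
    discounts = 1.0 / np.log2(np.arange(2, k + 2))  # positional discounts 1/log2(rank+1)
    dcg = float((gains * discounts).sum())
    n_rel = min(int(targets.sum()), k)              # ideal ranking: all relevant items first
    idcg = float(discounts[:n_rel].sum())
    return dcg / idcg if idcg > 0 else 0.0

scores = np.array([0.9, 0.1, 0.8, 0.3, 0.7])   # model output for 5 items
targets = np.array([1, 0, 0, 1, 1])            # known interactions
print(round(ndcg_at_k(scores, targets, k=3), 3))   # 0.704
```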
To compute multiple metrics with a single call, check out the calculate function,
which accepts a list of metrics to compute:
from rmet import calculate
calculate(
    metrics=["ndcg", "recall"],
    logits=model_output,
    targets=targets,
    k=10,
    return_individual=False,
    flatten_results=True,
)
Sample output:
{
    'ndcg@10': 0.479,
    'recall@10': 0.350
}
If return_individual is set, the metrics are also returned at the sample level (e.g., per user) where possible.
Further, calculate allows efficiently computing metrics for multiple cut-off thresholds by passing a list of values for k.
Please check out the function's docstring for the full feature description.
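The exact result layout is best taken from that docstring; as a hedged illustration of the naming convention only, flattened results appear to combine metric name and cut-off into "metric@k" keys, which with multiple k values could look like this (the nested layout here is hypothetical):

```python
# Hypothetical nested results for two metrics and two cut-offs; flattening
# joins metric name and cut-off into "metric@k" keys as seen in the sample output.
nested = {"ndcg": {5: 0.41, 10: 0.479}, "recall": {5: 0.21, 10: 0.35}}
flat = {f"{name}@{k}": value
        for name, per_k in nested.items()
        for k, value in per_k.items()}
print(flat)   # {'ndcg@5': 0.41, 'ndcg@10': 0.479, 'recall@5': 0.21, 'recall@10': 0.35}
```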
Usage: metric differences for user features
One can also instantiate the UserFeature class for a demographic user feature, so that the performance differences of an RS between user groups can be evaluated, e.g., for male and female users in the context of gender.
To do so, first specify which feature value belongs to which user via the UserFeature class, then call calculate_for_feature similarly to calculate above.
from rmet import UserFeature, calculate_for_feature
ug_gender = UserFeature("gender", ["m", "m", "f", "d", "m"])
calculate_for_feature(
    ug_gender,
    metrics=["ndcg", "recall"],
    logits=model_output,
    targets=targets,
    k=10,
    return_individual=False,
    flatten_results=True,
)
Sample output:
{
    'gender_f': {'ndcg@10': 0.195, 'recall@10': 0.125},
    'gender_m': {'ndcg@10': 0.779, 'recall@10': 0.733},
    'gender_d': {'ndcg@10': 0.390, 'recall@10': 0.458},
    'gender_f-m': {'ndcg@10': -0.584, 'recall@10': -0.608},
    'gender_f-d': {'ndcg@10': -0.195, 'recall@10': -0.333},
    'gender_m-d': {'ndcg@10': 0.388, 'recall@10': 0.275}
}
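The difference entries follow a "first group minus second group" convention; a small sketch reproducing them from the per-group values above (up to rounding of the intermediate values):

```python
from itertools import combinations

# Sketch of the pairwise-difference keys: "gender_f-m" holds the metrics of
# group f minus those of group m, computed here from rounded per-group values.
per_group = {
    "f": {"ndcg@10": 0.195, "recall@10": 0.125},
    "m": {"ndcg@10": 0.779, "recall@10": 0.733},
    "d": {"ndcg@10": 0.390, "recall@10": 0.458},
}
diffs = {
    f"gender_{a}-{b}": {m: round(per_group[a][m] - per_group[b][m], 3)
                        for m in per_group[a]}
    for a, b in combinations(per_group, 2)
}
print(diffs["gender_f-m"])   # {'ndcg@10': -0.584, 'recall@10': -0.608}
```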
License
MIT License - see the LICENSE file for more details.
File details
Details for the file rmet-0.1.5.tar.gz.
File metadata
- Download URL: rmet-0.1.5.tar.gz
- Size: 13.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b19dabba13e2fd8edadde8fa01624bddbfe08869d2f24b79906aeb9599347624 |
| MD5 | d8619088758d65be56d46ac97a0c8810 |
| BLAKE2b-256 | 1770d7b9d979d98cc4c66ea628de1bd8838df2e5d0680792ba02b3c651c69434 |
Provenance
The following attestation bundles were made for rmet-0.1.5.tar.gz:
Publisher: publish-to-pypi.yaml on Tigxy/recommender-metrics
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: rmet-0.1.5.tar.gz
- Subject digest: b19dabba13e2fd8edadde8fa01624bddbfe08869d2f24b79906aeb9599347624
- Sigstore transparency entry: 253318890
- Permalink: Tigxy/recommender-metrics@a794a4d6541083ef73a43aa67c6a4f760efd7392
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Tigxy
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-to-pypi.yaml@a794a4d6541083ef73a43aa67c6a4f760efd7392
- Trigger Event: push
File details
Details for the file rmet-0.1.5-py3-none-any.whl.
File metadata
- Download URL: rmet-0.1.5-py3-none-any.whl
- Size: 11.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2db18a33ae0f61d16fd358c6b8e6adf267fb29f504a8cc70584b7813beb0a241 |
| MD5 | 3930978bd70c42d1ce2c7b3f11f00cbd |
| BLAKE2b-256 | 4267a18fdee53dd5fa8da861ccc3d9a7bdecd380f95750cb55965e81f5fa0c61 |
Provenance
The following attestation bundles were made for rmet-0.1.5-py3-none-any.whl:
Publisher: publish-to-pypi.yaml on Tigxy/recommender-metrics
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: rmet-0.1.5-py3-none-any.whl
- Subject digest: 2db18a33ae0f61d16fd358c6b8e6adf267fb29f504a8cc70584b7813beb0a241
- Sigstore transparency entry: 253318891
- Permalink: Tigxy/recommender-metrics@a794a4d6541083ef73a43aa67c6a4f760efd7392
- Branch / Tag: refs/heads/main
- Owner: https://github.com/Tigxy
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-to-pypi.yaml@a794a4d6541083ef73a43aa67c6a4f760efd7392
- Trigger Event: push