A collection of fast ranking evaluation metrics built with Numba
Project description
rank_eval
⚡️ Introduction
rank_eval is a collection of fast ranking evaluation metrics implemented in Python, leveraging Numba for high-speed vector operations and automatic parallelization.
✨ Available Metrics
- Hits
- Precision
- Recall
- rPrecision
- Mean Reciprocal Rank (MRR)
- Mean Average Precision (MAP)
- Normalized Discounted Cumulative Gain (NDCG)
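As a quick reference for what the simpler metrics above compute, here is a minimal plain-Python sketch for a single query. The helper names and signatures are ours for illustration, not rank_eval's API:

```python
# Hypothetical reference definitions for a single query
# (illustrative only, not rank_eval's implementation).
def precision_at_k(relevant, ranking, k):
    """Fraction of the top-k ranked documents that are relevant."""
    top_k = ranking[:k]
    return len(set(top_k) & set(relevant)) / k

def recall_at_k(relevant, ranking, k):
    """Fraction of the relevant documents retrieved in the top k."""
    top_k = ranking[:k]
    return len(set(top_k) & set(relevant)) / len(relevant)

def reciprocal_rank(relevant, ranking):
    """1 / rank of the first relevant document, 0 if none is found."""
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0

relevant = {12, 25}                  # judged relevant document IDs
ranking = [12, 234, 25, 36, 32, 35]  # system ranking, best first

print(precision_at_k(relevant, ranking, 5))  # 0.4
print(recall_at_k(relevant, ranking, 5))     # 1.0
print(reciprocal_rank(relevant, ranking))    # 1.0
```

MRR and MAP are simply the means of the per-query reciprocal rank and average precision, respectively.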
The metrics have been tested for correctness against TREC Eval, via a comparison with pytrec_eval.
The implemented metrics are up to 50 times faster than pytrec_eval and have a much lower memory footprint.
Please note that TREC Eval uses a non-standard NDCG implementation. To mimic its behaviour, pass `trec_eval=True` to rank_eval's `ndcg` function.
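For background, NDCG implementations differ in details such as the gain function: linear gains (Järvelin & Kekäläinen) versus exponential gains (2^rel − 1, common in learning-to-rank work). The sketch below is a generic NumPy illustration of both variants for a single query; it is not rank_eval's implementation, and the function names are hypothetical:

```python
import numpy as np

# Generic NDCG@k sketch (hypothetical helpers, not rank_eval's API).
def dcg(relevances, k, exponential=False):
    """Discounted cumulative gain over the top-k relevance scores."""
    rel = np.asarray(relevances, dtype=float)[:k]
    gains = (2.0 ** rel - 1.0) if exponential else rel
    discounts = np.log2(np.arange(2, rel.size + 2))  # log2(rank + 1)
    return float(np.sum(gains / discounts))

def ndcg_at_k(relevances, k, exponential=False):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = sorted(relevances, reverse=True)
    idcg = dcg(ideal, k, exponential)
    return dcg(relevances, k, exponential) / idcg if idcg > 0 else 0.0

# Relevance of each ranked document for one query (0 = not relevant):
rels = [0.5, 0.0, 0.3, 0.0, 0.0, 0.0]
print(ndcg_at_k(rels, k=5))                    # linear gains
print(ndcg_at_k(rels, k=5, exponential=True))  # exponential gains
```

The two variants generally produce different scores for the same ranking, which is why flags like `trec_eval=True` exist to pin down a specific formulation.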
🔧 Requirements
- Python 3
- Numpy
- Numba
🔌 Installation
```
pip install rank_eval
```
💡 Usage
```python
from rank_eval import ndcg
import numpy as np

# Note that y_true does not need to be ordered.
# Integers are document IDs, while floats are the true relevance scores.
y_true = np.array([[[12, 0.5], [25, 0.3]], [[11, 0.4], [2, 0.6]]])
y_pred = np.array(
    [
        [[12, 0.9], [234, 0.8], [25, 0.7], [36, 0.6], [32, 0.5], [35, 0.4]],
        [[12, 0.9], [11, 0.8], [25, 0.7], [36, 0.6], [2, 0.5], [35, 0.4]],
    ]
)
k = 5

ndcg(y_true, y_pred, k)
>>> 0.7525653965843032
```
rank_eval supports `y_true` elements of different lengths via Numba typed lists. Simply convert your `y_true` list of arrays using the provided utility function:
```python
from rank_eval import ndcg
from rank_eval.utils import to_typed_list
import numpy as np

y_true = [np.array([[12, 0.5], [25, 0.3]]), np.array([[11, 0.4], [2, 0.6], [12, 0.1]])]
y_true = to_typed_list(y_true)
y_pred = np.array(
    [
        [[12, 0.9], [234, 0.8], [25, 0.7], [36, 0.6], [32, 0.5], [35, 0.4]],
        [[12, 0.9], [11, 0.8], [25, 0.7], [36, 0.6], [2, 0.5], [35, 0.4]],
    ]
)
k = 5

ndcg(y_true, y_pred, k)
>>> 0.786890544287473
```
📚 Documentation
See the documentation for more details and examples.
🎓 Citation
If you end up using rank_eval to evaluate results for your scientific publication, please consider citing it:
```bibtex
@misc{rankEval2021,
  title = {Rank\_eval: Blazing Fast Ranking Evaluation Metrics in Python},
  author = {Bassani, Elias},
  year = {2021},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/AmenRa/rank_eval}},
}
```
🎁 Feature Requests
If you want a metric to be added, please open a new issue.
🤘 Want to contribute?
If you want to contribute, please drop me an e-mail.
📄 License
rank_eval is open-source software licensed under the MIT license.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distributions
File details
Details for the file rank_eval-0.1.tar.gz.
File metadata
- Download URL: rank_eval-0.1.tar.gz
- Upload date:
- Size: 8.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/3.7.2 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.8.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f2eb8068c11f658d55504898d887e514c74f818f56ffc6322ca78a15501a9949 |
| MD5 | 29f45e1c4662e1937f0125912043418c |
| BLAKE2b-256 | f688dd48572faef5725ecf0c4e7eb1cdf40bb95dcd4123dbbe42b7be63443bd8 |
File details
Details for the file rank_eval-0.1.0-py3-none-any.whl.
File metadata
- Download URL: rank_eval-0.1.0-py3-none-any.whl
- Upload date:
- Size: 17.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/4.8.1 pkginfo/1.7.0 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.8.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 21834797655be244cdc69897b871314aaf22ca3f39fd01f31a861b749233d525 |
| MD5 | 638413c0f45c6453853af2c624f0eb4f |
| BLAKE2b-256 | ee83964d181028118d82be987744f40475398de764b4b76b920061bd4f49905a |
File details
Details for the file rank_eval-0.1-py3-none-any.whl.
File metadata
- Download URL: rank_eval-0.1-py3-none-any.whl
- Upload date:
- Size: 7.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.1 importlib_metadata/3.7.2 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.8.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c282f6f76e074f7d0ee38b68e5fc1d1db4cbc733bfd2a79d801e487e0ee8aad8 |
| MD5 | 2b3f58dcc1419b9aa04b09fd2a1ade17 |
| BLAKE2b-256 | bfc97e00c79da572cc0493eee92f93f8fa8e856b9f43a1fd52ee57dc71afb513 |