


A fast NumPy/Numba-based implementation of ranking metrics for information retrieval and recommendation, written with efficiency and edge-case handling in mind.

Find the full documentation here.


  • Wide array of evaluation metrics for information retrieval and top-N recommender systems:

    • Binary labels: Recall, Precision, MAP, HitRate, MRR, MeanRanks, F1

    • Numeric and binary labels: DCG, nDCG

  • Minimal dependencies: NumPy and Numba (required), SciPy (optional)

  • Flexible input formats: Supports arrays, lists and sparse matrices

  • Built-in support for confidence intervals via bootstrapping


from rankereval import BinaryLabels, Rankings, Recall

y_true = BinaryLabels.from_positive_indices([[0,2], [0,1,2]])
y_pred = Rankings.from_ranked_indices([[2,1], [1]])

recall_at_3 = Recall(3).mean(y_true, y_pred)
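For intuition, Recall@k on binary labels is the fraction of a query's positives that appear in the top k of its ranking, averaged over queries. A minimal plain-Python sketch of that computation on the same data (for illustration only, not the library's vectorized implementation):

```python
def recall_at_k(positives, ranking, k):
    """Fraction of relevant items that appear in the top-k of the ranking."""
    hits = len(set(positives) & set(ranking[:k]))
    return hits / len(positives)

y_true = [[0, 2], [0, 1, 2]]   # positive item indices per query
y_pred = [[2, 1], [1]]         # ranked item indices per query

scores = [recall_at_k(t, p, 3) for t, p in zip(y_true, y_pred)]
mean_recall = sum(scores) / len(scores)  # (1/2 + 1/3) / 2 = 5/12
```

Here query 1 recovers one of its two positives (0.5) and query 2 one of three (0.33), so the mean recall@3 is about 0.42.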

To get confidence intervals (95% by default), specify conf_interval=True:

recall_at_3 = Recall(3).mean(y_true, y_pred, conf_interval=True)
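Bootstrapped confidence intervals of this kind are typically obtained by resampling the per-query scores with replacement and taking percentiles of the resampled means. A rough NumPy sketch of the idea (the library's exact procedure and defaults may differ):

```python
import numpy as np

def bootstrap_ci(per_query_scores, n_resamples=1000, conf=0.95, seed=0):
    """Percentile-bootstrap CI for the mean of per-query metric scores."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_query_scores, dtype=float)
    # resample queries with replacement and record each resample's mean
    means = np.array([
        rng.choice(scores, size=len(scores), replace=True).mean()
        for _ in range(n_resamples)
    ])
    lo, hi = np.percentile(means, [(1 - conf) / 2 * 100, (1 + conf) / 2 * 100])
    return lo, hi

lo, hi = bootstrap_ci([0.5, 1.0, 0.25, 0.75, 0.5])
```

Resampling whole queries (rather than items) keeps each query's score intact, which matches how the mean metric is defined.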

Input formats

RankerEval allows for a variety of input formats, e.g.,

# specify all labels as lists
y_true = BinaryLabels.from_dense([[1, 0, 1], [1, 1, 1]])

# specify labels as a numpy array
import numpy as np
y_true = BinaryLabels.from_dense(np.asarray([[1, 0, 1], [1, 1, 1]]))

# or use a sparse matrix
import scipy.sparse as sp
y_true = BinaryLabels.from_sparse(sp.coo_matrix([[1, 0, 1], [1, 1, 1]]))
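All three forms above encode the same 2×3 label matrix; a quick sanity check with NumPy and SciPy alone (independent of RankerEval's internals) makes the equivalence explicit:

```python
import numpy as np
import scipy.sparse as sp

labels = [[1, 0, 1], [1, 1, 1]]
dense = np.asarray(labels)
sparse = sp.coo_matrix(labels)

# the list, array, and sparse forms all describe the same matrix
assert np.array_equal(sparse.toarray(), dense)
```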


Installation
To install (requires NumPy 1.18 or newer):

pip install rankereval


License
This project is licensed under the MIT License.


Credits
RankerEval was written by Tobias Schnabel.


Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact the maintainers with any additional questions or comments.
