ranx: A Blazing-Fast Python Library for Ranking Evaluation, Comparison, and Fusion

⚡️ Introduction

ranx ([raŋks]) is a library of fast ranking evaluation metrics implemented in Python, leveraging Numba for high-speed vector operations and automatic parallelization. It offers a user-friendly interface to evaluate and compare Information Retrieval and Recommender Systems. ranx allows you to perform statistical tests and export LaTeX tables for your scientific publications. Moreover, ranx provides several fusion algorithms and normalization strategies, as well as automatic fusion optimization. ranx also has a companion repository of pre-computed runs, called ranxhub, to facilitate model comparisons. On ranxhub, you can download and share pre-computed runs for Information Retrieval datasets, such as MSMARCO Passage Ranking. ranx was featured in ECIR 2022, CIKM 2022, and SIGIR 2023.

If you use ranx to evaluate results or to conduct experiments involving fusion for your scientific publication, please consider citing it: evaluation BibTeX, fusion BibTeX, ranxhub BibTeX.

NB: ranx is not suited for evaluating classifiers. Please refer to the FAQ for further details.

For a quick overview, follow the Usage section.

For an in-depth overview, follow the Examples section.

✨ Features

Metrics

The metrics have been tested against TREC Eval for correctness.
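
As a minimal sketch, metrics are selected by name, optionally with an "@k" cutoff suffix; the metric names used below are common ones from the documentation, and the full list of supported names should be checked there:

from ranx import Qrels, Run, evaluate

# Toy data: one query with graded relevance judgments and a scored run.
qrels = Qrels({"q_1": {"d_1": 1, "d_2": 2}})
run = Run({"q_1": {"d_2": 0.9, "d_3": 0.8, "d_1": 0.7}})

# Metric names such as "precision", "recall", "mrr", "map", and "ndcg"
# accept an optional "@k" cutoff suffix.
print(evaluate(qrels, run, ["precision@2", "recall@2", "ndcg@3"]))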

Statistical Tests

Please refer to Smucker et al., Carterette, and Fuhr for additional information on statistical tests for Information Retrieval.
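
As a sketch of how a test is chosen (reusing the qrels and runs from the Usage section below; the stat_test parameter and the option names are taken from the documentation and should be verified there):

from ranx import compare

# Select the statistical test used for pairwise comparisons.
# "student" is the two-sided paired Student's t-test shown in the Usage
# section; "fisher" is Fisher's randomization test.
report = compare(
    qrels=qrels,
    runs=[run_1, run_2, run_3],
    metrics=["map@100", "ndcg@10"],
    stat_test="fisher",
    max_p=0.01,
)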

Off-the-shelf Qrels

You can load qrels from ir-datasets as easily as:

from ranx import Qrels

qrels = Qrels.from_ir_datasets("msmarco-document/dev")

A full list of the available qrels is provided here.

Off-the-shelf Runs

You can load runs from ranxhub as easily as:

from ranx import Run

run = Run.from_ranxhub("run-id")

A full list of the available runs is provided here.

Fusion Algorithms

ranx implements the following fusion algorithms:

CombMIN, CombMED, CombANZ, CombMAX, CombSUM, CombMNZ, CombGMNZ, ISR,
Log_ISR, LogN_ISR, RRF, RBC, WMNZ, Mixed, BayesFuse, MAPFuse, PosFuse,
ProbFuse, SegFuse, SlideFuse, BordaFuse, Weighted BordaFuse, Condorcet,
Weighted Condorcet, and Weighted Sum.

Please refer to the documentation for further details.
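
For illustration, a minimal sketch of fusing two toy runs with Reciprocal Rank Fusion; the lowercase method name "rrf" follows the naming convention used by fuse and should be checked against the documentation:

from ranx import Run, fuse

# Toy runs over the same query.
run_a = Run({"q_1": {"d_1": 0.9, "d_2": 0.8, "d_3": 0.1}})
run_b = Run({"q_1": {"d_2": 0.7, "d_3": 0.6, "d_4": 0.5}})

# Reciprocal Rank Fusion combines the runs using document ranks only,
# so score normalization is not a concern here.
combined = fuse(runs=[run_a, run_b], method="rrf")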

Normalization Strategies

Please refer to the documentation for further details.
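
As a sketch, the normalization strategy is selected through the norm parameter of fuse and optimize_fusion; the option names listed in the comment below are taken from the documentation and should be verified there:

from ranx import Run, fuse

run_a = Run({"q_1": {"d_1": 12.0, "d_2": 7.5}})
run_b = Run({"q_1": {"d_2": 0.9, "d_3": 0.4}})

# The norm parameter controls how each run's scores are rescaled before
# fusion; documented options include "min-max", "max", "sum", "zmuv",
# "rank", and "borda".
combined = fuse(runs=[run_a, run_b], norm="zmuv", method="sum")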

🔌 Requirements

python>=3.8

As of v0.3.5, ranx requires python>=3.8.

💾 Installation

pip install ranx

💡 Usage

Create Qrels and Run

from ranx import Qrels, Run

qrels_dict = { "q_1": { "d_12": 5, "d_25": 3 },
               "q_2": { "d_11": 6, "d_22": 1 } }

run_dict = { "q_1": { "d_12": 0.9, "d_23": 0.8, "d_25": 0.7,
                      "d_36": 0.6, "d_32": 0.5, "d_35": 0.4  },
             "q_2": { "d_12": 0.9, "d_11": 0.8, "d_25": 0.7,
                      "d_36": 0.6, "d_22": 0.5, "d_35": 0.4  } }

qrels = Qrels(qrels_dict)
run = Run(run_dict)
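
Qrels and runs can also be read from files; the sketch below assumes TREC-style files and the from_file constructors described in the documentation (the paths are placeholders):

# Load from TREC-formatted files (paths are placeholders).
qrels = Qrels.from_file("path/to/qrels.trec", kind="trec")
run = Run.from_file("path/to/run.trec", kind="trec")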

Evaluate

from ranx import evaluate

# Compute score for a single metric
evaluate(qrels, run, "ndcg@5")
>>> 0.7861

# Compute scores for multiple metrics at once
evaluate(qrels, run, ["map@5", "mrr"])
>>> {"map@5": 0.6416, "mrr": 0.75}

Compare

from ranx import compare

# Compare different runs and perform a two-sided paired Student's t-test
report = compare(
    qrels=qrels,
    runs=[run_1, run_2, run_3, run_4, run_5],
    metrics=["map@100", "mrr@100", "ndcg@10"],
    max_p=0.01  # P-value threshold
)

Output:

print(report)
#    Model    MAP@100    MRR@100    NDCG@10
---  -------  --------   --------   ---------
a    model_1  0.320ᵇ     0.320ᵇ     0.368ᵇᶜ
b    model_2  0.233      0.234      0.239
c    model_3  0.308ᵇ     0.309ᵇ     0.330ᵇ
d    model_4  0.366ᵃᵇᶜ   0.367ᵃᵇᶜ   0.408ᵃᵇᶜ
e    model_5  0.405ᵃᵇᶜᵈ  0.406ᵃᵇᶜᵈ  0.451ᵃᵇᶜᵈ
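
The report can also be exported for publications; the sketch below uses to_latex, which the documentation describes as producing a ready-to-use LaTeX table:

# Export the comparison as a LaTeX table for your paper.
print(report.to_latex())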

Fusion

from ranx import fuse, optimize_fusion

best_params = optimize_fusion(
    qrels=train_qrels,
    runs=[train_run_1, train_run_2, train_run_3],
    norm="min-max",     # The norm. to apply before fusion
    method="wsum",      # The fusion algorithm to use (Weighted Sum)
    metric="ndcg@100",  # The metric to maximize
)

combined_test_run = fuse(
    runs=[test_run_1, test_run_2, test_run_3],  
    norm="min-max",       
    method="wsum",        
    params=best_params,
)
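
A natural follow-up, sketched here under the assumption that test_qrels holds the qrels matching the test runs, is to evaluate the fused run with the metric that was optimized:

from ranx import evaluate

# Evaluate the fused run on the held-out test qrels (test_qrels is assumed
# to be defined analogously to train_qrels above).
evaluate(test_qrels, combined_test_run, "ndcg@100")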

📖 Examples

The following example notebooks are available; each can be opened in Google Colab:

  • Overview
  • Qrels and Run
  • Evaluation
  • Comparison and Report
  • Fusion
  • Plot
  • Share your runs with ranxhub

📚 Documentation

Browse the documentation for more details and examples.

🎓 Citation

If you use ranx to evaluate results for your scientific publication, please consider citing our ECIR 2022 paper:

BibTeX
@inproceedings{ranx,
  author       = {Elias Bassani},
  title        = {ranx: {A} Blazing-Fast Python Library for Ranking Evaluation and Comparison},
  booktitle    = {{ECIR} {(2)}},
  series       = {Lecture Notes in Computer Science},
  volume       = {13186},
  pages        = {259--264},
  publisher    = {Springer},
  year         = {2022},
  doi          = {10.1007/978-3-030-99739-7\_30}
}

If you use the fusion functionalities provided by ranx for the experiments of your scientific publication, please consider citing our CIKM 2022 paper:

BibTeX
@inproceedings{ranx.fuse,
  author    = {Elias Bassani and
              Luca Romelli},
  title     = {ranx.fuse: {A} Python Library for Metasearch},
  booktitle = {{CIKM}},
  pages     = {4808--4812},
  publisher = {{ACM}},
  year      = {2022},
  doi       = {10.1145/3511808.3557207}
}

If you use pre-computed runs from ranxhub to make comparisons for your scientific publication, please consider citing our SIGIR 2023 paper:

BibTeX
@inproceedings{ranxhub,
  author       = {Elias Bassani},
  title        = {ranxhub: An Online Repository for Information Retrieval Runs},
  booktitle    = {{SIGIR}},
  pages        = {3210--3214},
  publisher    = {{ACM}},
  year         = {2023},
  doi          = {10.1145/3539618.3591823}
}

🎁 Feature Requests

Would you like to see other features implemented? Please open a feature request.

🤘 Want to contribute?

Would you like to contribute? Please drop me an e-mail.

📄 License

ranx is open-source software licensed under the MIT license.

