ranx: A Blazing Fast Python Library for Ranking Evaluation and Comparison



🔥 News

  • [June 9, 2022] Added support for 24 fusion algorithms, six normalization strategies, and an automatic fusion optimization functionality in v.0.2.
    Check out the official documentation for further details on fusion and normalization.
  • [May 18, 2022] Added support for loading qrels from ir-datasets in v.0.1.13.
    Usage example: Qrels.from_ir_datasets("msmarco-document/dev") for MS MARCO document retrieval dev set.
  • [May 4, 2022] Added Paired Student's t-Test in v.0.1.12.

⚡️ Introduction

ranx is a library of fast ranking evaluation metrics implemented in Python, leveraging Numba for high-speed vector operations and automatic parallelization. It offers a user-friendly interface to evaluate and compare Information Retrieval and Recommender Systems. ranx allows you to perform statistical tests and export LaTeX tables for your scientific publications. Moreover, ranx provides several fusion algorithms and normalization strategies, as well as automatic fusion optimization. ranx was featured in ECIR 2022, the 44th European Conference on Information Retrieval.

If you use ranx to evaluate results or conduct experiments involving fusion for your scientific publication, please consider citing it.

For a quick overview, follow the Usage section.

For an in-depth overview, follow the Examples section.

✨ Features

Metrics

The metrics have been tested against TREC Eval for correctness.

Statistical Tests

Please refer to Smucker et al. for additional information on statistical tests for Information Retrieval.
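
As a quick sketch of how a test might be selected, the snippet below assumes compare accepts a stat_test argument (with "student" referring to the paired Student's t-Test mentioned in the news above); the exact parameter name and accepted values should be checked in the documentation. The toy data is for illustration only.

from ranx import Qrels, Run, compare

# Toy data for illustration only; statistical tests are only meaningful
# with a realistic number of queries.
qrels = Qrels({"q_1": {"d_12": 5, "d_25": 3},
               "q_2": {"d_11": 6, "d_22": 1}})
run_a = Run({"q_1": {"d_12": 0.9, "d_23": 0.8},
             "q_2": {"d_11": 0.9, "d_25": 0.8}}, name="run_a")
run_b = Run({"q_1": {"d_25": 0.9, "d_12": 0.8},
             "q_2": {"d_22": 0.9, "d_11": 0.8}}, name="run_b")

# stat_test is assumed to select the significance test ("student" for the
# paired Student's t-Test); see the documentation for the exact signature.
report = compare(
    qrels=qrels,
    runs=[run_a, run_b],
    metrics=["ndcg@5"],
    stat_test="student",
    max_p=0.05,
)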

Off-the-shelf Qrels

You can load qrels from ir-datasets as simply as:

qrels = Qrels.from_ir_datasets("msmarco-document/dev")

A full list of the available qrels is provided in the documentation.

Fusion Algorithms

CombMIN   CombGMNZ   PosFuse     BordaFuse
CombMED   ISR        ProbFuse    Weighted BordaFuse
CombANZ   Log_ISR    SegFuse     Condorcet
CombMAX   LogN_ISR   SlideFuse   Weighted Condorcet
CombSUM   RRF        MAPFuse     Mixed
CombMNZ   WMNZ       BayesFuse   Weighted Sum

Please refer to the documentation for further details.
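
As a minimal sketch of applying one of the methods above: Reciprocal Rank Fusion (RRF) is rank-based, so it needs no fitted weights. The snippet assumes "rrf" is the corresponding identifier for the method argument of fuse (shown in the Usage section below); check the documentation for the exact signature.

from ranx import Run, fuse

# Toy runs for illustration only.
run_1 = Run({"q_1": {"d_1": 0.9, "d_2": 0.8, "d_3": 0.1}}, name="bm25")
run_2 = Run({"q_1": {"d_3": 0.7, "d_1": 0.6, "d_2": 0.2}}, name="dense")

# RRF works on ranks, so no normalization parameters need to be learned.
combined_run = fuse(runs=[run_1, run_2], method="rrf")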

Normalization Strategies

Please refer to the documentation for further details.

🔌 Installation

pip install ranx

💡 Usage

Create Qrels and Run

from ranx import Qrels, Run

# Relevance judgements: query id -> {document id: graded relevance}
qrels_dict = { "q_1": { "d_12": 5, "d_25": 3 },
               "q_2": { "d_11": 6, "d_22": 1 } }

# System results: query id -> {document id: retrieval score}
run_dict = { "q_1": { "d_12": 0.9, "d_23": 0.8, "d_25": 0.7,
                      "d_36": 0.6, "d_32": 0.5, "d_35": 0.4  },
             "q_2": { "d_12": 0.9, "d_11": 0.8, "d_25": 0.7,
                      "d_36": 0.6, "d_22": 0.5, "d_35": 0.4  } }

qrels = Qrels(qrels_dict)
run = Run(run_dict)

Evaluate

from ranx import evaluate

# Compute score for a single metric
evaluate(qrels, run, "ndcg@5")
>>> 0.7861

# Compute scores for multiple metrics at once
evaluate(qrels, run, ["map@5", "mrr"])
>>> {"map@5": 0.6416, "mrr": 0.75}
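
Per-query scores can also be obtained instead of the averaged value. The line below continues the snippet above and assumes a return_mean flag; check the documentation for the exact signature.

# Per-query NDCG@5 instead of the mean over queries
# (return_mean is assumed here; see the documentation).
evaluate(qrels, run, "ndcg@5", return_mean=False)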

Compare

from ranx import compare

# Compare different runs and perform statistical tests
report = compare(
    qrels=qrels,
    runs=[run_1, run_2, run_3, run_4, run_5],
    metrics=["map@100", "mrr@100", "ndcg@10"],
    max_p=0.01  # P-value threshold
)

Output:

print(report)
#    Model    MAP@100    MRR@100    NDCG@10
---  -------  --------   --------   ---------
a    model_1  0.320ᵇ     0.320ᵇ     0.368ᵇᶜ
b    model_2  0.233      0.234      0.239
c    model_3  0.308ᵇ     0.309ᵇ     0.330ᵇ
d    model_4  0.366ᵃᵇᶜ   0.367ᵃᵇᶜ   0.408ᵃᵇᶜ
e    model_5  0.405ᵃᵇᶜᵈ  0.406ᵃᵇᶜᵈ  0.451ᵃᵇᶜᵈ
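
As mentioned in the introduction, the comparison can also be exported as a LaTeX table for a publication. This sketch assumes the report object exposes a to_latex method; see the documentation for the exact name.

# LaTeX export of the comparison above
# (to_latex() is assumed here; check the Report documentation).
print(report.to_latex())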

Fusion

from ranx import fuse, optimize_fusion

best_params = optimize_fusion(
    qrels=train_qrels,
    runs=[train_run_1, train_run_2, train_run_3],
    norm="min-max",     # The norm. to apply before fusion
    method="wsum",      # The fusion algorithm to use (Weighted Sum)
    metric="ndcg@100",  # The metric to maximize
)

combined_test_run = fuse(
    runs=[test_run_1, test_run_2, test_run_3],  
    norm="min-max",       
    method="wsum",        
    params=best_params,
)
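
The fused run can then be evaluated like any other run. In the sketch below, test_qrels is a hypothetical Qrels for the test split, mirroring train_qrels above.

from ranx import evaluate

# test_qrels: hypothetical Qrels for the test split, analogous to train_qrels above
evaluate(test_qrels, combined_test_run, "ndcg@100")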

📖 Examples

Name                    Link
Overview                Open In Colab
Qrels and Run           Open In Colab
Evaluation              Open In Colab
Comparison and Report   Open In Colab
Fusion                  Open In Colab

📚 Documentation

Browse the documentation for more details and examples.

🎓 Citation

If you use ranx to evaluate results for your scientific publication, please consider citing it:

@inproceedings{bassani2022ranx,
  author    = {Elias Bassani},
  title     = {ranx: {A} Blazing-Fast Python Library for Ranking Evaluation and Comparison},
  booktitle = {{ECIR} {(2)}},
  series    = {Lecture Notes in Computer Science},
  volume    = {13186},
  pages     = {259--264},
  publisher = {Springer},
  year      = {2022}
}

🎁 Feature Requests

Would you like to see other features implemented? Please open a feature request.

🤘 Want to contribute?

Would you like to contribute? Please drop me an e-mail.

📄 License

ranx is open-source software licensed under the MIT license.
