
Information retrieval evaluation metrics in pure python with zero dependencies


Information Retrieval Evaluation


This project provides simple and tested pure python implementations of popular information retrieval metrics without any library dependencies (not even numpy!). The source code is clear and easy to understand. All functions have pydoc help strings.

The metrics can be used to determine the quality of rankings that are returned by a retrieval or recommender system.

Installation

Requires: Python >=3.11

ir_evaluation can be installed from PyPI with:

pip install ir_evaluation

Usage

Metric functions will generally accept the following arguments:

actual (list[int]): An array of ground truth relevant items.

predicted (list[int]): An array of predicted items, ordered by relevance.

k (int): The number of top predictions to consider.

Each function returns the computed metric value as a float.
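
For example, to compute recall at k=3 (sample data chosen for illustration; the argument order follows the list above):

from ir_evaluation.metrics import recall

actual = [1, 2, 3]        # ground truth relevant items
predicted = [1, 4, 2, 5]  # predicted items, ordered by relevance
print(recall(actual, predicted, 3))  # 2 of the 3 relevant items are in the top 3 -> 0.666...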

Unit tests

Unit tests with easy-to-follow scenarios and sample data are included.

Run unit tests

uv run pytest

Metrics

Recall

Recall is defined as the ratio of the total number of relevant items retrieved within the top-k predictions to the total number of relevant items in the entire database.

Usage scenario: Prioritize returning all relevant items from the database. Early retrieval stages, where many candidates are returned, should focus on this metric.

from ir_evaluation.metrics import recall
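
A worked example of this definition, computed by hand with illustrative data (the library function should give the same value for these arguments):

actual = [1, 2, 3]          # relevant items in the database
predicted = [1, 4, 2, 5]    # ranked predictions
k = 3
hits = len(set(actual) & set(predicted[:k]))  # relevant items found in the top k
print(hits / len(actual))                     # 2 / 3 ≈ 0.667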

Precision

Precision is defined as the ratio of the total number of relevant items retrieved within the top-k predictions to the total number of returned items (k).

Usage scenario: Minimize false positives in predictions. Later ranking stages should focus on this metric.

from ir_evaluation.metrics import precision
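
A hand-computed example of the definition, using the same illustrative data as above:

actual = [1, 2, 3]
predicted = [1, 4, 2, 5]
k = 4
hits = len(set(actual) & set(predicted[:k]))  # relevant items found in the top k
print(hits / k)                               # 2 relevant among the 4 returned -> 0.5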

F1 Score

The F1-score is calculated as the harmonic mean of precision and recall. The F1-score provides a balanced view of a system's performance by taking into account both precision and recall.

Usage scenario: Use when finding all relevant documents is just as important as minimizing irrelevant ones (e.g., in information retrieval).

from ir_evaluation.metrics import f1_score
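
A quick sketch of the calculation, reusing the precision and recall values from the examples above:

p, r = 0.5, 2 / 3           # precision and recall from the previous examples
print(2 * p * r / (p + r))  # harmonic mean ≈ 0.571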

Average Precision (AP)

Average Precision is calculated as the mean of precision values at each rank where a relevant item is retrieved within the top k predictions.

Usage scenario: Evaluates how well relevant items are ranked within the top-k returned list.

from ir_evaluation.metrics import average_precision
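
A worked example following the definition above (illustrative data; precision is recorded at each rank that holds a relevant item, then averaged):

actual = [1, 2, 3]
predicted = [1, 4, 2, 5]
k = 4
hits, precisions = 0, []
for rank, item in enumerate(predicted[:k], start=1):
    if item in actual:
        hits += 1
        precisions.append(hits / rank)    # precision at this rank
print(sum(precisions) / len(precisions))  # (1/1 + 2/3) / 2 ≈ 0.833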

Mean Average Precision (MAP)

MAP is the mean of the Average Precision (AP - see above) scores computed for multiple queries.

Usage scenario: Reflects overall performance of AP for multiple queries. A good holistic metric that balances the tradeoff between recall and precision.

from ir_evaluation.metrics import mean_average_precision
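
A minimal sketch: given per-query AP scores (illustrative values), MAP is simply their mean.

ap_scores = [0.833, 1.0, 0.5]           # AP for three queries
print(sum(ap_scores) / len(ap_scores))  # ≈ 0.778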

Normalized Discounted Cumulative Gain (nDCG)

nDCG evaluates the quality of a predicted ranking by comparing it to an ideal ranking (i.e., perfect ordering of relevant items). It accounts for the position of relevant items in the ranking, giving higher weight to items appearing earlier.

Usage scenario: Prioritize returning relevant items higher in the returned top-k list. A good holistic metric.

from ir_evaluation.metrics import ndcg
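
A hand-computed example using binary gains and a logarithmic position discount, which is a common formulation (the package's exact weighting may differ):

import math

actual = [1, 2, 3]
predicted = [1, 4, 2, 5]
k = 4
dcg = sum(1 / math.log2(rank + 1)
          for rank, item in enumerate(predicted[:k], start=1)
          if item in actual)                               # observed ranking
idcg = sum(1 / math.log2(rank + 1)
           for rank in range(1, min(len(actual), k) + 1))  # ideal ranking
print(dcg / idcg)                                          # ≈ 0.70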

Reciprocal Rank (RR)

Reciprocal Rank (RR) assigns a score based on the reciprocal of the rank at which the first relevant item is found.

Usage scenario: Useful when the topmost recommendation holds significant value. Use this when users are presented with one or very few returned results.

from ir_evaluation.metrics import reciprocal_rank
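
A worked example of the definition (illustrative data; the first relevant item appears at rank 3):

actual = [1, 2, 3]
predicted = [4, 5, 2, 6]
for rank, item in enumerate(predicted, start=1):
    if item in actual:
        print(1 / rank)   # first hit at rank 3 -> 1/3 ≈ 0.333
        break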

Mean Reciprocal Rank (MRR)

MRR calculates the mean of the Reciprocal Rank (RR) scores for a set of queries.

Usage scenario: Reflects overall performance of RR for multiple queries.

from ir_evaluation.metrics import mean_reciprocal_rank
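
A minimal sketch: given per-query RR scores (illustrative values), MRR is their mean.

rr_scores = [1.0, 0.333, 0.5]           # RR for three queries
print(sum(rr_scores) / len(rr_scores))  # ≈ 0.611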

Online Resources

Pinecone - Evaluation Measures in Information Retrieval

Spot Intelligence - Mean Average Precision

Spot Intelligence - Mean Reciprocal Rank

google-research/ials

