
Elasticsearch IR Evaluator

Overview

elasticsearch-ir-evaluator is a Python package designed to make it easy to calculate a range of information retrieval (IR) accuracy metrics using Elasticsearch and question-answer datasets. It is aimed at users who need to assess the effectiveness of search queries in Elasticsearch. It supports the following key IR metrics:

  • Precision
  • Recall
  • Mean Reciprocal Rank (MRR)
  • Mean Average Precision (MAP)
  • Cumulative Gain (CG)
  • Normalized Discounted Cumulative Gain (nDCG)
  • False Positive Rate (FPR)
  • Binary Preference (BPref)

Together, these metrics provide a comprehensive assessment of search performance, covering the main aspects of IR system evaluation; you can select whichever subset matches your evaluation needs.
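
For intuition, here is a small standalone sketch of three of these metrics. It is independent of this package and only illustrates the underlying arithmetic:

# Toy example: one query's ranked results against its set of relevant documents.
ranked_ids = ["doc3", "doc1", "doc7", "doc2"]  # search results, best first
relevant_ids = {"doc1", "doc2", "doc5"}        # ground-truth relevant documents

hits = [doc_id in relevant_ids for doc_id in ranked_ids]

precision = sum(hits) / len(ranked_ids)   # 2 / 4 = 0.5
recall = sum(hits) / len(relevant_ids)    # 2 / 3 ≈ 0.67

# Reciprocal rank: 1 / rank of the first relevant result (0 if none is found).
# MRR is this value averaged over all queries.
rr = next((1 / (i + 1) for i, hit in enumerate(hits) if hit), 0.0)  # 1 / 2 = 0.5

print(precision, recall, rr)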

Installation

To install elasticsearch-ir-evaluator, use pip:

pip install elasticsearch-ir-evaluator

Prerequisites

  • Elasticsearch version 8.11 or higher running on your system.
  • Python 3.8 or higher.

Complete Usage Process

The following steps will guide you through using elasticsearch-ir-evaluator to calculate search accuracy metrics. For more detailed and practical examples, please refer to the examples directory in this repository.

Step 1: Set Up Elasticsearch Client

Configure your Elasticsearch client with the appropriate credentials:

from elasticsearch import Elasticsearch

es_client = Elasticsearch(
    hosts="https://your-elasticsearch-host",
    basic_auth=("your-username", "your-password"),
    verify_certs=True,
    ssl_show_warn=True,
)
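
Optionally, you can confirm that the client can reach the cluster before indexing:

# Quick sanity check; prints cluster info or raises if the cluster is unreachable
print(es_client.info())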

Step 2: Create and Index the Corpus

Create and index a new corpus. You can customize index settings and text field configurations, including analyzers:

from elasticsearch_ir_evaluator import ElasticsearchIrEvaluator, Document

# Initialize the ElasticsearchIrEvaluator
evaluator = ElasticsearchIrEvaluator(es_client)

# Specify your documents
documents = [
    Document(id="doc1", title="Title 1", text="Text of document 1"),
    Document(id="doc2", title="Title 2", text="Text of document 2"),
    # ... more documents
]

# Set custom index text field configurations
text_field_config = {"analyzer": "standard"}

evaluator.set_text_field_config(text_field_config)

# Create a new index or set an existing one
evaluator.set_index_name("your_index_name")

# Index documents with an optional ingest pipeline
evaluator.index(documents, pipeline="your_optional_pipeline")

Step 3: Set a Custom Search Template

Customize the search query template sent to Elasticsearch. The {{question}} placeholder is replaced with the question text, and {{vector}} with the vector value from each QandA:

search_template = {
    "query": {
        "multi_match": {
            "query": "{{question}}",
            "fields": ["title", "text"],
        }
    },
    "knn": [
        {
            "field": "vector",
            "query_vector": "{{vector}}",
            "k": 5,
            "num_candidates": 100,
        }
    ],
}

evaluator.set_search_template(search_template)
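
If you are evaluating lexical (BM25) search only, the knn clause can simply be omitted; a minimal variant of the template above looks like this:

# Lexical-only variant: BM25 over the title and text fields
lexical_template = {
    "query": {
        "multi_match": {
            "query": "{{question}}",
            "fields": ["title", "text"],
        }
    }
}

evaluator.set_search_template(lexical_template)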

Step 4: Calculate Accuracy Metrics

Use .calculate() to compute all possible metrics based on the structure of the provided dataset:

from elasticsearch_ir_evaluator import QandA

# Load QA pairs for evaluation
qa_pairs = [
    QandA(question="What is Elasticsearch?", answers=["doc1"]),
    # ... more QA pairs
]

# Calculate all metrics
results = evaluator.calculate(qa_pairs)

# Output results as a Markdown table
print(results.to_markdown())

This step runs a full evaluation of search performance over the provided question-answer pairs; .calculate() computes every metric that can be derived from the structure of the dataset.

Progress Logging

elasticsearch-ir-evaluator supports progress logging so that long-running indexing and evaluation tasks can be safely interrupted and resumed. This is particularly useful when indexing large datasets or running extensive search evaluations, where the process can take an extended period.

Log File

When an indexing or evaluation process starts, the tool automatically creates a log file named elasticsearch-ir-evaluator-log.json in the current working directory. This log file records progress through the following fields (see the sketch after this list):

  • last_processed_id: The ID of the last document that was successfully indexed or queried. This ensures that the process can resume from the exact point it was interrupted.
  • processed_count: The total number of documents that have been processed so far, providing a quick insight into the progress.
  • index_name: The name of the Elasticsearch index being used, allowing the process to resume with the correct index context.
  • last_checkpoint_timestamp: A timestamp marking the last update to the log file, offering a reference to when the process was last active.
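
As a rough sketch, the checkpoint file can be inspected like any other JSON file. The field names below follow the list above; the values are purely illustrative:

import json
from pathlib import Path

log_path = Path("elasticsearch-ir-evaluator-log.json")
if log_path.exists():
    progress = json.loads(log_path.read_text())
    # e.g. {"last_processed_id": "doc42", "processed_count": 43,
    #       "index_name": "your_index_name",
    #       "last_checkpoint_timestamp": "2024-01-01T12:00:00+00:00"}
    print(f"Resuming after {progress['last_processed_id']} "
          f"({progress['processed_count']} documents processed)")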

Resuming Operations

Upon restart, elasticsearch-ir-evaluator automatically detects elasticsearch-ir-evaluator-log.json and uses the information within it to resume from where the process left off. This ensures that no document is processed twice and none is skipped, streamlining the continuation of interrupted tasks.

Ensuring Data Integrity

This logging feature is designed with data integrity in mind. By recording progress and resuming from it, elasticsearch-ir-evaluator minimizes the risk of incomplete indexing or evaluations, keeping IR metrics accurate and indexed datasets complete.

License

elasticsearch-ir-evaluator is available under the MIT License.
