Elasticsearch IR Evaluator
Overview
`elasticsearch-ir-evaluator` is a Python package for easily calculating a range of information retrieval (IR) accuracy metrics using Elasticsearch and datasets. It is ideal for anyone who needs to assess the effectiveness of search queries in Elasticsearch, and supports the following key IR metrics:
- Precision
- Recall
- Mean Reciprocal Rank (MRR)
- Mean Average Precision (MAP)
- Cumulative Gain (CG)
- Normalized Discounted Cumulative Gain (nDCG)
- False Positive Rate (FPR)
- Binary Preference (BPref)
These metrics provide a comprehensive assessment of search performance, catering to various aspects of IR system evaluation. The tool's flexibility allows users to select specific metrics according to their evaluation needs.
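As a rough illustration of what the rank-based metrics measure, three of them can be computed by hand from a single ranked result list (a minimal sketch, independent of this package; the document IDs below are made-up examples):

```python
# Hand-computed Precision, Recall, and reciprocal rank for one query.
# The IDs are hypothetical; MRR is the mean of the reciprocal rank over queries.
ranked = ["doc3", "doc1", "doc7", "doc2"]  # result IDs, best first
relevant = {"doc1", "doc2"}                # IDs judged relevant for this query

# Precision: fraction of returned documents that are relevant
precision = len([d for d in ranked if d in relevant]) / len(ranked)

# Recall: fraction of relevant documents that were returned
recall = len([d for d in ranked if d in relevant]) / len(relevant)

# Reciprocal rank: 1 / rank of the first relevant hit
rr = next((1 / (i + 1) for i, d in enumerate(ranked) if d in relevant), 0.0)

print(precision, recall, rr)  # 0.5 1.0 0.5
```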
Installation
To install `elasticsearch-ir-evaluator`, use pip:

```shell
pip install elasticsearch-ir-evaluator
```
Prerequisites
- Elasticsearch version 8.11 or higher running on your system.
- Python 3.8 or higher.
Complete Usage Process
The following steps will guide you through using `elasticsearch-ir-evaluator` to calculate search accuracy metrics.
For more detailed and practical examples, please refer to the examples directory in this repository.
Step 1: Set Up Elasticsearch Client
Configure your Elasticsearch client with the appropriate credentials:
```python
from elasticsearch import Elasticsearch

es_client = Elasticsearch(
    hosts="https://your-elasticsearch-host",
    basic_auth=("your-username", "your-password"),
    verify_certs=True,
    ssl_show_warn=True,
)
```
Step 2: Create and Index the Corpus
Create and index a new corpus. You can customize index settings and text field configurations, including analyzers:
```python
from elasticsearch_ir_evaluator import ElasticsearchIrEvaluator, Document

# Initialize the ElasticsearchIrEvaluator
evaluator = ElasticsearchIrEvaluator(es_client)

# Specify your documents
documents = [
    Document(id="doc1", title="Title 1", text="Text of document 1"),
    Document(id="doc2", title="Title 2", text="Text of document 2"),
    # ... more documents
]

# Set custom index settings and text field configurations
index_settings = {"number_of_shards": 1, "number_of_replicas": 0}
text_field_config = {"analyzer": "standard"}
evaluator.set_index_settings(index_settings)
evaluator.set_text_field_config(text_field_config)

# Create a new index or set an existing one
evaluator.set_index_name("your_index_name")

# Index documents with an optional ingest pipeline
evaluator.index(documents, pipeline="your_optional_pipeline")
```
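If your evaluation needs a custom analyzer rather than `standard`, ordinary Elasticsearch analysis settings apply. A hedged sketch, following the same shape as the settings above (the analyzer name `my_english` and its filter chain are illustrative assumptions, not part of this package's API):

```python
# Hypothetical index settings carrying a custom Elasticsearch analyzer.
# "my_english" is an assumed name; "lowercase" and "porter_stem" are
# standard Elasticsearch token filters.
index_settings = {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "analysis": {
        "analyzer": {
            "my_english": {
                "type": "custom",
                "tokenizer": "standard",
                "filter": ["lowercase", "porter_stem"],
            }
        }
    },
}
text_field_config = {"analyzer": "my_english"}
```

These would then be passed to `evaluator.set_index_settings(...)` and `evaluator.set_text_field_config(...)` exactly as in the snippet above.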
Step 3: Set a Custom Search Template
Customize the search query template for Elasticsearch. Use `{{question}}` for the question text and `{{vector}}` for the vector value in `QandA`:

```python
search_template = {
    "match": {
        "text": "{{question}}"
    }
}
evaluator.set_search_template(search_template)
```
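For vector-based evaluation, the `{{vector}}` placeholder goes wherever the query expects the question's vector. A hedged sketch using an Elasticsearch `script_score` query, which has the same query-clause shape as the `match` template above (the `dense_vector` field name `vector` is an assumption about your index, not a requirement of this package):

```python
# Hypothetical vector search template; 'vector' is an assumed
# dense_vector field name in the index being evaluated.
vector_template = {
    "script_score": {
        "query": {"match_all": {}},
        "script": {
            "source": "cosineSimilarity(params.query_vector, 'vector') + 1.0",
            "params": {"query_vector": "{{vector}}"},  # filled per QandA
        },
    }
}
# Passed to the evaluator the same way as the match template:
# evaluator.set_search_template(vector_template)
```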
Step 4: Calculate Accuracy Metrics
Use `.calculate()` to compute all possible metrics based on the structure of the provided dataset:

```python
from elasticsearch_ir_evaluator import QandA

# Load QA pairs for evaluation
qa_pairs = [
    QandA(question="What is Elasticsearch?", answers=["doc1"]),
    # ... more QA pairs
]

# Calculate all metrics
results = evaluator.calculate(qa_pairs)

# Output results
print(results.model_dump_json(indent=4))
```

This step runs a comprehensive evaluation of search performance using the provided question-answer pairs; the `.calculate()` method computes every metric that can be derived from the dataset's structure.
License
`elasticsearch-ir-evaluator` is available under the MIT License.