This package provides standard and classifier-based short form QA evaluation methods
QA-Evaluation-Metrics
QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models. It provides several basic metrics for assessing QA model performance. Check out our paper on PANDA, an efficient QA evaluation method whose accuracy is competitive with transformer-based LLM evaluators.
Installation
To install the package, run the following command:
pip install qa-metrics
Usage
The Python package currently provides four QA evaluation metrics.
Exact Match
from qa_metrics.em import em_match
reference_answer = ["The Frog Prince", "The Princess and the Frog"]
candidate_answer = "The movie \"The Princess and the Frog\" is loosely based off the Brother Grimm's \"Iron Henry\""
match_result = em_match(reference_answer, candidate_answer)
print("Exact Match: ", match_result)
'''
Exact Match: False
'''
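Exact match typically compares normalized answer strings rather than raw text. Below is a minimal, self-contained sketch of SQuAD-style normalization (lowercasing, stripping punctuation and articles); the package's actual normalization rules may differ:

```python
import re
import string

def normalize(text: str) -> str:
    # Lowercase, drop punctuation, remove articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(references, candidate) -> bool:
    # True if the candidate matches any reference after normalization.
    return any(normalize(r) == normalize(candidate) for r in references)
```

This explains why the example above returns False: the candidate sentence contains the reference answer but is not equal to it, and exact match requires full-string equality.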
F1 Score
from qa_metrics.f1 import f1_match, f1_score_with_precision_recall
f1_stats = f1_score_with_precision_recall(reference_answer[0], candidate_answer)
print("F1 stats: ", f1_stats)
'''
F1 stats: {'f1': 0.25, 'precision': 0.6666666666666666, 'recall': 0.15384615384615385}
'''
match_result = f1_match(reference_answer, candidate_answer, threshold=0.5)
print("F1 Match: ", match_result)
'''
F1 Match: False
'''
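Token-level F1 treats the reference and candidate answers as bags of tokens: precision is the fraction of candidate tokens that appear in the reference, recall the fraction of reference tokens covered by the candidate. A self-contained sketch using plain whitespace tokenization (the package's own tokenizer and normalization may differ, so its numbers will not match this sketch exactly):

```python
from collections import Counter

def token_f1(reference: str, candidate: str) -> dict:
    # Lowercase and split on whitespace; count overlapping tokens
    # as a multiset intersection.
    ref_tokens = reference.lower().split()
    cand_tokens = candidate.lower().split()
    common = Counter(ref_tokens) & Counter(cand_tokens)
    num_common = sum(common.values())
    if num_common == 0:
        return {"f1": 0.0, "precision": 0.0, "recall": 0.0}
    precision = num_common / len(cand_tokens)
    recall = num_common / len(ref_tokens)
    f1 = 2 * precision * recall / (precision + recall)
    return {"f1": f1, "precision": precision, "recall": recall}
```

The low recall in the example above comes from the candidate being a long sentence: most of its tokens do not appear in the short reference answer.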
PANDA Match
from qa_metrics.pedant import PEDANT
question = "Which movie is loosely based off the Brother Grimm's Iron Henry?"
pedant = PEDANT()
scores = pedant.get_scores(reference_answer, candidate_answer, question)
max_pair, highest_scores = pedant.get_highest_score(reference_answer, candidate_answer, question)
match_result = pedant.evaluate(reference_answer, candidate_answer, question)
print("Max Pair: %s; Highest Score: %s" % (max_pair, highest_scores))
print("Score: %s; PANDA Match: %s" % (scores, match_result))
'''
Max Pair: ('the princess and the frog', 'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"'); Highest Score: 0.854451712151719
Score: {'the frog prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.7131625951317375}, 'the princess and the frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.854451712151719}}; PANDA Match: True
'''
print(pedant.get_score(reference_answer[0], candidate_answer, question))
'''
0.7122460127464126
'''
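The scores dictionary returned by get_scores maps each reference answer to per-candidate scores, and get_highest_score selects the best-scoring (reference, candidate) pair from it. The selection can be sketched as follows (highest_score_pair is a hypothetical helper written for illustration, not part of the package):

```python
def highest_score_pair(scores):
    """Return ((reference, candidate), score) for the highest-scoring pair.

    `scores` has the nested-dict shape shown above:
    {reference: {candidate: score, ...}, ...}
    """
    best_ref, best_cand, best_score = None, None, float("-inf")
    for ref, cand_scores in scores.items():
        for cand, score in cand_scores.items():
            if score > best_score:
                best_ref, best_cand, best_score = ref, cand, score
    return (best_ref, best_cand), best_score
```

In the example output above, "the princess and the frog" wins the pairing because its score (0.854) exceeds that of "the frog prince" (0.713).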
Transformer Match
Our fine-tuned BERT model is on 🤗 Hugging Face. Our package also supports downloading and matching directly. distilroberta, distilbert, roberta, and roberta-large are also supported now! 🔥🔥🔥
from qa_metrics.transformerMatcher import TransformerMatcher
question = "Which movie is loosely based off the Brother Grimm's Iron Henry?"
tm = TransformerMatcher("bert")
scores = tm.get_scores(reference_answer, candidate_answer, question)
match_result = tm.transformer_match(reference_answer, candidate_answer, question)
print("Score: %s; TM Match: %s" % (scores, match_result))
'''
Score: {'The Frog Prince': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.6934309}, 'The Princess and the Frog': {'The movie "The Princess and the Frog" is loosely based off the Brother Grimm\'s "Iron Henry"': 0.7400551}}; TM Match: True
'''
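All four metrics ultimately return a boolean match decision, so computing dataset-level accuracy looks the same whichever matcher you choose. A small illustrative sketch with a pluggable matcher callable (the loop below is written for this example and is not a package API):

```python
def dataset_accuracy(examples, matcher):
    """Fraction of examples the matcher judges correct.

    examples: iterable of (references, candidate) pairs.
    matcher: callable returning a truthy match decision, e.g. em_match,
             f1_match, or a lambda wrapping pedant.evaluate / tm.transformer_match
             that also supplies the question.
    """
    results = [bool(matcher(refs, cand)) for refs, cand in examples]
    return sum(results) / len(results) if results else 0.0
```

For the question-aware matchers, pass a closure such as `lambda refs, cand: pedant.evaluate(refs, cand, question)`.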
If you find this repo useful, please cite:
@misc{li2024panda,
title={PANDA (Pedantic ANswer-correctness Determination and Adjudication): Improving Automatic Evaluation for Question Answering and Text Generation},
author={Zongxia Li and Ishani Mondal and Yijun Liang and Huy Nghiem and Jordan Lee Boyd-Graber},
year={2024},
eprint={2402.11161},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Updates
- [01/24/24] 🔥 The full paper is uploaded and can be accessed here. The dataset is expanded and the leaderboard is updated.
- Our training dataset is adapted and augmented from Bulian et al. Our dataset repo includes the augmented training set and the QA evaluation test sets discussed in our paper.
- Our model now supports distilroberta and distilbert, smaller and faster matching models than BERT!
- Our model now supports roberta and roberta-large, larger and more robust matching models than BERT!
License
This project is licensed under the MIT License - see the LICENSE file for details.
Contact
For any additional questions or comments, please contact zli12321@umd.edu.