This package provides standard and classifier-based short-form QA evaluation methods.
QA-Evaluation-Metrics
QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models. It provides several basic metrics for assessing QA model performance. Check out our paper on CFMatch, a matching method that goes beyond token-level matching, is more efficient than LLM-based matching, and still achieves evaluation performance competitive with transformer LLM models.
Installation
To install the package, run the following command:
pip install qa-metrics
Usage
The Python package currently provides four QA evaluation metrics.
Exact Match
from qa_metrics.em import em_match
reference_answer = ["Charles , Prince of Wales"]
candidate_answer = "Prince Charles"
match_result = em_match(reference_answer, candidate_answer)
print("Exact Match: ", match_result)
F1 Score
from qa_metrics.f1 import f1_match, f1_score_with_precision_recall
f1_stats = f1_score_with_precision_recall(reference_answer[0], candidate_answer)
print("F1 stats: ", f1_stats)
match_result = f1_match(reference_answer, candidate_answer, threshold=0.5)
print("F1 Match: ", match_result)
CFMatch
from qa_metrics.cfm import CFMatcher
question = "who will take the throne after the queen dies"
cfm = CFMatcher()
scores = cfm.get_scores(reference_answer, candidate_answer, question)
match_result = cfm.cf_match(reference_answer, candidate_answer, question)
print("Score: %s; CF Match: %s" % (scores, match_result))
Transformer Match
Our fine-tuned BERT model is on 🤗 Huggingface. Our package also supports downloading the model and matching directly. distilroberta, distilbert, and roberta are also supported now! 🔥🔥🔥
from qa_metrics.transformerMatcher import TransformerMatcher
question = "who will take the throne after the queen dies"
tm = TransformerMatcher("bert")
scores = tm.get_scores(reference_answer, candidate_answer, question)
match_result = tm.transformer_match(reference_answer, candidate_answer, question)
print("Score: %s; bert Match: %s" % (scores, match_result))
If you find this repo helpful, please cite:
@misc{li2024cfmatch,
title={CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering},
author={Zongxia Li and Ishani Mondal and Yijun Liang and Huy Nghiem and Jordan Boyd-Graber},
year={2024},
eprint={2401.13170},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Updates
- [01/24/24] 🔥 The full paper is uploaded and can be accessed on arXiv (see the citation above). The dataset is expanded and the leaderboard is updated.
- Our training dataset is adapted and augmented from Bulian et al. Our dataset repo includes the augmented training set and the QA evaluation test sets discussed in our paper.
- Now our model supports distilroberta and distilbert, smaller and faster matching models than BERT!
License
This project is licensed under the MIT License - see the LICENSE file for details.
Contact
For any additional questions or comments, please contact zli12321@umd.edu.