Auto metrics for evaluating generated questions
How to use
Our code can calculate automatic QG metrics and evaluate those metrics on our benchmark. Please follow the steps below to do both.
Environment
Run pip install -r requirements.txt to install the required packages.
Calculate Automatic Metrics
- Prepare data. Use the data we provide at ../data/scores.xlsx, or use your own data, which should contain passages, answers, and references (a minimal loading sketch is given after this list).
- Calculate automatic metrics:
  - Download the models for the metrics (download links: coming soon).
  - Update the model paths inside the code. See QRelScore as an example.

    ```python
    # update the path of mlm_model and clm_model
    def corpus_qrel(preds, contexts, device='cuda'):
        assert len(contexts) == len(preds)
        mlm_model = 'model/bert-base-cased'
        clm_model = 'model/gpt2'
        scorer = QRelScore(mlm_model=mlm_model, clm_model=clm_model,
                           batch_size=16, nthreads=4, device=device)
        scores = scorer.compute_score_flatten(contexts, preds)
        return scores
    ```
  - Run python metrics.py to calculate the metrics you select by changing score_names in metrics.py. (data_path in each file should be changed to your own data path.)

    ```python
    # Run QRelScore and RQUGE based on our dataset
    import pandas as pd

    # load data
    data_path = '../data/scores.xlsx'
    save_path = './result/metric_result.xlsx'
    data = pd.read_excel(data_path)
    hypos = data['prediction'].tolist()
    refs_list = [data['reference'].tolist()]
    contexts = data['passage'].tolist()
    answers = data['answer'].tolist()
    # scores to use
    score_names = ['QRelScore', 'RQUGE']
    # run metrics
    res = get_metrics(hypos, refs_list, contexts, answers, score_names=score_names)
    # handle results
    for k, v in res.items():
        data[k] = v
    print(data.columns)
    # save results
    data.to_excel(save_path, index=False)
    ```
  - Alternatively, run the code file for a specific metric. For example, run python qrel.py to calculate QRelScore results.
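The sketch below illustrates the data layout referenced in the Prepare data step. It is a minimal, hypothetical example: the file name my_scores.xlsx and the example row are placeholders, and the column names (passage, answer, reference, prediction) are the ones read by the metrics.py snippet above.

```python
# Minimal sketch (not part of the package): build a toy spreadsheet with the
# columns that metrics.py reads, then check an existing file for those columns.
import pandas as pd

rows = [{
    'passage': 'Paris is the capital of France.',         # placeholder passage
    'answer': 'Paris',                                     # placeholder answer
    'reference': 'What is the capital of France?',         # placeholder reference question
    'prediction': 'Which city is the capital of France?',  # placeholder generated question
}]
pd.DataFrame(rows).to_excel('../data/my_scores.xlsx', index=False)  # hypothetical path

# sanity-check that a prepared file has the required columns
required = {'passage', 'answer', 'reference', 'prediction'}
data = pd.read_excel('../data/my_scores.xlsx')
missing = required - set(data.columns)
assert not missing, f'missing columns: {missing}'
```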
Evaluate Automatic Metrics
Run python coeff.py to obtain the Pearson, Spearman, and Kendall correlation coefficients between the generated results and the labeled results. For the detailed process, please refer to the README of QGEval.
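For intuition about what this step measures, the sketch below computes the three correlation coefficients with scipy.stats. It is only an illustration, not the actual coeff.py; the result file path is taken from the metrics step above, and the column names QRelScore and label are assumptions.

```python
# Sketch: correlate an automatic metric with human labels.
# Not the actual coeff.py; the column names below are assumptions.
import pandas as pd
from scipy import stats

data = pd.read_excel('./result/metric_result.xlsx')  # save_path from the metrics step
metric_scores = data['QRelScore']  # automatic metric column written by metrics.py
human_scores = data['label']       # hypothetical human-annotation column

pearson, _ = stats.pearsonr(metric_scores, human_scores)
spearman, _ = stats.spearmanr(metric_scores, human_scores)
kendall, _ = stats.kendalltau(metric_scores, human_scores)
print(f'Pearson={pearson:.3f}  Spearman={spearman:.3f}  Kendall={kendall:.3f}')
```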