eval4ner: A package for NER evaluation
This is a Python implementation of MUC evaluation for NER. MUC scores each prediction under four matching modes: strict (exact boundary match and correct entity type), exact (exact boundary match, entity type ignored), partial (partial boundary overlap, entity type ignored), and type (boundary overlap and correct entity type). Refer to the blog post Evaluation Metrics of Named Entity Recognition for a full explanation of the MUC metric.
Installation
pip install eval4ner
Usage
- Evaluate a single prediction
import eval4ner.muc as muc
import pprint

# (entity type, entity text) pairs: gold annotations and model prediction
ground_truth = [('PER', 'John Jones'), ('PER', 'Peter Peters'), ('LOC', 'York')]
prediction = [('PER', 'John Jones and Peter Peters came to York')]
text = 'John Jones and Peter Peters came to York'
one_result = muc.evaluate_one(prediction, ground_truth, text)
pprint.pprint(one_result)
Output
{'exact': {'actual': 1,
           'correct': 0,
           'f1_score': 0,
           'incorrect': 1,
           'missed': 2,
           'partial': 0,
           'possible': 3,
           'precision': 0.0,
           'recall': 0.0,
           'spurius': 0},
 'partial': {'actual': 1,
             'correct': 0,
             'f1_score': 0.25,
             'incorrect': 0,
             'missed': 2,
             'partial': 1,
             'possible': 3,
             'precision': 0.5,
             'recall': 0.16666666666666666,
             'spurius': 0},
 'strict': {'actual': 1,
            'correct': 0,
            'f1_score': 0,
            'incorrect': 1,
            'missed': 2,
            'partial': 0,
            'possible': 3,
            'precision': 0.0,
            'recall': 0.0,
            'spurius': 0},
 'type': {'actual': 1,
          'correct': 1,
          'f1_score': 0.5,
          'incorrect': 0,
          'missed': 2,
          'partial': 0,
          'possible': 3,
          'precision': 1.0,
          'recall': 0.3333333333333333,
          'spurius': 0}}
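In each mode, the scores follow the standard MUC formulas: precision = (correct + 0.5 * partial) / actual, recall = (correct + 0.5 * partial) / possible, and F1 is their harmonic mean, so partial matches count half. A quick arithmetic check against the 'partial' block above (plain Python, independent of the package):

# counts taken from the 'partial' block above
correct, partial, actual, possible = 0, 1, 1, 3
precision = (correct + 0.5 * partial) / actual      # 0.5
recall = (correct + 0.5 * partial) / possible       # 0.16666666666666666
f1 = 2 * precision * recall / (precision + recall)  # 0.25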
- Evaluate all predictions
import eval4ner.muc as muc
# ground truth
ground_truths = [
[('PER', 'John Jones'), ('PER', 'Peter Peters'), ('LOC', 'York')],
[('PER', 'John Jones'), ('PER', 'Peter Peters'), ('LOC', 'York')],
[('PER', 'John Jones'), ('PER', 'Peter Peters'), ('LOC', 'York')]
]
# NER model prediction
predictions = [
[('PER', 'John Jones and Peter Peters came to York')],
[('LOC', 'John Jones'), ('PER', 'Peters'), ('LOC', 'York')],
[('PER', 'John Jones'), ('PER', 'Peter Peters'), ('LOC', 'York')]
]
# input texts
texts = [
'John Jones and Peter Peters came to York',
'John Jones and Peter Peters came to York',
'John Jones and Peter Peters came to York'
]
muc.evaluate_all(predictions, ground_truths, texts, verbose=True)
Output:
NER evaluation scores:
strict mode, Precision=0.4444, Recall=0.4444, F1:0.4444
exact mode, Precision=0.5556, Recall=0.5556, F1:0.5556
partial mode, Precision=0.7778, Recall=0.6667, F1:0.6944
type mode, Precision=0.8889, Recall=0.6667, F1:0.7222
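The corpus-level scores above appear to be macro-averages of the per-sentence scores (they match the averages of the per-sentence results). A minimal sketch that reproduces the strict-mode row using only evaluate_one and the variables from the example above; macro_average is an illustrative helper, not part of the package API:

import eval4ner.muc as muc

def macro_average(predictions, ground_truths, texts, mode='strict'):
    # score each sentence separately, then average precision, recall and F1
    results = [muc.evaluate_one(pred, truth, text)
               for pred, truth, text in zip(predictions, ground_truths, texts)]
    n = len(results)
    return tuple(sum(r[mode][key] for r in results) / n
                 for key in ('precision', 'recall', 'f1_score'))

# reproduces the strict-mode row: (0.4444..., 0.4444..., 0.4444...)
print(macro_average(predictions, ground_truths, texts, mode='strict'))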
Cite
@misc{eval4ner,
  title={eval4ner},
  author={Yekun Chai},
  year={2018},
  howpublished={\url{https://cyk1337.github.io/notes/2018/11/21/NLP/NER/Evaluation-metrics-of-Name-Entity-Recognition-systems/}},
}