A simple Python snippet for NER evaluation
NER_eval
A simple implementation of strict/lenient matching to evaluate NER performance (precision, recall, F1-score) in 60 lines!
The script currently supports only the IOB2 tagging format, with both strict and lenient matching modes.
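For readers new to the format: IOB2 marks the first token of every entity with B-TYPE, subsequent tokens of the same entity with I-TYPE, and tokens outside any entity with O. A minimal sketch (my own illustration, not part of this package) of turning such a tag sequence into entity spans:

```python
def iob2_spans(tags):
    """Extract (entity_type, start, end) spans from IOB2 tags; end is exclusive."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" closes a trailing entity
        inside = start is not None and tag.startswith("I-") and tag[2:] == etype
        if start is not None and not inside:
            spans.append((etype, start, i))
            start = None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

# Two adjacent B-ORG tags are two separate one-token entities:
print(iob2_spans(["B-PER", "I-PER", "O", "B-ORG", "B-ORG", "O"]))
# [('PER', 0, 2), ('ORG', 3, 4), ('ORG', 4, 5)]
```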
Installation
pip install ner_metrics
or
pip install git+https://github.com/PL97/NER_eval.git
Usage
from ner_metrics import classification_report
y_true = ['B-PER', 'I-PER', 'O', 'B-ORG', 'B-ORG', 'O', 'O', 'B-PER', 'I-PER', 'O']
y_pred = ['O', 'B-PER', 'O', 'B-ORG', 'B-ORG', 'I-ORG', 'O', 'B-PER', 'I-PER', 'O']
classification_report(tags_true=y_true, tags_pred=y_pred, mode="lenient") # for lenient match
classification_report(tags_true=y_true, tags_pred=y_pred, mode="strict") # for strict match
Expected output
tag(lenient): PER precision:1.0 recall:1.0 f1-score:1.0
tag(strict): PER precision:0.5 recall:0.5 f1-score:0.5
tag(lenient): ORG precision:1.0 recall:1.0 f1-score:1.0
tag(strict): ORG precision:0.5 recall:0.5 f1-score:0.5
The results are also saved to evaluation.json.
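To make the numbers above concrete, here is an illustrative sketch (my own re-implementation, not the package's internals) of how the two modes can differ. It works on entity spans written as (type, start, end) tuples; the span lists below correspond to the y_true/y_pred sequences from the Usage section. Strict matching credits only exact span-and-type matches, while lenient matching also credits overlapping predicted spans of the same type:

```python
def prf(true_spans, pred_spans, mode="strict"):
    """Per-type precision/recall/F1 over (type, start, end) spans; end exclusive."""
    report = {}
    for etype in {t for t, _, _ in true_spans}:
        t = [s for s in true_spans if s[0] == etype]
        p = [s for s in pred_spans if s[0] == etype]
        if mode == "strict":
            tp_p = sum(s in t for s in p)  # exact span-and-type matches
            tp_r = sum(s in p for s in t)
        else:  # lenient: any overlap with a gold span of the same type
            overlap = lambda a, b: a[1] < b[2] and b[1] < a[2]
            tp_p = sum(any(overlap(s, g) for g in t) for s in p)
            tp_r = sum(any(overlap(g, s) for s in p) for g in t)
        prec = tp_p / len(p) if p else 0.0
        rec = tp_r / len(t) if t else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        report[etype] = (prec, rec, f1)
    return report

# Spans for the y_true / y_pred sequences shown in the Usage section:
true_spans = [("PER", 0, 2), ("ORG", 3, 4), ("ORG", 4, 5), ("PER", 7, 9)]
pred_spans = [("PER", 1, 2), ("ORG", 3, 4), ("ORG", 4, 6), ("PER", 7, 9)]
print(prf(true_spans, pred_spans, mode="strict"))   # PER and ORG: all 0.5
print(prf(true_spans, pred_spans, mode="lenient"))  # PER and ORG: all 1.0
```

For example, the predicted PER span (1, 2) fails the strict test against the gold span (0, 2) but passes the lenient one, because the two spans overlap and share a type; this is exactly why the lenient scores above are higher than the strict ones.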
How to cite this work
If you find this git repo useful, please consider citing it using the snippet below:
@misc{ner_eval,
  author = {Le Peng},
  title = {ner_metrics: A Simple Python Snippets for NER Evaluation},
  howpublished = {\url{https://github.com/PL97/NER_eval}},
  year = {2022}
}
Source Distribution
ner_metrics-0.1.2.tar.gz (3.8 kB)
Built Distribution
Hashes for ner_metrics-0.1.2-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 8e1674eac5aaf66ea932d14159bb1c2125f85a20caba17c4db376b9c76bfc875
MD5 | 7eef736d65f1ebd6169eef9af46a2817
BLAKE2b-256 | 8c61a576c4cf8fade70cc01395343796ba014493d729bbea3a3676f7903d61b2