miNER
A python library for NER (Named Entity Recognition) evaluation
With this library you can evaluate NER performance separately for known entities (those that appear in the training data) and unknown entities (those that do not).
Support
- Tagging schemes (illustrated below)
  - IOB2
  - BIOES
  - BIOUL
- Metrics
  - precision
  - recall
  - F1
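For reference, the sketch below shows how a two-token person name could be tagged under each supported scheme. The tag lists are purely illustrative and are not produced by miNER itself.

# Illustrative only: the two-token person name "山田 太郎" followed by "は",
# tagged under each scheme supported by miNER.
iob2  = ['B-PSN', 'I-PSN', 'O']  # IOB2: B- begins an entity, I- continues it
bioes = ['B-PSN', 'E-PSN', 'O']  # BIOES: E- ends an entity, S- marks a single-token entity
bioul = ['B-PSN', 'L-PSN', 'O']  # BIOUL: L- is the last token, U- marks a single-token entity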
Requirements
- Python 3
- Cython
Installation
pip install mi-ner
Usage
Sample
>>> from miner import Miner
>>> answers = [
...     'B-PSN O O B-LOC O O O O'.split(' '),
...     'B-PSN I-PSN O O B-LOC I-LOC O O O O'.split(' '),
...     'S-PSN O O S-PSN O O B-LOC I-LOC E-LOC O O O O'.split(' ')
... ]
>>> predicts = [
...     'B-PSN O O B-LOC O O O O'.split(' '),
...     'B-PSN B-PSN O O B-LOC I-LOC O O O O'.split(' '),
...     'S-PSN O O O O O B-LOC I-LOC E-LOC O O O O'.split(' ')
... ]
>>> sentences = [
...     '花子 さん は 東京 に 行き まし た'.split(' '),
...     '山田 太郎 君 は 東京 駅 に 向かい まし た'.split(' '),
...     '花子 さん と ボブ くん は 東京 スカイ ツリー に 行き まし た'.split(' ')
... ]
>>> knowns = {'PSN': ['花子'], 'LOC': ['東京']}  # known words (words included in the training data)
>>> m = Miner(answers, predicts, sentences, knowns)
>>> m.default_report(True)
           precision    recall    f1_score    num
PSN            0.500     0.500       0.500      4
LOC            1.000     1.000       1.000      3
{'PSN': {'precision': 0.5, 'recall': 0.5, 'f1_score': 0.5, 'num': 4}, 'LOC': {'precision': 1.0, 'recall': 1.0, 'f1_score': 1.0, 'num': 3}}
>>> m.return_predict_named_entities()
{'known': {'PSN': ['花子'], 'LOC': ['東京']}, 'unknown': {'PSN': ['太郎', '山田'], 'LOC': ['東京駅', '東京スカイツリー']}}
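Continuing with the Miner instance m from the sample above, the known-only and unknown-only reports are requested the same way; the comments only summarize what each call reports, not its exact output.

>>> m.known_only_report(True)    # precision / recall / F1 computed only over known entities (花子, 東京)
>>> m.unknown_only_report(True)  # precision / recall / F1 computed only over unknown entities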
Methods
method | description |
---|---|
default_report(print_) | Return the overall NER evaluation result. If print_=True, also print it. |
known_only_report(print_) | Return the evaluation result restricted to known named entities. |
unknown_only_report(print_) | Return the evaluation result restricted to unknown named entities. |
return_predict_named_entities() | Return the named entities extracted from the predicted labels (predicts). |
return_answer_named_entities() | Return the named entities extracted from the answer labels (answers). |
return_miss_labelings() | Return the sentences that were mislabeled. |
segmentation_score(mode) | Show the percentage of predicted labels that match the answer labels. If mode is 'known' or 'unknown', return the labeling accuracy for known or unknown named entities only. |
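As a rough sketch of the remaining methods, again using the m instance from the sample above (return values are described only in the comments; the 'known'/'unknown' mode strings follow the table):

>>> m.return_answer_named_entities()  # entities taken from the answer labels, split into 'known' and 'unknown'
>>> m.return_miss_labelings()         # sentences whose predicted labels differ from the answer labels
>>> m.segmentation_score('known')     # labeling accuracy restricted to known named entities
>>> m.segmentation_score('unknown')   # labeling accuracy restricted to unknown named entities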
License
MIT