
NER error analysis for column-format (CoNLL-style) datasets, including CoNLL-2003, WNUT-2017, ...


NER Error Analyzer

Quick Start

from nlu.error import *
from nlu.parser import *

cols_format = [{'type': 'predict', 'col_num': 1, 'tagger': 'ner'},
               {'type': 'gold', 'col_num': 2, 'tagger': 'ner'}]

parser = ConllParser('', cols_format)

parser.obtain_statistics(entity_stat=True, source='predict')

parser.obtain_statistics(entity_stat=True, source='gold')

See the Input File Format section below for a description of the expected input.



from nlu.error import *
from nlu.parser import *

First, create a ConllParser instance from the input file path, specifying the column numbers in the cols_format argument:


cols_format = [{'type': 'predict', 'col_num': 1, 'tagger': 'ner'},
               {'type': 'gold', 'col_num': 2, 'tagger': 'ner'}]

parser = ConllParser('', cols_format)

Obtain basic statistics with the obtain_statistics() method:

parser.obtain_statistics(entity_stat=True, source='predict')

parser.obtain_statistics(entity_stat=True, source='gold')

To "annotate" the NER errors in the documents inside ConllParser:


To print out all corrects/errors, use

parser.print_corrects() or parser.print_all_errors()

or use the error_overall_stats() method to get the overall statistics.
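For intuition, here is a library-independent sketch of the kind of summary such statistics convey: token-level correct/error counts over parallel predicted and gold tag sequences. overall_stats is a hypothetical helper written for this example; the real error_overall_stats() output may differ in shape and detail.

```python
# Illustrative only -- NOT the library's implementation.
def overall_stats(pred_tags, gold_tags):
    """Count token-level agreements between predicted and gold tag sequences."""
    assert len(pred_tags) == len(gold_tags)
    correct = sum(p == g for p, g in zip(pred_tags, gold_tags))
    return {'correct': correct,
            'errors': len(gold_tags) - correct,
            'accuracy': correct / len(gold_tags)}

# Tags for the three-token example from the Input File Format section:
print(overall_stats(['I-ORG', 'I-ORG', 'I-ORG'], ['O', 'O', 'I-ORG']))
# → {'correct': 1, 'errors': 2, 'accuracy': 0.3333333333333333}
```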

Input File Format

The input file format of ConllParser follows the column format used by CoNLL-2003.

For example,

Natural I-ORG O
Language I-ORG O
Laboratory I-ORG I-ORG

where the first column is the text and the second and third columns are the predicted and ground-truth tags respectively. The column order can be specified with the cols_format keyword when instantiating ConllParser:

cols_format = [{'type': 'predict', 'col_num': 1, 'tagger': 'ner'},
               {'type': 'gold', 'col_num': 2, 'tagger': 'ner'}]  # col_num starts from 0

I recommend using the shell command awk '{print $x}' filepath to extract the x-th column, e.g. awk '{print $4}' filepath to obtain the 4th column.

Then use paste file1.txt file2.txt to concatenate the columns of two files.

For example,

awk '{print $4}' eng.train > ner_tags_file  # $num starts from 1
paste ner_pred_tags_file ner_tags_file
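For illustration, the mapping from this column format to the cols_format spec can be sketched in a few lines of Python. This is not the library's internal parser; read_conll_columns is a hypothetical helper written for this example.

```python
# Illustrative sketch only -- not part of the nlu package.
def read_conll_columns(lines, cols_format):
    """Turn whitespace-separated CoNLL-style lines into per-token dicts."""
    rows = []
    for line in lines:
        cols = line.split()
        if not cols:          # blank line = sentence boundary
            continue
        row = {'text': cols[0]}               # column 0 is always the text
        for spec in cols_format:              # remaining columns per spec
            row[spec['type']] = cols[spec['col_num']]
        rows.append(row)
    return rows

sample = """Natural I-ORG O
Language I-ORG O
Laboratory I-ORG I-ORG"""

cols_format = [{'type': 'predict', 'col_num': 1, 'tagger': 'ner'},
               {'type': 'gold', 'col_num': 2, 'tagger': 'ner'}]

for row in read_conll_columns(sample.splitlines(), cols_format):
    print(row)
# first line printed: {'text': 'Natural', 'predict': 'I-ORG', 'gold': 'O'}
```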

Types of Span Errors

| Types | Number of Mentions (Predicted and Gold) | Subtypes | Examples | Notes |
| --- | --- | --- | --- | --- |
| Missing Mention (False Negative) | 1 | TYPES → O | [] → None # todo | |
| Extra Mention (False Positive) | 1 | O → TYPES | None → [...] # todo | |
| Mention with Wrong Type (Type Errors) | ≥ 2 | TYPES → TYPES − self ({(p, g) \| p ∈ T, g ∈ T − p}) | [PER ...] → [ORG ...] # todo | but the spans are the same |
| Missing Tokens | 2 | L/R/LR Diminished | [MISC 1991 World Cup] → [MISC 1991] or [MISC World Cup] | also possible with type errors |
| Extra Tokens | 2 | L/R/LR Expanded | [...] → [......] # todo | also possible with type errors |
| Missing + Extra Tokens | 2 | L/R Crossed | ..[...].. → .[..]... | also possible with type errors |
| Conflated Mention | ≥ 3 | | [][][] → [] # todo | also possible with type errors |
| Divided Mention | ≥ 3 | | [MISC 1991 World Cup] → [MISC 1991] [MISC World Cup]; [PER Barack Hussein Obama] → [PER Barack] [PER Hussein] [PER Obama] | also possible with type errors |
| Complicated Case | ≥ 3 | | [][][] → [][] # todo | also possible with type errors |
| Ex: Mention with Wrong Segmentation (same overall range but wrong segmentation) | ≥ 4 | | [...][......][.] → [......][.....] | also possible with type errors |
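The pairwise cases in the table (the ≥ 3 cases involve more than two spans and are left out here) can be encoded in a few lines. This is an illustrative sketch, not the library's classifier; classify_pair and the span encoding (start, end, type) with an exclusive end are assumptions made for this example.

```python
# Illustrative sketch only -- not the library's error classifier.
def classify_pair(pred, gold):
    """Classify one predicted span against one gold span; None = no mention."""
    if gold is not None and pred is None:
        return 'Missing Mention'          # false negative
    if pred is not None and gold is None:
        return 'Extra Mention'            # false positive
    ps, pe, ptype = pred
    gs, ge, gtype = gold
    if (ps, pe) == (gs, ge):              # identical span boundaries
        return 'Correct' if ptype == gtype else 'Mention with Wrong Type'
    if gs <= ps and pe <= ge:             # prediction lies inside the gold span
        return 'Missing Tokens (Diminished)'
    if ps <= gs and ge <= pe:             # prediction contains the gold span
        return 'Extra Tokens (Expanded)'
    if ps < ge and gs < pe:               # partial overlap on one side
        return 'Missing + Extra Tokens (Crossed)'
    return 'No Overlap'

# "[1991 World Cup]" predicted as "[1991]": the span shrank.
print(classify_pair((0, 1, 'MISC'), (0, 3, 'MISC')))  # → Missing Tokens (Diminished)
```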
