Testing framework for sequence labeling

Project description

seqeval

seqeval is a Python framework for sequence labeling evaluation. seqeval can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, semantic role labeling and so on.

seqeval is well tested against the Perl script conlleval, which can be used to measure the performance of a system that has processed the CoNLL-2000 shared task data.

Supported features

seqeval supports the following formats (a short illustration follows the list):

  • IOB1
  • IOB2
  • IOE1
  • IOE2
  • IOBES
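
These schemes differ in how the beginning and end of a chunk are marked; in IOB1 and IOE1, for instance, the B- and E- prefixes are only used to separate adjacent chunks of the same type. As a rough illustration (the sequence below is made up for this description), a four-token sequence with a two-token PER entity and a single-token LOC entity would be tagged as follows:

>>> # IOB2: every entity starts with B-
>>> iob2  = ['B-PER', 'I-PER', 'O', 'B-LOC']
>>> # IOBES: the last token of an entity is marked with E-, single-token entities with S-
>>> iobes = ['B-PER', 'E-PER', 'O', 'S-LOC']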

It also supports the following metrics:

metrics                                           description
accuracy_score(y_true, y_pred)                    Compute the accuracy.
precision_score(y_true, y_pred)                   Compute the precision.
recall_score(y_true, y_pred)                      Compute the recall.
f1_score(y_true, y_pred)                          Compute the F1 score, also known as balanced F-score or F-measure.
classification_report(y_true, y_pred, digits=2)   Build a text report showing the main classification metrics. digits is the number of digits for formatting output floating point values; the default is 2.

Usage

Behold, the power of seqeval:

>>> from seqeval.metrics import accuracy_score
>>> from seqeval.metrics import classification_report
>>> from seqeval.metrics import f1_score
>>> 
>>> y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>> y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
>>>
>>> f1_score(y_true, y_pred)
0.50
>>> accuracy_score(y_true, y_pred)
0.80
>>> classification_report(y_true, y_pred)
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2
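
In the default mode an entity counts as correct only when both its type and its exact span match the gold annotation. Both the gold data and the prediction above contain two entities, but the predicted MISC span starts one token early, so only the PER entity is scored as correct: precision and recall are each 1/2, hence the F1 of 0.50. accuracy_score, by contrast, is computed per token, and 8 of the 10 tags match, giving 0.80. precision_score and recall_score follow the same call pattern; for the example above both should equal the micro average shown in the report:

>>> from seqeval.metrics import precision_score
>>> from seqeval.metrics import recall_score
>>>
>>> precision_score(y_true, y_pred)
0.5
>>> recall_score(y_true, y_pred)
0.5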

If you want to explicitly specify the evaluation scheme, pass mode='strict' together with a scheme:

>>> from seqeval.scheme import IOB2
>>> classification_report(y_true, y_pred, mode='strict', scheme=IOB2)
              precision    recall  f1-score   support

        MISC       0.00      0.00      0.00         1
         PER       1.00      1.00      1.00         1

   micro avg       0.50      0.50      0.50         2
   macro avg       0.50      0.50      0.50         2
weighted avg       0.50      0.50      0.50         2

Note: The behavior of strict mode differs from that of the default mode, which is designed to simulate conlleval.
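
The difference shows up, for example, when a predicted entity starts with an I- tag: the default, conlleval-style mode still treats it as the beginning of a chunk, while strict mode with the IOB2 scheme only accepts entities that begin with B-. A minimal sketch of that case (the sentences are illustrative and the expected results are given as comments, not taken from this page):

>>> from seqeval.metrics import f1_score
>>> from seqeval.scheme import IOB2
>>>
>>> y_true = [['B-PER', 'I-PER', 'O']]
>>> y_pred = [['I-PER', 'I-PER', 'O']]
>>> f1_score(y_true, y_pred)  # default mode: the leading I-PER still starts a chunk, so the entity matches (1.0)
>>> f1_score(y_true, y_pred, mode='strict', scheme=IOB2)  # strict IOB2: no entity may begin with I-, so nothing matches (0.0)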

Installation

To install seqeval, simply run:

$ pip install seqeval

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

seqeval-1.0.0.tar.gz (36.3 kB)


File details

Details for the file seqeval-1.0.0.tar.gz.

File metadata

  • Download URL: seqeval-1.0.0.tar.gz
  • Upload date:
  • Size: 36.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.0 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.6.7

File hashes

Hashes for seqeval-1.0.0.tar.gz
Algorithm Hash digest
SHA256 987d065ebaaca050f26089b8f20e8254f4b268ad2b90a7b7d4c0744abd644078
MD5 d9fb5b2bee1692ebd4df46f4300ff802
BLAKE2b-256 7563180f7556bfd9b0f89bc853ee46d01a3a611d74798230a930afac6873d15c

See more details on using hashes here.
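
If you want to verify a downloaded file against the digests above, one option is Python's hashlib (a minimal sketch; it assumes the source distribution listed above is in the current directory and should print True when the file is intact):

>>> import hashlib
>>> sha256 = hashlib.sha256(open('seqeval-1.0.0.tar.gz', 'rb').read()).hexdigest()
>>> sha256 == '987d065ebaaca050f26089b8f20e8254f4b268ad2b90a7b7d4c0744abd644078'
True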
