Alternative scorer for the CoNLL-2011/2012 shared tasks on coreference resolution.

Project description

Scorch¹

This is an alternative implementation of the coreference scorer for the CoNLL-2011/2012 shared tasks on coreference resolution.

It aims to be more straightforward than the reference implementation, while maintaining as much compatibility with it as possible.

The implementations of the various scores follow as closely as possible the formulas used by Pradhan et al. (2014), with the edge cases for BLANC taken from Recasens and Hovy (2011).


1. Scorer for coreference chains.

Usage

scorch gold.json sys.json out.txt
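The arguments are, in order, the gold file, the system file and the file to which the scores will be written. As described in the Multiple documents section below, the two inputs may also be directories holding one file per document:

scorch gold/ sys/ out.txt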

Install

From the cheeseshop

python3 -m pip install --user scorch

Or directly from git

python3 -m pip install --user git+https://github.com/LoicGrobol/scorch.git

Formats

Single document

The input files should be JSON files with a "type" key at the top level:

  • If "type" is "graph", then the top level should also have
    • A "mentions" key containing a list of all mention identifiers
    • A "links" key containing a list of pairs of coreferring mention identifiers
  • If "type" is "clusters", then the top level should have a "clusters" key containing a mapping from cluster ids to cluster contents (as lists of mention identifiers).

Of course, the system and gold files should use the same set of mention identifiers for the mentions they have in common.
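For concreteness, here is a minimal sketch of both encodings. The mention identifiers (m1 to m4) and cluster ids (e1, e2) are made up for the example; both files describe the same partition, one entity {m1, m2, m3} plus the singleton {m4}.

{
    "type": "graph",
    "mentions": ["m1", "m2", "m3", "m4"],
    "links": [["m1", "m2"], ["m2", "m3"]]
}

The same partition in the "clusters" encoding:

{
    "type": "clusters",
    "clusters": {
        "e1": ["m1", "m2", "m3"],
        "e2": ["m4"]
    }
}

Note that in the graph encoding the singleton m4 still appears in "mentions" even though it occurs in no link, and that the links presumably only need to connect each cluster, coreference being treated as transitive.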

For convenience, the conll.py script converts CoNLL-2012 files to this format.

Multiple documents

If the inputs are directories, each file in the sys directory is expected to have exactly one counterpart with the same base name (excluding extension) in the gold directory. In that case, the output scores will be the micro-averages of the individual file scores (see the sketch below), i.e. their arithmetic means weighted by the relative numbers of

  • Gold mentions for Recall
  • System mentions for Precision
  • The sum of the previous two for F₁

This is different from the reference interpretation, where

  • MUC weighting ignores mentions in singleton entities
    • This should not make any difference for the CoNLL-2012 dataset, since singleton entities are not annotated.
    • For datasets with singletons, the shortcomings of MUC are well known, so this score shouldn't matter much.
  • BLANC is calculated by micro-averaging coreference and non-coreference scores separately, using the numbers of links as weights instead of the numbers of mentions.
    • This is roughly equivalent to weighting the coreference scores of each document by its number of non-singleton clusters and its non-coreference scores by the square of its number of mentions, since a document with m mentions has m(m−1)/2 mention pairs, almost all of them non-coreferent. This gives disproportionate importance to large documents, which is not desirable in heterogeneous corpora.
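To make scorch's own weighting concrete, here is a minimal, self-contained sketch of the micro-average described above. DocScore and micro_average are hypothetical names for illustration, not the package's actual API.

from dataclasses import dataclass

@dataclass
class DocScore:
    recall: float     # per-document recall
    precision: float  # per-document precision
    n_gold: int       # number of gold mentions in the document
    n_sys: int        # number of system mentions in the document

def micro_average(docs: list[DocScore]) -> tuple[float, float, float]:
    """Pool per-document scores: recall is weighted by gold mention
    counts, precision by system mention counts and F1 by their sum."""
    total_gold = sum(d.n_gold for d in docs)
    total_sys = sum(d.n_sys for d in docs)
    recall = sum(d.recall * d.n_gold for d in docs) / total_gold
    precision = sum(d.precision * d.n_sys for d in docs) / total_sys

    def doc_f1(d: DocScore) -> float:
        s = d.precision + d.recall
        return 2 * d.precision * d.recall / s if s else 0.0

    f1 = sum(doc_f1(d) * (d.n_gold + d.n_sys) for d in docs) / (total_gold + total_sys)
    return recall, precision, f1

For instance, a document with ten times as many gold mentions as another contributes ten times as much to the pooled recall.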

The CoNLL average score is the arithmetic mean of the global MUC, B³ and CEAFₑ F₁ scores.
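For instance, with hypothetical global F₁ scores of 0.60 for MUC, 0.55 for B³ and 0.50 for CEAFₑ, the CoNLL score would be (0.60 + 0.55 + 0.50) / 3 ≈ 0.55.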

Sources

  • Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng and Michael Strube. 2014. “Scoring coreference partitions of predicted mentions: A reference implementation.” In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.
  • Marta Recasens and Eduard Hovy. 2011. “BLANC: Implementing the Rand index for coreference evaluation.” Natural Language Engineering, 17(4).

License

Unless otherwise specified (see below), the following license (the so-called “MIT License”) applies to all the files in this repository. See also LICENSE.md.

Copyright 2018 Loïc Grobol <loic.grobol@gmail.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute,
sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT
NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT
OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

License exceptions

Project details


Download files

Download the file for your platform.

Source Distribution

scorch-0.2.0.tar.gz (17.7 kB)

Uploaded Source

Built Distribution

scorch-0.2.0-py3-none-any.whl (16.2 kB)

Uploaded Python 3

File details

Details for the file scorch-0.2.0.tar.gz.

File metadata

  • Download URL: scorch-0.2.0.tar.gz
  • Upload date:
  • Size: 17.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.24.0 setuptools/50.3.2 requests-toolbelt/0.9.1 tqdm/4.51.0 CPython/3.8.6

File hashes

Hashes for scorch-0.2.0.tar.gz
Algorithm Hash digest
SHA256 9846e4a78ada585bc0458b0dc7fc7b3c156f7fa407e1d0625151efe8a38b8031
MD5 8d65a9d2175cc2162271c96a1205bb30
BLAKE2b-256 1cd3fa59d58648a0217db0e0d6655a94150d05446210ac758facef4cef2e3daa


File details

Details for the file scorch-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: scorch-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 16.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.24.0 setuptools/50.3.2 requests-toolbelt/0.9.1 tqdm/4.51.0 CPython/3.8.6

File hashes

Hashes for scorch-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 3d450229a5e8a2219a27ed30e19c922a5bb125f1714ca2887469bb46e0715800
MD5 dcabca620361e20a6cb98c6ffe18a663
BLAKE2b-256 d5b0a651d91d0278a373fd6e7c5ee1300025a2ea4214fb3c3830352912f309ab

