
# Scorch¹

This is an alternative implementation of the coreference scorer for the CoNLL-2011/2012 shared tasks on coreference resolution.

It aims to be more straightforward than the reference implementation, while maintaining as much compatibility with it as possible.

The implementations of the various scores follow the formulas used by Pradhan et al. (2014) as closely as possible, with the edge cases for BLANC taken from Recasens and Hovy (2011).

¹ Scorer for coreference chains.

## Use

Clone the repository:

```console
git clone https://github.com/LoicGrobol/scorch.git
```


Install with

```console
python3 -m pip install .
```


Then just use `scorch`, e.g.

```console
scorch gold.json sys.json out.txt
```


Alternatively, just running `scorch.py` without installing should work, as long as you have all the dependencies installed:

```console
python3 scorch.py -h
```


## Formats

### Single document

The input files should be JSON files with a `"type"` key at the top level:

- If `"type"` is `"graph"`, the top level should also have:
  - A `"mentions"` key containing a list of all mention identifiers
  - A `"links"` key containing a list of pairs of coreferring mention identifiers
- If `"type"` is `"clusters"`, the top level should have a `"clusters"` key containing a mapping from cluster ids to cluster contents (as lists of mention identifiers).

Of course the system and gold files should use the same set of mention identifiers…

### Multiple documents

If the inputs are directories, files with the same base names (excluding the extension) as those present in the gold directory are expected to be present in the sys directory, with exactly one sys file for each gold file. In that case, the output scores will be the micro-averages of the individual files' scores, i.e. their arithmetic means weighted by the relative numbers of:

- Gold mentions for Recall
- System mentions for Precision
- The sum of the previous two for F₁
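
As a minimal sketch of that weighting scheme (the function name and the shape of the per-document scores are assumptions for illustration, not scorch's actual internals):

```python
def micro_average(doc_scores):
    """doc_scores: list of (recall, precision, n_gold, n_sys) tuples, one per document."""
    total_gold = sum(g for _, _, g, _ in doc_scores)
    total_sys = sum(s for _, _, _, s in doc_scores)
    # Recall is weighted by each document's share of gold mentions,
    # precision by its share of system mentions.
    recall = sum(r * g for r, _, g, _ in doc_scores) / total_gold
    precision = sum(p * s for _, p, _, s in doc_scores) / total_sys
    # F₁ is the mean of the per-document F₁ scores, weighted by the sum
    # of both mention counts.
    f1 = sum(
        (2 * r * p / (r + p) if r + p > 0 else 0.0) * (g + s)
        for r, p, g, s in doc_scores
    ) / (total_gold + total_sys)
    return recall, precision, f1


# Two documents: (recall, precision, #gold mentions, #sys mentions)
print(micro_average([(0.5, 1.0, 10, 5), (1.0, 0.8, 20, 25)]))
```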

This is different from the reference implementation's interpretation, where:

- MUC weighting ignores mentions in singleton entities.
  - This should not make any difference for the CoNLL-2012 dataset, since singleton entities are not annotated.
  - For datasets with singletons, the shortcomings of MUC are well known, so this score shouldn't matter much.
- BLANC is calculated by micro-averaging the coreference and non-coreference scores separately, using the numbers of links as weights instead of the numbers of mentions.

The CoNLL average score is the arithmetic mean of the global MUC, B³ and CEAFₑ F₁ scores.
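
Continuing with illustrative names (assumptions, not scorch's API), that is simply:

```python
def conll_average(muc_f1: float, b3_f1: float, ceafe_f1: float) -> float:
    # Arithmetic mean of the three global F₁ scores.
    return (muc_f1 + b3_f1 + ceafe_f1) / 3
```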

## Sources

Unless otherwise specified (see below), the following licence (the so-called “MIT License”) applies to all the files in this repository. See also LICENSE.md.

Copyright 2018 Loïc Grobol <loic.grobol@gmail.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute,
sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT
NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT
OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

