
Visual and statistical assessment of annotator agreements


disagree - Assessing Annotator Disagreements in Python

This library aims to address annotation disagreements in manually labelled data.

I started it as a project to develop some understanding of Python packaging and workflow. (This is the primary reason for the messy release history and commit logs, for which I apologise.) But I hope this will be useful for a wider audience as well.

Install

To install, set up a virtualenv and do:

$ python3 -m pip install --index-url https://pypi.org/simple/ disagree

or

$ pip3 install disagree

To update to the latest version do:

$ pip3 install --upgrade disagree

Background

Whilst working in NLP, I have repeatedly worked with manually labelled datasets, and have thus had to evaluate the quality of the agreements between the annotators. In my (limited) experience of doing this, I have encountered a number of approaches that have been helpful. In this library, I aim to group those together for people to use.

Please suggest any additions/functionalities, and I will try my best to add them.

Summary of features

  • Visualisations

    • Ability to visualise bidisagreements between annotators
    • Ability to visualise agreement statistics
    • Retrieve summaries of numbers of disagreements and their extent
  • Annotation statistics:

    • Joint probability
    • Cohen's kappa
    • Fleiss' kappa
    • Pearson, Spearman, Kendall correlations
    • Krippendorff's alpha

Python examples

Worked examples are provided in the Jupyter notebooks directory.

Documentation

disagree.BiDisagreements(df, labels)

The BiDisagreements class is primarily there for you to visualise disagreements in the form of a matrix, but it also has some other small functionalities.

The parameter requirements here are quite strict. (See the usage example in the notebooks or at the top of the source code.)

  • df: Pandas DataFrame containing annotator labels

    • Rows: Instances of the labelled data
    • Columns: Annotators
    • Element [i, j] is annotator j's label for data instance i.
    • Entries must be integers, floats, or pandas nan values
    • The lowest label must be 0. E.g. if your labels are 1-5, convert them to 0-4.
  • labels: list containing possible labels

    • Must be from 0 to the maximum label. If your labels are words then please convert them to corresponding integers.
    • Example: If the labels are [male, female, trans], you must convert to [0, 1, 2]
  • Methods:

    • agreements_summary()
      • This will print out statistics on the number of instances with no disagreement, the number of bidisagreements (two distinct labels), the number of tridisagreements (three distinct labels), and the number of worse cases (four or more distinct labels).
    • agreements_matrix()
      • This will return a matrix of bidisagreements. Do with this what you will! The intention is that you use something like matplotlib to visualise them properly.
      • Element $(i, j)$ is the number of times there is a bidisagreement involving label $i$ and label $j$.
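To make the matrix description concrete, here is a minimal, self-contained sketch of how such a bidisagreement count can be built (an illustrative re-implementation, not the library's own code; the example labels and the use of None for missing annotations are assumptions for the demo):

```python
# Sketch: element (i, j) counts instances where the annotators used exactly
# the two distinct labels i and j (a "bidisagreement").
def bidisagreement_matrix(rows, n_labels):
    matrix = [[0] * n_labels for _ in range(n_labels)]
    for row in rows:
        labels = set(label for label in row if label is not None)
        if len(labels) == 2:  # exactly two distinct labels used
            i, j = sorted(labels)
            matrix[i][j] += 1
            matrix[j][i] += 1
    return matrix

# Three annotators, four instances; None marks a missing annotation.
annotations = [
    [0, 0, 0],     # full agreement
    [0, 1, 0],     # bidisagreement between labels 0 and 1
    [1, 2, None],  # bidisagreement between labels 1 and 2
    [0, 1, 2],     # tridisagreement: three distinct labels, not counted here
]
print(bidisagreement_matrix(annotations, 3))
# [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
```

The matrix is symmetric, so plotting it with something like matplotlib's imshow gives a quick picture of which label pairs are confused most often.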

disagree.metrics.Metrics(df, labels)

This module gives you access to a number of metrics typically used for annotation disagreement statistics.

See above for df and labels args.

  • Methods:
    • joint_probability(ann1, ann2)

      • Parameter: ann1, string, name of one of the annotators from the DataFrame columns
      • Parameter: ann2, string, name of one of the annotators from the DataFrame columns
      • This gives the joint probability of agreement between ann1 and ann2. You should probably not use this measure for academic purposes, but it is included for completeness.
    • cohens_kappa(ann1, ann2):

      • Parameter: ann1, string, name of one of the annotators from the DataFrame columns
      • Parameter: ann2, string, name of one of the annotators from the DataFrame columns
    • fleiss_kappa()

      • No args
    • correlation(ann1, ann2, measure="pearson")

      • Parameter: ann1, string, name of one of the annotators from the DataFrame columns
      • Parameter: ann2, string, name of one of the annotators from the DataFrame columns
      • Parameter: measure, string, optional
        • Options: (pearson (default), kendall, spearman)
      • This gives you either the Pearson, Kendall, or Spearman correlation statistic between two annotators
    • metric_matrix(func)

      • Returns a matrix of size (num_annotators x num_annotators). Element $(i, j)$ is the statistic value for agreements between annotator $i$ and annotator $j$.
      • Parameter: func, name of function for the metric you want to visualise.
        • Options: (metrics.Metrics.cohens_kappa, metrics.Metrics.joint_probability)
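As a reference for what the simpler statistics compute, here are illustrative, self-contained implementations of joint probability and Cohen's kappa (standard textbook formulas, not the library's source; the example annotator lists are made up):

```python
from collections import Counter

def joint_probability(ann1, ann2):
    # Fraction of instances on which the two annotators give the same label.
    return sum(a == b for a, b in zip(ann1, ann2)) / len(ann1)

def cohens_kappa(ann1, ann2):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    # and p_e is the agreement expected by chance from each annotator's
    # label marginals.
    n = len(ann1)
    p_o = joint_probability(ann1, ann2)
    c1, c2 = Counter(ann1), Counter(ann2)
    p_e = sum(c1[label] * c2[label] for label in c1) / n ** 2
    return (p_o - p_e) / (1 - p_e)

a = [0, 1, 1, 0, 2, 1]
b = [0, 1, 0, 0, 2, 1]
print(joint_probability(a, b))  # 5/6 ~ 0.833
print(cohens_kappa(a, b))       # 17/23 ~ 0.739
```

Kappa corrects the raw agreement (joint probability) for chance, which is why it is the more defensible of the two in write-ups.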

disagree.metrics.Krippendorff(df, labels)

See above for df and labels args.

  • Methods
    • alpha(data_type="nominal")
      • In this library, Krippendorff's alpha can handle four data types, specified with the data_type argument:
        • nominal (default)
        • ordinal
        • interval
        • ratio
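For intuition, here is a sketch of the nominal case via the standard coincidence-matrix formulation, alpha = 1 - D_o/D_e (an illustrative implementation of the textbook formula, not the library's code; the example data is made up):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list of labels per instance; None marks a missing rating."""
    coincidence = Counter()
    for ratings in units:
        vals = [v for v in ratings if v is not None]
        m = len(vals)
        if m < 2:
            continue  # units with fewer than two ratings are unpairable
        # Each ordered pair within a unit contributes 1/(m - 1).
        for a, b in permutations(vals, 2):
            coincidence[(a, b)] += 1 / (m - 1)
    n_c = Counter()  # marginal totals per label
    for (a, _), w in coincidence.items():
        n_c[a] += w
    n = sum(n_c.values())
    d_o = sum(w for (a, b), w in coincidence.items() if a != b)
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n - 1)
    return 1 - d_o / d_e

units = [[0, 0], [1, 1], [0, 1], [0, 0]]
print(krippendorff_alpha_nominal(units))  # 8/15 ~ 0.533
```

Unlike Cohen's or Fleiss' kappa, alpha handles any number of annotators and missing ratings naturally, which is why it is often the default recommendation for messy annotation tables.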
