
Qualle

A framework to predict the quality of a multi-label classification result.

This is an implementation of the Qualle framework as proposed in the paper [1] and its accompanying source code.

The framework allows you to train a model that predicts the quality of the result of applying a multi-label classification (MLC) method to a document. In this implementation, only the recall is predicted per document, but in principle any document-level quality estimate (such as precision) can be implemented analogously.

Qualle provides a command-line interface to train and evaluate models. In addition, a REST web service for predicting the recall of an MLC result is provided.

Requirements

Python >= 3.9 is required.

Installation

Choose one of these installation methods:

With pip

Qualle is available on PyPI. You can install Qualle using pip:

pip install qualle

This will install a command-line tool called qualle. You can call qualle -h to see the help message, which displays the available modes and options.

Note that it is generally recommended to use a virtual environment to avoid conflicts with packages installed by the system package manager.
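
For example, using Python's built-in venv module:

python -m venv .venv
source .venv/bin/activate
pip install qualle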

From source

You can also check out the repository and install the package from source. You need Poetry for this:

# call inside the project directory
poetry install --without ci

Docker

You can also use a Docker image from the GitHub Container Registry:

docker pull ghcr.io/zbw/qualle

Alternatively, you can use the Dockerfile included in this project to build a Docker image yourself. E.g.:

docker build -t qualle .

By default, a container built from this image launches a REST interface listening on 0.0.0.0:8000.

You need to pass the model file (see the section REST interface below) to the Docker container via a bind mount or volume. In addition, you need to specify the location of the model file with an environment variable named MODEL_FILE:

docker run --rm -it --env MODEL_FILE=/model -v /path/to/model:/model -p 8000:8000 ghcr.io/zbw/qualle

Gunicorn is used as the HTTP server. You can use the environment variable GUNICORN_CMD_ARGS to customize Gunicorn settings, such as the number of worker processes:

docker run --rm -it --env MODEL_FILE=/model --env GUNICORN_CMD_ARGS="--workers 4" -v /path/to/model:/model -p 8000:8000 ghcr.io/zbw/qualle

You can also use the Docker image to train or evaluate models by calling the Qualle command-line tool:

docker run --rm -it -v \
 /path/to/train_data_file:/train_data_file -v /path/to/model_dir:/model_dir ghcr.io/zbw/qualle \
 qualle train /train_data_file /model_dir/model

The Qualle command-line tool is not available in releases 0.1.0 and 0.1.1. For these releases, you need to call the Python module qualle.main instead:

docker run --rm -it -v \
 /path/to/train_data_file:/train_data_file -v /path/to/model_dir:/model_dir ghcr.io/zbw/qualle:0.1.1 \
 python -m qualle.main train /train_data_file /model_dir/model

Usage

Input data

In order to train a model, evaluate a model, or predict the quality of an MLC result, you have to provide data.

This can be a tab-separated values (TSV) file in the following format (the tab character is denoted by \t):

document-content\tpredicted_labels_with_scores\ttrue_labels

where

  • document-content is a string describing the content of the document (more precisely: the string on which the MLC method is trained), e.g. the title
  • predicted_labels_with_scores is a comma-separated list of pairs predicted_label:confidence-score (this is basically the output of the MLC method)
  • true_labels is a comma-separated list of true labels (ground truth)

Note that you can omit the true_labels column if you only want to predict the quality of the MLC result.

For example, a row in the data file could look like this:

Optimal investment policy of the regulated firm\tConcept0:0.5,Concept1:1\tConcept0,Concept3
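
To make the format concrete, here is a minimal Python sketch that parses such a row into its three fields. It is illustrative only and not part of the Qualle API:

# Illustrative sketch, not part of the Qualle API: parse one row of the
# TSV input format into its three fields.
def parse_row(row: str):
    content, predicted, true_labels = row.rstrip("\n").split("\t")
    # predicted_labels_with_scores: comma-separated label:score pairs
    predicted_labels = []
    for pair in predicted.split(","):
        label, score = pair.rsplit(":", 1)
        predicted_labels.append((label, float(score)))
    # true_labels: comma-separated ground-truth labels
    return content, predicted_labels, true_labels.split(",")

row = "Optimal investment policy of the regulated firm\tConcept0:0.5,Concept1:1\tConcept0,Concept3"
print(parse_row(row))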

For those who use an MLC method via the Annif toolkit for automated subject indexing: you can alternatively provide a full-text document corpus combined with the result of the Annif index method (tested with Annif version 0.59) applied to the corpus. This is a folder with three files per document (a sketch of this layout follows the list below):

  • doc.annif : result of Annif index method
  • doc.tsv : ground truth
  • doc.txt : document content

As above, you may omit the doc.tsv file if you just want to predict the quality of the MLC result.
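
As an illustration (again, not part of the Qualle API), a corpus folder in this layout could be read like this; the path is a placeholder:

# Illustrative sketch: iterate over an Annif-style corpus folder in which
# each document contributes doc.txt, doc.annif and (optionally) doc.tsv.
from pathlib import Path

corpus = Path("/path/to/corpus")
for txt in sorted(corpus.glob("*.txt")):
    content = txt.read_text()                          # document content
    predicted = txt.with_suffix(".annif").read_text()  # Annif index result
    tsv = txt.with_suffix(".tsv")                      # ground truth, optional
    true_labels = tsv.read_text() if tsv.exists() else None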

Train

To train a model, use the train mode, e.g.:

qualle train /path/to/train_data_file /path/to/output/model

It is also possible to use label calibration (comparison of predicted vs. actual labels) with the subthesauri of a thesaurus (such as the STW) as categories (please read the paper for more explanations). Consult the help (see above) for the required options.

Evaluate

To evaluate a trained model, you must provide test data and the path to the model. Metrics such as the explained variation are printed out, describing the quality of the recall prediction (please consult the paper for more information).
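
For example (the exact mode name is listed by qualle -h):

qualle eval /path/to/test_data_file /path/to/model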

REST interface

To perform prediction on an MLC result, a REST interface can be started. Uvicorn is used as the HTTP server. You can also use any ASGI server implementation and create the ASGI app directly with the method qualle.interface.rest.create_app. You need to provide the environment variable MODEL_FILE with the path to the model (see qualle.interface.config.RESTSettings).
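
A minimal sketch of the latter, assuming create_app takes no arguments and picks up MODEL_FILE from the environment via RESTSettings as described above:

# Minimal sketch: create the Qualle ASGI app and serve it with uvicorn.
# Assumes create_app reads its configuration (including MODEL_FILE) from
# the environment via qualle.interface.config.RESTSettings.
import os

import uvicorn

from qualle.interface.rest import create_app

os.environ["MODEL_FILE"] = "/path/to/model"

uvicorn.run(create_app(), host="0.0.0.0", port=8000)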

The REST endpoint expects an HTTP POST with the result of an MLC for a list of documents as the body. The format is JSON as specified in qualle/openapi.json. You can also use the Swagger UI, accessible at http://address_of_server/docs, to experiment with the interface.
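
For illustration, such a request could be sent from Python as follows. Note that the endpoint path and the field names in the payload below are assumptions for demonstration only; the authoritative schema is the one in qualle/openapi.json:

# Hypothetical payload and endpoint path -- consult qualle/openapi.json
# for the authoritative schema.
import requests

payload = {
    "documents": [
        {
            "content": "Optimal investment policy of the regulated firm",
            "predicted_labels": ["Concept0", "Concept1"],
            "scores": [0.5, 1.0],
        }
    ]
}

response = requests.post("http://localhost:8000/predict", json=payload)
print(response.json())  # predicted recall per document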

Contribute

Contributions via pull requests are welcome. Please create an issue beforehand to explain and discuss the reasons for the respective contribution.

Qualle code should follow the Black style. The Black tool is included as a development dependency; you can run black . in the project root to autoformat the code.

References

[1] Toepfer, Martin, and Christin Seifert. "Content-based quality estimation for automatic subject indexing of short texts under precision and recall constraints." International Conference on Theory and Practice of Digital Libraries. Springer, Cham, 2018. DOI: 10.1007/978-3-030-00066-0_1

Context information

This code was created as part of the subject indexing automation effort at ZBW - Leibniz Information Centre for Economics. See our homepage for more information, publications, and contact details.
