QuaPy: a framework for Quantification in Python

QuaPy is an open source framework for quantification (a.k.a. supervised prevalence estimation, or learning to quantify) written in Python.

QuaPy is based on the concept of "data sample", and provides implementations of the most important aspects of the quantification workflow, such as (baseline and advanced) quantification methods, quantification-oriented model selection mechanisms, evaluation measures, and evaluations protocols used for evaluating quantification methods. QuaPy also makes available commonly used datasets, and offers visualization tools for facilitating the analysis and interpretation of the experimental results.

Latest updates:

  • Version 0.1.9 has been released! The major changes can be consulted here.
  • The developer API documentation is available here.

Installation

pip install quapy

Cite QuaPy

If you find QuaPy useful (and we hope you will), please consider citing the original paper in your research:

@inproceedings{moreo2021quapy,
  title={QuaPy: a python-based framework for quantification},
  author={Moreo, Alejandro and Esuli, Andrea and Sebastiani, Fabrizio},
  booktitle={Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
  pages={4534--4543},
  year={2021}
}

A quick example:

The following script fetches a binary dataset from the UCI Machine Learning repository, then trains, applies, and evaluates a quantifier based on the Adjusted Classify & Count quantification method, using as the evaluation measure the Mean Absolute Error (MAE) between the predicted and the true class prevalence values of the test set.

import quapy as qp

# load the "yeast" dataset from the UCI Machine Learning repository
dataset = qp.datasets.fetch_UCIBinaryDataset("yeast")
training, test = dataset.train_test

# create an "Adjusted Classify & Count" quantifier and train it
model = qp.method.aggregative.ACC()
model.fit(training)

# estimate the class prevalence values of the (unlabelled) test instances
estim_prevalence = model.quantify(test.X)
true_prevalence  = test.prevalence()

# compare the estimated and the true prevalence values
error = qp.error.mae(true_prevalence, estim_prevalence)
print(f'Mean Absolute Error (MAE)={error:.3f}')

Quantification is useful in scenarios characterized by prior probability shift. In other words, there would be little point in estimating the class prevalence values of the test set if the IID assumption held, since those values would be roughly equivalent to the class prevalence values of the training set. For this reason, any quantification model should be tested across many samples, including samples whose class prevalence values differ, even markedly, from those of the training set. QuaPy implements sampling procedures and evaluation protocols that automate this workflow. See the documentation and the examples directory for detailed examples.
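As a rough illustration of this workflow, the sketch below evaluates the quantifier from the previous example over many test samples generated with the artificial prevalence protocol (APP). The names qp.protocol.APP, qp.evaluation.evaluate, and qp.environ['SAMPLE_SIZE'] are taken from the QuaPy documentation, but the exact signatures and the parameter values chosen here (sample size, number of repeats) are illustrative assumptions; consult the API documentation for the version you install.

import quapy as qp
from quapy.protocol import APP

dataset = qp.datasets.fetch_UCIBinaryDataset("yeast")
training, test = dataset.train_test

model = qp.method.aggregative.ACC()
model.fit(training)

# size of each test sample drawn by the protocol (illustrative value)
qp.environ['SAMPLE_SIZE'] = 100

# the APP generates test samples spanning widely different class prevalence values
protocol = APP(test, repeats=10)

# mean absolute error averaged over all samples generated by the protocol
mae = qp.evaluation.evaluate(model, protocol=protocol, error_metric='mae')
print(f'MAE across APP samples = {mae:.3f}')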

Features

  • Implementation of many popular quantification methods (Classify-&-Count and its variants, Expectation Maximization, quantification methods based on structured output learning, HDy, QuaNet, quantification ensembles, among others).
  • Versatile functionality for performing evaluation based on sample generation protocols (e.g., APP, NPP, etc.).
  • Implementation of most commonly used evaluation metrics (e.g., AE, RAE, NAE, NRAE, SE, KLD, NKLD, etc.).
  • Datasets frequently used in quantification (textual and numeric), including:
    • 32 UCI Machine Learning datasets.
    • 11 Twitter quantification-by-sentiment datasets.
    • 3 product reviews quantification-by-sentiment datasets.
    • 4 tasks from the LeQua competition (new in v0.1.7!).
  • Native support for binary and single-label multiclass quantification scenarios.
  • Model selection functionality that minimizes quantification-oriented loss functions (see the sketch after this list).
  • Visualization tools for analysing the experimental results.
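
As a rough illustration of the model-selection functionality, the sketch below tunes the regularization strength of the underlying classifier by minimizing MAE over validation samples generated with the APP protocol. The names GridSearchQ and split_stratified, and the parameter grid used here, are taken from the QuaPy documentation and are assumptions about the installed version rather than a verbatim recipe.

import quapy as qp
from quapy.protocol import APP
from quapy.model_selection import GridSearchQ
from sklearn.linear_model import LogisticRegression

dataset = qp.datasets.fetch_UCIBinaryDataset("yeast")
training, test = dataset.train_test

# hold out part of the training set for protocol-based validation
training, validation = training.split_stratified(train_prop=0.7)

qp.environ['SAMPLE_SIZE'] = 100

# hypothetical grid over the classifier's regularization strength
param_grid = {'classifier__C': [0.1, 1.0, 10.0]}

model = GridSearchQ(
    model=qp.method.aggregative.ACC(LogisticRegression()),
    param_grid=param_grid,
    protocol=APP(validation, repeats=10),
    error='mae',   # model selection minimizes a quantification-oriented loss
    refit=True,    # retrain the best configuration on the whole training set
)
model.fit(training)

# evaluate the selected configuration on the test set under the same protocol
mae = qp.evaluation.evaluate(model, protocol=APP(test, repeats=10), error_metric='mae')
print(f'MAE of the selected model = {mae:.3f}')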

Requirements

  • scikit-learn, numpy, scipy
  • pytorch (for QuaNet)
  • svmperf patched for quantification (see below)
  • joblib
  • tqdm
  • pandas, xlrd
  • matplotlib

Contributing

If you want to contribute improvements to QuaPy, please open a pull request against the "devel" branch.

Documentation

The developer API documentation is available here.

Check out our Wiki, in which many examples are provided.

Acknowledgments:

SoBigData++
