A small library for Consistent Optimization of Label-wise Utilities in Multi-label classificatioN
Project description
xCOLUMNs: Consistent Optimization of Label-wise Utilities in Multi-label classificatioN
xCOLUMNs is a small Python library that implements different methods for optimizing a general family of metrics defined on multi-label classification matrices. These include, but are not limited to, label-wise metrics. The library provides efficient implementations of the optimization methods that scale to extreme multi-label classification (XMLC): problems with a very large number of labels and instances.
All the methods operate on conditional probability estimates of the labels, which are the output of multi-label classification models. Based on these estimates, the methods aim to find the optimal prediction for a given test set, or to find the optimal population classifier as a plug-in rule on top of the conditional probability estimator. This makes the library very flexible: it can be used with any multi-label classification model that provides conditional probability estimates. The library directly supports numpy arrays, PyTorch tensors, and sparse CSR matrices from scipy as input/output data types.
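To illustrate the plug-in idea in plain numpy (the library also accepts PyTorch tensors and scipy CSR matrices), the sketch below turns a matrix of conditional probability estimates into a 0/1 prediction matrix with a simple top-k rule. The function name is illustrative and not part of the xCOLUMNs API:

```python
import numpy as np

def predict_top_k(eta: np.ndarray, k: int) -> np.ndarray:
    """Return a 0/1 prediction matrix selecting, for each instance,
    the k labels with the highest estimated probability."""
    n, m = eta.shape
    y_pred = np.zeros((n, m), dtype=np.int8)
    # argpartition finds the k largest entries per row without a full sort
    top = np.argpartition(-eta, kth=k - 1, axis=1)[:, :k]
    y_pred[np.arange(n)[:, None], top] = 1
    return y_pred

rng = np.random.default_rng(0)
eta = rng.random((4, 10))   # stand-in for model probability estimates
y_pred = predict_top_k(eta, k=3)
print(y_pred.sum(axis=1))   # each row predicts exactly 3 labels
```

The methods in the library refine this baseline: instead of always taking the k most probable labels, they weight or re-rank labels so that the resulting predictions optimize the chosen metric.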
For more details, please see our short usage guide, the documentation, and/or the papers that describe the methods implemented in the library.
Quick start
Installation
The library can be installed using pip:
pip install xcolumns
It should work on all major platforms (Linux, macOS, Windows) and with Python 3.8+.
Usage
We provide a short usage guide for the library in the short_usage_guide.ipynb notebook. You can also check the documentation for more details.
Methods, usage, and how to cite
The library implements the following methods:
Instance-wise weighted prediction
The library implements a set of methods for instance-wise weighted prediction, including optimal prediction strategies for different metrics, such as:
- Precision at k
- Propensity-scored precision at k
- Macro-averaged recall at k
- Macro-averaged balanced accuracy at k
- and others ...
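For concreteness, two of the metrics above can be computed from a 0/1 prediction matrix and the true label matrix as follows. This is a minimal numpy sketch with illustrative function names, not the library's API:

```python
import numpy as np

def precision_at_k(y_true: np.ndarray, y_pred: np.ndarray, k: int) -> float:
    # fraction of the k predicted labels per instance that are relevant,
    # averaged over instances (assumes each row of y_pred has k ones)
    return (y_true * y_pred).sum(axis=1).mean() / k

def macro_recall(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-9) -> float:
    # per-label recall, averaged over labels; eps guards labels with no positives
    tp = (y_true * y_pred).sum(axis=0)
    pos = y_true.sum(axis=0)
    return (tp / (pos + eps)).mean()

y_true = np.array([[1, 0, 1, 0], [0, 1, 1, 0]])
y_pred = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])
print(precision_at_k(y_true, y_pred, k=2))  # (1 + 2) / (2 * 2) = 0.75
print(macro_recall(y_true, y_pred))         # (1 + 1 + 0.5 + 0) / 4 = 0.625
```

The "at k" in the macro-averaged metrics refers to the constraint that each instance predicts exactly k labels; the optimization methods below search over predictions satisfying that budget.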
Optimization of prediction for a given test set using Block Coordinate Ascent/Descent (BCA/BCD)
The method aims to optimize the prediction for a given test set using the block coordinate ascent/descent algorithm.
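As a rough illustration of the idea (not the library's implementation), the numpy sketch below performs block coordinate ascent on an expected macro-F1 objective at a budget of k labels per instance. Each block is one instance's prediction row, re-optimized while the other rows are held fixed; because the objective is additive over labels, the optimal row update reduces to picking the k labels with the largest marginal gains. The function names and the choice of macro-F1 are illustrative assumptions:

```python
import numpy as np

def expected_macro_f1(tp, fp, fn, eps=1e-9):
    # macro-F1 written in terms of expected per-label counts
    return (2 * tp / (2 * tp + fp + fn + eps)).mean()

def bca_macro_f1_at_k(eta: np.ndarray, k: int, n_passes: int = 3) -> np.ndarray:
    n, m = eta.shape
    # start from the plain top-k prediction
    y = np.zeros((n, m), dtype=np.int8)
    top = np.argpartition(-eta, k - 1, axis=1)[:, :k]
    y[np.arange(n)[:, None], top] = 1
    # expected confusion counts under the probability estimates eta
    tp = (y * eta).sum(axis=0)
    fp = (y * (1 - eta)).sum(axis=0)
    fn = ((1 - y) * eta).sum(axis=0)
    for _ in range(n_passes):
        for i in range(n):
            # remove row i's contribution to the expected counts
            tp -= y[i] * eta[i]
            fp -= y[i] * (1 - eta[i])
            fn -= (1 - y[i]) * eta[i]
            # per-label F1 with y[i, j] = 1 vs. y[i, j] = 0, others fixed
            f1_on = 2 * (tp + eta[i]) / (2 * (tp + eta[i]) + fp + (1 - eta[i]) + fn + 1e-9)
            f1_off = 2 * tp / (2 * tp + fp + fn + eta[i] + 1e-9)
            gains = f1_on - f1_off
            # optimal row update: the k labels with the largest gains
            y[i] = 0
            y[i, np.argpartition(-gains, k - 1)[:k]] = 1
            tp += y[i] * eta[i]
            fp += y[i] * (1 - eta[i])
            fn += (1 - y[i]) * eta[i]
    return y

rng = np.random.default_rng(0)
eta = rng.random((50, 16))
y = bca_macro_f1_at_k(eta, k=5)
print(y.sum(axis=1))  # every row predicts exactly 5 labels
```

Each row update can only improve the objective, so the expected macro-F1 is non-decreasing across passes; the real implementation adds the engineering needed to make such sweeps fast at XMLC scale.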
The method was first introduced and described in the paper:
Finding optimal population classifier via Frank-Wolfe (FW)
The method was first introduced and described in the paper:
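To give a flavor of the Frank-Wolfe approach (again a sketch, not the library's implementation), the following numpy code runs Frank-Wolfe on a relaxed prediction polytope: each row of a fractional prediction matrix may select at most k labels in expectation. The key property illustrated is that the linear maximization step of Frank-Wolfe is itself a simple plug-in rule driven by the gradient of the metric, here taken to be expected macro-F1; all names are illustrative assumptions:

```python
import numpy as np

def fw_macro_f1_at_k(eta: np.ndarray, k: int, n_iters: int = 50, eps: float = 1e-9) -> np.ndarray:
    n, m = eta.shape
    p = np.zeros((n, m))  # fractional (randomized) predictions in [0, 1]
    for t in range(n_iters):
        # expected confusion counts under the current fractional predictions
        tp = (p * eta).sum(axis=0)
        fp = (p * (1 - eta)).sum(axis=0)
        fn = ((1 - p) * eta).sum(axis=0)
        denom = 2 * tp + fp + fn + eps
        # gradient of mean per-label F1 with respect to p[i, j], via the chain
        # rule through (tp, fp, fn): d tp_j = eta, d fp_j = 1 - eta, d fn_j = -eta
        grad = 2 * (eta * (fp + fn) + tp * (2 * eta - 1)) / denom**2 / m
        # linear maximization over the polytope: per row, take up to k labels
        # with the largest positive gradient entries (a plug-in rule)
        s = np.zeros_like(p)
        rows = np.arange(n)[:, None]
        top = np.argpartition(-grad, k - 1, axis=1)[:, :k]
        s[rows, top] = (grad[rows, top] > 0)
        # standard Frank-Wolfe step size and convex combination update
        gamma = 2 / (t + 2)
        p = (1 - gamma) * p + gamma * s
    return p

rng = np.random.default_rng(0)
eta = rng.random((50, 16))
p = fw_macro_f1_at_k(eta, k=3)
print(p.shape)
```

In the actual method, the iterates correspond to a randomized ensemble of plug-in classifiers over the population, which is why the result is a classifier rather than a fixed prediction matrix for one test set.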
Repository structure
The repository is organized as follows:
docs/
- Sphinx documentation (work in progress)
experiments/
- code for reproducing experiments from the papers; see the README.md file in the directory for details
xcolumns/
- Python package with the library
tests/
- tests for the library (the coverage is a bit limited at the moment, but these tests should ensure that the main components of the library work as expected)
Development and contributing
The library was created as part of our research projects. We are happy to share it with the community and hope that others will find it useful. If you have any questions or suggestions, or if you have found a bug, please open an issue. We are also happy to accept contributions in the form of pull requests.
Project details
Release history
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
File details
Details for the file xcolumns-0.0.2.tar.gz.
File metadata
- Download URL: xcolumns-0.0.2.tar.gz
- Upload date:
- Size: 31.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 52985ba3ac946a044043f69aa83068e7ae212871b5eaaae56124c35d2b7e9281
MD5 | dc181187b68eabe81d1abf8acdb5dd58
BLAKE2b-256 | cf0169c473153bbcd3aaae375b8e57b0035afe3b17e1ba65462faff5377a2304