LibAUC: A Machine Learning Library to directly Optimize AUC (AUROC, AUPRC)
An end-to-end machine learning library for AUC optimization (AUROC, AUPRC).
Why LibAUC?
Deep AUC Maximization (DAM) is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. Maximizing the AUC score has several benefits over minimizing standard losses such as cross-entropy:
- In many domains, the AUC score is the default metric for evaluating and comparing methods. Directly maximizing it can therefore yield the largest improvement in model performance.
- Many real-world datasets are imbalanced. AUC is better suited to imbalanced data distributions, since maximizing AUC amounts to ranking the prediction score of every positive example above that of every negative example.
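The ranking view of AUROC can be made concrete: AUROC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A plain-Python sketch of this definition (illustrative only, not part of LibAUC):

```python
# Illustrative only: AUROC as the probability that a random positive
# example is scored above a random negative one (ties count 1/2).
def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # → 0.75
```

Because this quantity is a non-differentiable count over pairs, DAM methods optimize smooth surrogates of it instead.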
Links
- Official Website: https://libauc.org
- Release Notes: https://github.com/Optimization-AI/LibAUC/releases
- Repository: https://github.com/Optimization-AI/LibAUC/
Installation
$ pip install libauc
Usage
Quickstart for Beginners:
Optimizing AUROC (Area Under the Receiver Operating Characteristic)
>>> # import library
>>> from libauc.losses import AUCMLoss
>>> from libauc.optimizers import PESG
...
>>> # define loss and optimizer
>>> Loss = AUCMLoss()
>>> optimizer = PESG()
...
>>> # training
>>> model.train()
>>> for data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets)
...     optimizer.zero_grad()
...     loss.backward()
...     optimizer.step()
...
>>> # restart stage
>>> optimizer.update_regularizer()
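AUCMLoss implements the min-max margin surrogate from the ICCV paper cited below; the core idea behind such surrogates is to penalize positive-negative score pairs whose margin is too small. A toy pairwise squared-hinge surrogate, shown here only to illustrate that idea (it is not LibAUC's actual AUCMLoss formulation), looks like:

```python
# Toy AUC surrogate (NOT LibAUC's AUCMLoss): squared hinge on every
# positive-negative score pair. Minimizing it pushes each positive
# score above each negative score by at least `margin`.
def pairwise_auc_surrogate(scores, labels, margin=1.0):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    losses = [max(0.0, margin - (p - n)) ** 2 for p in pos for n in neg]
    return sum(losses) / len(losses)

print(pairwise_auc_surrogate([2.0, 0.0], [1, 0]))  # → 0.0 (separated by the full margin)
```

The quadratic number of pairs is why naive pairwise surrogates scale poorly; the AUCM formulation avoids enumerating pairs, which is what makes it practical for large datasets.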
Optimizing AUPRC (Area Under the Precision-Recall Curve)
>>> # import library
>>> from libauc.losses import APLoss_SH
>>> from libauc.optimizers import SOAP_SGD, SOAP_ADAM
...
>>> # define loss and optimizer
>>> Loss = APLoss_SH()
>>> optimizer = SOAP_ADAM()
...
>>> # training
>>> model.train()
>>> for index, data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets, index)
...     optimizer.zero_grad()
...     loss.backward()
...     optimizer.step()
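Note that, unlike the AUROC example, the loss here also takes the sample `index`; as we understand the API, this lets stochastic AP-maximization methods such as SOAP maintain per-example running statistics across iterations. The target quantity, AUPRC, is commonly estimated by average precision, which can be computed directly from ranked scores (illustrative sketch, not part of LibAUC):

```python
# Illustrative only: average precision = mean of the precision evaluated
# at each positive example, after ranking all examples by score (descending).
def average_precision(scores, labels):
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            total += hits / rank
    return total / hits

print(average_precision([0.9, 0.8, 0.7], [1, 0, 1]))  # (1/1 + 2/3) / 2 ≈ 0.833
```

Like AUROC, this ranking-based quantity is non-differentiable, so methods such as SOAP optimize a smooth surrogate of it.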
Please visit our website or GitHub repository for more examples.
Citation
If you find LibAUC useful in your work, please cite the following paper:
@inproceedings{yuan2021robust,
  title={Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification},
  author={Yuan, Zhuoning and Yan, Yan and Sonka, Milan and Yang, Tianbao},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}
Contact
If you have any questions, please contact Zhuoning Yuan [yzhuoning@gmail.com] and Tianbao Yang [tianbao-yang@uiowa.edu], or open a new issue on GitHub.