PyLDL
Label distribution learning (LDL) and label enhancement (LE) toolkit implemented in Python, including:
- LDL algorithms:
  - (Geng, Yin, and Zhou 2013) [TPAMI]: `CPNN` $^1$.
  - (Geng and Hou 2015) [IJCAI]: `LDSVR`.
  - ⭐ (Geng 2016) [TKDE]: `SA_BFGS`, `SA_IIS`, `AA_KNN`, `AA_BP`, `PT_Bayes`, and `PT_SVM`.
  - (Yang, Sun, and Sun 2017) [AAAI]: `BCPNN` and `ACPNN`.
  - (Xu and Zhou 2017) [IJCAI]: `IncomLDL` $^2$.
  - (Shen et al. 2017) [NeurIPS]: `LDLF`.
  - (Wang and Geng 2019) [IJCAI]: `LDL4C` $^3$.
  - (Shen et al. 2020) [Journal of Nanjing University of Science and Technology (in Chinese)]: `AdaBoostLDL`.
  - (González et al. 2021a) [Inf. Sci.]: `SSG_LDL` $^4$.
  - (González et al. 2021b) [Inf. Fusion]: `DF_LDL`.
  - (Wang and Geng 2021a) [IJCAI]: `LDL_HR` $^3$.
  - (Wang and Geng 2021b) [ICML]: `LDLM` $^3$.
  - (Jia et al. 2021) [TKDE]: `LDL_SCL`.
  - (Jia et al. 2023a) [TKDE]: `LDL_LRR`.
  - (Jia et al. 2023b) [TNNLS]: `LDL_DPA`.
  - (Wen et al. 2023) [ICCV]: `CAD` $^1$, `QFD2` $^1$, and `CJS` $^1$.
- LE algorithms:
  - (Xu, Liu, and Geng 2019) [TKDE]: `FCM`, `KM`, `LP`, `ML`, and `GLLE`.
  - (Xu et al. 2020) [ICML]: `LEVI`.
  - (Zheng, Zhu, and Tang 2023) [CVPR]: `LIBLE`.
- LDL metrics: `chebyshev`, `clark`, `canberra`, `kl_divergence`, `cosine`, `intersection`, etc.
- Structured LDL datasets: Human_Gene, Movie, Natural_Scene, s-BU_3DFE, s-JAFFE, Yeast, etc.
- LDL applications:
  - Facial emotion recognition (supported datasets: JAFFE).
  - (Shirani et al. 2019) [ACL]: Emphasis selection (supported datasets: SemEval2020; pre-trained GloVe embeddings can be downloaded here).
  - (Wu et al. 2019) [ICCV]: Lesion counting (supported datasets: ACNE04).
  - (Chen et al. 2020) [CVPR]: Facial emotion recognition with auxiliary label space graphs (supported datasets: CK+; OpenFace can be downloaded here, and the required models can be downloaded here).
$^1$ Technically, these methods are only suitable for totally ordered labels.

$^2$ These are algorithms for incomplete LDL, so you should use `pyldl.utils.random_missing` to generate the missing label-distribution matrix and the corresponding mask matrix in the experiments.

$^3$ These are LDL classifiers, so you should use `predict_proba` to get label distributions and `predict` to get predicted labels.

$^4$ These are oversampling algorithms for LDL, so you should use `fit_transform` to generate synthetic samples.
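To illustrate footnote $^2$: a function like `pyldl.utils.random_missing` presumably hides a random fraction of the entries of the label-distribution matrix and returns the masked matrix together with a mask of observed entries. The sketch below is a hypothetical re-implementation of that idea — the real signature and return values in PyLDL may differ:

```python
import random

def random_missing(Y, missing_rate=0.3, seed=0):
    """Hypothetical sketch: hide a random fraction of entries of the
    label-distribution matrix Y (a list of rows), returning the masked
    matrix and a boolean mask (True = observed)."""
    rng = random.Random(seed)
    Y_missing, mask = [], []
    for row in Y:
        observed = [rng.random() >= missing_rate for _ in row]
        mask.append(observed)
        # Hidden entries are zeroed out; the mask records which are real.
        Y_missing.append([y if keep else 0.0 for y, keep in zip(row, observed)])
    return Y_missing, mask

Y = [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3]]
Y_missing, mask = random_missing(Y, missing_rate=0.5, seed=1)
```

An incomplete-LDL model such as `IncomLDL` would then be trained on `Y_missing` with `mask` indicating which entries to trust.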
Installation
PyLDL is now available on PyPI. Use the following command to install it:

```shell
pip install python-ldl
```

To install the newest version from source, clone this repo and run the `setup.py` file:

```shell
python setup.py install
```
Usage
Here is an example of using PyLDL.

```python
from sklearn.model_selection import train_test_split

from pyldl.utils import load_dataset
from pyldl.algorithms import SA_BFGS
from pyldl.metrics import score

dataset_name = 'SJAFFE'
X, y = load_dataset(dataset_name)
X_train, X_test, y_train, y_test = train_test_split(X, y)

model = SA_BFGS()
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(score(y_test, y_pred))
```
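The `score` call above reports measures such as `chebyshev` and `kl_divergence`. Their standard definitions (Geng 2016) can be sketched in plain Python — an illustrative re-implementation, not the actual `pyldl.metrics` source:

```python
import math

# Standard LDL measures (Geng 2016). p and q are label distributions that
# sum to 1; kl_divergence assumes all entries are strictly positive.
def chebyshev(p, q):
    return max(abs(a - b) for a, b in zip(p, q))

def clark(p, q):
    return math.sqrt(sum((a - b) ** 2 / (a + b) ** 2 for a, b in zip(p, q)))

def canberra(p, q):
    return sum(abs(a - b) / (a + b) for a, b in zip(p, q))

def kl_divergence(p, q):
    return sum(a * math.log(a / b) for a, b in zip(p, q))

def cosine(p, q):
    norm = math.sqrt(sum(a * a for a in p) * sum(b * b for b in q))
    return sum(a * b for a, b in zip(p, q)) / norm

def intersection(p, q):
    return sum(min(a, b) for a, b in zip(p, q))

p = [0.5, 0.3, 0.2]  # ground-truth label distribution
q = [0.4, 0.4, 0.2]  # predicted label distribution
print(round(chebyshev(p, q), 3))  # 0.1
```

The first four are distances (lower is better), while cosine similarity and intersection are similarities (higher is better) — matching the ↓/↑ arrows in the tables below.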
For those who would like to use the original implementation:

- Install MATLAB.
- Install the MATLAB engine for Python.
- Download the LDL Package here.
- Get the package directory of PyLDL (`...\Lib\site-packages\pyldl`).
- Place the `LDLPackage_v1.2` folder into the `matlab_algorithms` folder.

Now you can load the original implementation of a method, e.g.:

```python
from pyldl.matlab_algorithms import SA_IIS
```
You can visualize the performance of any model on the artificial dataset (Geng 2016) with the `pyldl.utils.plot_artificial` function, e.g.:

```python
from pyldl.algorithms import LDSVR, SA_BFGS, SA_IIS, AA_KNN, PT_Bayes, GLLE, LIBLE
from pyldl.utils import plot_artificial

methods = ['LDSVR', 'SA_BFGS', 'SA_IIS', 'AA_KNN', 'PT_Bayes', 'GLLE', 'LIBLE']

plot_artificial(model=None, figname='GT')
for i in methods:
    plot_artificial(model=eval(f'{i}()'), figname=i)
```
The output images show the ground truth followed by the predictions of LDSVR, SA_BFGS, SA_IIS, AA_KNN, PT_Bayes, GLLE, and LIBLE.
Enjoy! :)
Experiments
For each algorithm, ten-fold cross-validation is performed on the s-JAFFE dataset, repeated 10 times, and the average metrics are recorded. These results therefore do not fully characterize the performance of the models.
Our results are as follows.
Algorithm | Cheby.(↓) | Clark(↓) | Can.(↓) | K-L(↓) | Cos.(↑) | Int.(↑) |
---|---|---|---|---|---|---|
SA-BFGS | .092 ± .010 | .361 ± .029 | .735 ± .060 | .051 ± .009 | .954 ± .009 | .878 ± .011 |
SA-IIS | .100 ± .009 | .361 ± .023 | .746 ± .050 | .051 ± .008 | .952 ± .007 | .873 ± .009 |
AA-kNN | .098 ± .011 | .349 ± .029 | .716 ± .062 | .053 ± .010 | .950 ± .009 | .877 ± .011 |
AA-BP | .120 ± .012 | .426 ± .025 | .889 ± .057 | .073 ± .010 | .931 ± .010 | .848 ± .011 |
PT-Bayes | .116 ± .011 | .425 ± .031 | .874 ± .064 | .073 ± .012 | .932 ± .011 | .850 ± .012 |
PT-SVM | .117 ± .012 | .422 ± .027 | .875 ± .057 | .072 ± .011 | .932 ± .011 | .850 ± .011 |
Results of the original MATLAB implementation (Geng 2016) are as follows.
Algorithm | Cheby.(↓) | Clark(↓) | Can.(↓) | K-L(↓) | Cos.(↑) | Int.(↑) |
---|---|---|---|---|---|---|
SA-BFGS | .107 ± .015 | .399 ± .044 | .820 ± .103 | .064 ± .016 | .940 ± .015 | .860 ± .019 |
SA-IIS | .117 ± .015 | .419 ± .034 | .875 ± .086 | .070 ± .012 | .934 ± .012 | .851 ± .016 |
AA-kNN | .114 ± .017 | .410 ± .050 | .843 ± .113 | .071 ± .023 | .934 ± .018 | .855 ± .021 |
AA-BP | .130 ± .017 | .510 ± .054 | 1.05 ± .124 | .113 ± .030 | .908 ± .019 | .824 ± .022 |
PT-Bayes | .121 ± .016 | .430 ± .035 | .904 ± .086 | .074 ± .014 | .930 ± .016 | .846 ± .016 |
PT-SVM | .127 ± .017 | .457 ± .039 | .935 ± .074 | .086 ± .016 | .920 ± .014 | .839 ± .015 |
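The evaluation protocol above (ten-fold cross-validation repeated 10 times, reported as mean ± std over all 100 folds) can be outlined as an index generator. This is an illustrative sketch of the protocol, not the script used to produce the tables:

```python
import math
import random

def repeated_kfold_indices(n, k=10, repeats=10, seed=0):
    """Yield (train, test) index lists for `repeats` shuffled k-fold splits."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for f in range(k):
            test = folds[f]
            train = [i for j in range(k) if j != f for i in folds[j]]
            yield train, test

def mean_std(scores):
    """Aggregate per-fold scores into the 'mean ± std' entries of the tables."""
    mu = sum(scores) / len(scores)
    sd = math.sqrt(sum((s - mu) ** 2 for s in scores) / (len(scores) - 1))
    return mu, sd

splits = list(repeated_kfold_indices(20, k=10, repeats=2, seed=0))
print(len(splits))  # 2 repeats x 10 folds = 20 (train, test) pairs
```

In each fold, a model would be fit on the training indices and scored on the test indices; `mean_std` then yields one table cell per metric.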
Requirements
```
matplotlib>=3.6.1
numpy>=1.22.3
qpsolvers>=4.0.0
quadprog>=0.1.11
scikit-fuzzy>=0.4.2
scikit-learn>=1.0.2
scipy>=1.8.0
tensorflow>=2.8.0
tensorflow-probability>=0.16.0
```