PyLDL

Label distribution learning (LDL) and label enhancement (LE) toolkit implemented in Python, including:

$^1$ Technically, these methods are only suitable for totally ordered labels.

$^2$ These are algorithms for incomplete LDL, so you should use pyldl.utils.random_missing to generate the missing label distribution matrix and the corresponding mask matrix in the experiments.

$^3$ These are LDL classifiers, so you should use predict_proba to get label distributions and predict to get predicted labels (a short NumPy sketch follows these notes).

$^4$ These are oversampling algorithms for LDL, therefore you should use fit_transform to generate synthetic samples.
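
To make note 3 concrete: predict_proba returns a full label distribution per sample, while predict collapses it to a single label. The collapse can be pictured with plain NumPy (a conceptual sketch only, not PyLDL's internal code; the argmax rule below is an assumption about how a classifier might pick its label):

import numpy as np

# A toy matrix of predicted label distributions:
# one row per sample, one column per label, each row summing to 1.
D_pred = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.3, 0.6]])

# A classifier's predict() reduces each distribution to one label,
# illustrated here by taking the most probable label.
labels = D_pred.argmax(axis=1)   # -> array([0, 2])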

Installation

PyLDL is now available on PyPI. Use the following command to install it.

pip install python-ldl

To install the newest version, clone this repo and run the setup.py file.

python setup.py install
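
Alternatively (assuming a standard setuptools layout, which the setup.py above suggests), a regular pip install from the cloned source also works:

pip install .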

Usage

Here is an example of using PyLDL.

from pyldl.utils import load_dataset
from pyldl.algorithms import SA_BFGS
from pyldl.metrics import score

from sklearn.model_selection import train_test_split

# Load a real-world dataset and split it into training and test sets.
dataset_name = 'SJAFFE'
X, y = load_dataset(dataset_name)
X_train, X_test, y_train, y_test = train_test_split(X, y)

# Train an LDL model.
model = SA_BFGS()
model.fit(X_train, y_train)

# Predict label distributions and evaluate them against the ground truth.
y_pred = model.predict(X_test)
print(score(y_test, y_pred))

For those who would like to use the original MATLAB implementations:

  1. Install MATLAB.
  2. Install the MATLAB Engine API for Python.
  3. Download the LDL Package here.
  4. Locate the package directory of PyLDL (...\Lib\site-packages\pyldl).
  5. Place the LDLPackage_v1.2 folder into the matlab_algorithms folder.

Now you can import the original implementation of a method, e.g.:

from pyldl.matlab_algorithms import SA_IIS
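
A minimal usage sketch, assuming the MATLAB-backed class exposes the same fit/predict interface as its pure-Python counterpart (the source above only shows the import):

from pyldl.matlab_algorithms import SA_IIS
from pyldl.utils import load_dataset

from sklearn.model_selection import train_test_split

# Assumption: the MATLAB-backed SA_IIS follows the same interface
# as the Python algorithms shown earlier.
X, y = load_dataset('SJAFFE')
X_train, X_test, y_train, y_test = train_test_split(X, y)

model = SA_IIS()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)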

You can visualize the performance of any model on the artificial dataset (Geng 2016) with the pyldl.utils.plot_artificial function, e.g.:

from pyldl.algorithms import LDSVR, SA_BFGS, SA_IIS, AA_KNN, PT_Bayes, GLLE, LIBLE
from pyldl.utils import plot_artificial

methods = [LDSVR, SA_BFGS, SA_IIS, AA_KNN, PT_Bayes, GLLE, LIBLE]

# Plot the ground truth first, then the output of each model.
plot_artificial(model=None, figname='GT')
for method in methods:
    plot_artificial(model=method(), figname=method.__name__)

The output images show, in order: the ground truth, LDSVR, SA_BFGS, SA_IIS, AA_KNN, PT_Bayes, GLLE, and LIBLE.

Enjoy! :)

Experiments

For each algorithm, ten-fold cross validation is performed and repeated 10 times on the SJAFFE dataset, and the average metrics are recorded. Note that these averages alone do not fully characterize the performance of a model.
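
A rough sketch of that protocol is given below (a reconstruction under assumptions, not the exact script used for the tables; it assumes score returns one value per metric, as the usage example above suggests, and that load_dataset returns NumPy arrays):

import numpy as np
from sklearn.model_selection import KFold

from pyldl.algorithms import SA_BFGS
from pyldl.metrics import score
from pyldl.utils import load_dataset

X, y = load_dataset('SJAFFE')   # assumed to be NumPy arrays

results = []
for run in range(10):                                  # 10 repetitions ...
    kf = KFold(n_splits=10, shuffle=True, random_state=run)
    for train_idx, test_idx in kf.split(X):            # ... of ten-fold CV
        model = SA_BFGS()
        model.fit(X[train_idx], y[train_idx])
        results.append(score(y[test_idx], model.predict(X[test_idx])))

# Average each metric over the 100 folds (mean and standard deviation).
print(np.mean(results, axis=0))
print(np.std(results, axis=0))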

Results of the PyLDL implementations are as follows.

Algorithm Chebyshev(↓) Clark(↓) Canberra(↓) K-L(↓) Cosine(↑) Intersection(↑)
SA-BFGS .092 ± .010 .361 ± .029 .735 ± .060 .051 ± .009 .954 ± .009 .878 ± .011
SA-IIS .100 ± .009 .361 ± .023 .746 ± .050 .051 ± .008 .952 ± .007 .873 ± .009
AA-kNN .098 ± .011 .349 ± .029 .716 ± .062 .053 ± .010 .950 ± .009 .877 ± .011
AA-BP .120 ± .012 .426 ± .025 .889 ± .057 .073 ± .010 .931 ± .010 .848 ± .011
PT-Bayes .116 ± .011 .425 ± .031 .874 ± .064 .073 ± .012 .932 ± .011 .850 ± .012
PT-SVM .117 ± .012 .422 ± .027 .875 ± .057 .072 ± .011 .932 ± .011 .850 ± .011

Results of the original MATLAB implementation (Geng 2016) are as follows.

Algorithm Chebyshev(↓) Clark(↓) Canberra(↓) K-L(↓) Cosine(↑) Intersection(↑)
SA-BFGS .107 ± .015 .399 ± .044 .820 ± .103 .064 ± .016 .940 ± .015 .860 ± .019
SA-IIS .117 ± .015 .419 ± .034 .875 ± .086 .070 ± .012 .934 ± .012 .851 ± .016
AA-kNN .114 ± .017 .410 ± .050 .843 ± .113 .071 ± .023 .934 ± .018 .855 ± .021
AA-BP .130 ± .017 .510 ± .054 1.05 ± .124 .113 ± .030 .908 ± .019 .824 ± .022
PT-Bayes .121 ± .016 .430 ± .035 .904 ± .086 .074 ± .014 .930 ± .016 .846 ± .016
PT-SVM .127 ± .017 .457 ± .039 .935 ± .074 .086 ± .016 .920 ± .014 .839 ± .015

Requirements

matplotlib>=3.6.1
numpy>=1.22.3
qpsolvers>=4.0.0
quadprog>=0.1.11
scikit-fuzzy>=0.4.2
scikit-learn>=1.0.2
scipy>=1.8.0
tensorflow>=2.8.0
tensorflow-probability>=0.16.0
