reppi

A Python library for representation learning — sparse coding and dictionary learning algorithms implemented close to their original formulations.

Installation

pip install reppi

Algorithms

Algorithm                     Class    Reference
----------------------------  -------  -------------------
Orthogonal Matching Pursuit   OMP      Elad et al., 2008
K-SVD                         KSVD     Aharon et al., 2006
Label Consistent K-SVD        LCKSVD   Jiang et al., 2011

Convention

reppi follows the column-major convention common in the sparse representation literature: signals are columns, so data matrices are shaped (n_features, n_samples). This matches the MATLAB toolboxes the implementations are based on. If your data is in sklearn's (n_samples, n_features) layout, transpose before passing it in.
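A minimal NumPy illustration of that transpose (the array names here are made up for the example):

```python
import numpy as np

# sklearn layout: rows are samples
X_sklearn = np.random.default_rng(0).standard_normal((100, 64))  # (n_samples, n_features)

# reppi layout: columns are signals
X = X_sklearn.T  # (n_features, n_samples)
print(X.shape)   # (64, 100)
```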

Quick Start

Sparse Coding with OMP

from reppi import OMP
import numpy as np

# D: (n_features, n_atoms), unit-norm columns
# X: (n_features, n_samples)
omp = OMP(n_nonzero_coefs=10)
Gamma = omp.encode(X, D)  # (n_atoms, n_samples)
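reppi's 'batch' mode implements Batch-OMP; for intuition, the greedy selection it accelerates can be sketched for a single signal in plain NumPy. This is an illustrative sketch, not the library's implementation:

```python
import numpy as np

def omp_single(x, D, T):
    """Greedy OMP for one signal: pick the atom most correlated with the
    residual, then re-fit all selected atoms jointly by least squares."""
    residual = x.copy()
    support = []
    for _ in range(T):
        correlations = D.T @ residual
        support.append(int(np.argmax(np.abs(correlations))))
        coefs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coefs
    gamma = np.zeros(D.shape[1])
    gamma[support] = coefs
    return gamma

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)                      # unit-norm columns
x = D[:, [3, 7, 11]] @ np.array([1.0, -2.0, 0.5])   # exactly 3-sparse signal
gamma = omp_single(x, D, T=3)
print(np.count_nonzero(gamma))                      # at most 3
```

Because the residual is kept orthogonal to the selected atoms by the least-squares refit, each iteration picks a new atom.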

Dictionary Learning with K-SVD

from reppi import KSVD

ksvd = KSVD(
    n_components=128,     # number of atoms
    n_nonzero_coefs=10,   # sparsity level T
    n_iter=20,
    verbose=True,
)
ksvd.fit(X_train)         # X_train: (n_features, n_samples)

D = ksvd.D_               # learned dictionary (n_features, n_components)
Gamma = ksvd.transform(X) # sparse codes (n_components, n_samples)
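A quick sanity check after fitting is the relative reconstruction error of D @ Gamma against X. A self-contained sketch, with random stand-ins where a real fit would supply ksvd.D_ and ksvd.transform(X):

```python
import numpy as np

def rel_error(X, D, Gamma):
    """Relative Frobenius-norm error of the sparse approximation D @ Gamma."""
    return np.linalg.norm(X - D @ Gamma) / np.linalg.norm(X)

# synthetic stand-ins just to make the snippet self-contained
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))                        # stand-in for ksvd.D_
Gamma = rng.standard_normal((128, 200))                   # stand-in sparse codes
X = D @ Gamma + 0.01 * rng.standard_normal((64, 200))     # data = model + small noise
print(rel_error(X, D, Gamma))                             # small, near the noise level
```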

Discriminative Dictionary Learning with LC-KSVD

LC-KSVD jointly learns a dictionary and a linear classifier from labelled data. Labels are passed as a one-hot matrix H of shape (n_classes, n_samples).
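If you have integer class labels, the one-hot matrix H can be built with a single indexing expression (plain NumPy, label values illustrative):

```python
import numpy as np

labels = np.array([0, 2, 1, 2, 0])   # integer class label per sample
n_classes = 3
H = np.eye(n_classes)[:, labels]     # (n_classes, n_samples); column j is e_{labels[j]}
print(H.shape)                       # (3, 5)
```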

LC-KSVD1 — reconstruction + label-consistency:

from reppi import LCKSVD

model = LCKSVD(
    n_components=570,
    n_nonzero_coefs=30,
    alpha=4.0,            # weight for label-consistency term
    variant="lcksvd1",
    n_iter=50,
    n_iter_init=20,       # K-SVD iterations for initialisation
    verbose=True,
)
model.fit(X_train, H_train)
Gamma = model.transform(X_test)  # (n_components, n_samples)

LC-KSVD2 — reconstruction + label-consistency + classification error:

model = LCKSVD(
    n_components=570,
    n_nonzero_coefs=30,
    alpha=4.0,
    beta=2.0,             # weight for classifier term
    variant="lcksvd2",
    n_iter=50,
    n_iter_init=20,
    verbose=True,
)
model.fit(X_train, H_train)

predictions = model.predict(X_test)        # integer class indices
accuracy    = model.score(X_test, H_test)  # float in [0, 1]

Supplying Your Own Initialisation

If you already have a pre-trained dictionary or want to control initialisation:

from reppi.dictionary.lc_ksvd import initialization4lcksvd

D_init, A_init, W_init, Q = initialization4lcksvd(
    X_train, H_train,
    n_components=570,
    n_iter_init=20,
    n_nonzero_coefs=30,
)

model.fit(X_train, H_train, D_init=D_init, A_init=A_init, W_init=W_init, Q=Q)

API Reference

OMP

OMP(n_nonzero_coefs, mode='batch', check_dict=True)
Parameter         Description
----------------  -----------
n_nonzero_coefs   Maximum non-zeros per signal (sparsity T)
mode              'batch' (Batch-OMP, fast) or 'cholesky' (single-signal, low memory)
check_dict        Verify unit-norm columns before encoding

Methods: encode(X, D, G=None) → Gamma
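The optional G argument is the precomputed Gram matrix of the dictionary (G = D'D, the same quantity KSVD's mem_usage setting controls). Computing it once and reusing it across multiple encode calls avoids repeated work; a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)   # OMP expects unit-norm atoms

G = D.T @ D                      # Gram matrix, (n_atoms, n_atoms)
# reuse across calls, e.g.:
#   omp.encode(X_batch1, D, G=G)
#   omp.encode(X_batch2, D, G=G)
print(G.shape)                   # (128, 128)
```

With unit-norm atoms the diagonal of G is all ones, and the off-diagonal entries are the pairwise atom correlations that mu_thresh in KSVD checks.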

KSVD

KSVD(n_components, n_nonzero_coefs, n_iter=10, exact_svd=False,
     mu_thresh=0.99, mem_usage='normal', random_state=None, verbose=False)
Parameter         Description
----------------  -----------
n_components      Number of dictionary atoms
n_nonzero_coefs   Sparsity level T
n_iter            Training iterations
exact_svd         Use full SVD in atom update (slower, slightly better)
mu_thresh         Mutual-incoherence threshold; atoms with correlation above this are replaced (default 0.99)
mem_usage         'high' / 'normal' / 'low': controls whether G=D'D and DtX are precomputed

Methods: fit(X, D_init=None), transform(X), fit_transform(X)
Attributes: D_, errors_

LCKSVD

LCKSVD(n_components, n_nonzero_coefs, alpha=4.0, beta=2.0,
       variant='lcksvd2', n_iter=50, n_iter_init=20, exact_svd=False,
       mu_thresh=0.99, random_state=None, verbose=False)
Parameter      Description
-------------  -----------
alpha          Weight for label-consistency term (√α in the paper)
beta           Weight for classifier term (√β); LC-KSVD2 only
variant        'lcksvd1' or 'lcksvd2'
n_iter_init    K-SVD iterations for the warm-start initialisation phase

Methods: fit(X, H, ...), transform(X), predict(X), score(X, H)
Attributes: D_, W_, A_, errors_
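Prediction in LC-KSVD is linear classification on the sparse codes: Jiang et al. classify each sample by the argmax over classes of W @ Gamma. A plain-NumPy sketch of that rule, with random stand-ins for the fitted W_ and the codes:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 128))        # stand-in for model.W_, (n_classes, n_components)
Gamma = rng.standard_normal((128, 10))   # stand-in sparse codes, (n_components, n_samples)

scores = W @ Gamma                       # class scores, (n_classes, n_samples)
predictions = scores.argmax(axis=0)      # integer class index per sample
print(predictions.shape)                 # (10,)
```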

References

  • M. Aharon, M. Elad, A. Bruckstein. "The K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation". IEEE Trans. Signal Processing, 54(11), 2006.
  • M. Elad, R. Rubinstein, M. Zibulevsky. "Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit". Technion Technical Report, 2008.
  • Z. Jiang, Z. Lin, L. Davis. "Learning A Discriminative Dictionary for Sparse Coding via Label Consistent K-SVD". CVPR, 2011.
