Kernel FDA implementation described in https://arxiv.org/abs/1906.09436

Project description

Kernel FDA

This repository implements Kernel Fisher Discriminant Analysis (Kernel FDA) as described in https://arxiv.org/abs/1906.09436. FDA, also known as Linear Discriminant Analysis (LDA), is a classification method that projects vectors onto a smaller subspace. This subspace is optimized to maximize between-class scatter and minimize within-class scatter, which makes the projected classes easy to separate. Kernel FDA improves on regular FDA by enabling nonlinear subspaces through the kernel trick.
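
For reference, FDA picks projection directions w that maximize the Fisher criterion, the ratio of between-class scatter S_B to within-class scatter S_W (standard textbook notation, not quoted from the paper):

    J(w) = (w^T S_B w) / (w^T S_W w)

Kernel FDA maximizes the same ratio, with the scatter matrices expressed through kernel evaluations, so the directions live in the kernel-induced feature space.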

FDA and Kernel FDA classify vectors by comparing their projections in the Fisher subspace to class centroids; adding a new class is just a matter of adding a new centroid. This model is therefore implemented here with the hope of using Kernel FDA as a oneshot learning algorithm.

Usage

Kfda uses scikit-learn's interface.

  • Initializing: cls = Kfda(n_components=2, kernel='linear') creates a classifier with a linear kernel and a 2-component subspace. For a polynomial kernel of degree 2, use Kfda(n_components=2, kernel='poly', degree=2). See https://scikit-learn.org/stable/modules/metrics.html#polynomial-kernel for a list of kernels and their parameters, or the source code docstrings for a complete description of the parameters. A full end-to-end sketch follows this list.

  • Fitting: cls.fit(X, y)

  • Prediction: cls.predict(X)

  • Scoring: cls.score(X, y)
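
Putting these together, here is a minimal end-to-end sketch. The digits dataset and train/test split are illustrative choices, and the from kfda import Kfda import path is assumed from the package name rather than taken from the package's own examples:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split

    from kfda import Kfda  # assumed import path for the kfda package

    # Small illustrative dataset: 8x8 digit images, 10 classes.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    # 9 Fisher directions (classes - 1) with an RBF kernel.
    cls = Kfda(kernel='rbf', n_components=9)
    cls.fit(X_train, y_train)

    print(cls.predict(X_test[:5]))    # predicted labels for 5 samples
    print(cls.score(X_test, y_test))  # mean accuracy on the test set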

Examples

See the examples directory for examples on MNIST, faces, and oneshot learning.

After running them, you can plug the corresponding pairs of generated *embeddings.tsv and *labels.tsv files into TensorFlow's Embedding Projector to visualize the embeddings. For example, running mnist.py and then loading mnist_test_embeddings.tsv and mnist_test_labels.tsv shows the following in the UMAP visualizer:

MNIST Kernel FDA embeddings
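
The example scripts generate these TSV files themselves; as a rough sketch of what that export step looks like, assuming cls is the fitted classifier from the usage sketch above and that Kfda exposes a scikit-learn-style transform method returning the Fisher-subspace embeddings:

    import numpy as np

    # Hypothetical export step: project test data into the Fisher
    # subspace and write Embedding Projector-compatible TSV files.
    embeddings = cls.transform(X_test)  # transform() is an assumption
    np.savetxt('mnist_test_embeddings.tsv', embeddings, delimiter='\t')
    np.savetxt('mnist_test_labels.tsv', y_test, fmt='%d')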

Notebook

Another place to see example usage is the Colab Notebook.

Caveats

Similar to SVMs, the most glaring constraint of Kernel FDA is its memory requirement during training. Training a Kernel FDA classifier requires building matrices that are n_samples by n_samples, so memory grows as O(n_samples^2). For example, 10,000 training samples with 64-bit floats already imply a 10,000 x 10,000 kernel matrix of roughly 800 MB.

The accuracy, while high (0.97 on MNIST), seems to be limited by the training set size. With a training size of 10000 and a testing size of 60000, performance on MNIST averages around 0.97 accuracy using 9 Fisher directions and the RBF kernel:

cls = Kfda(kernel='rbf', n_components=9)

Accuracy might be improved without increasing the training size by implementing invariant kernels that implicitly handle scale and rotation, removing the need for an extended dataset.

Oneshot Learning

Oneshot learning means that an algorithm can learn a new class from as little as one sample. This is possible for Kernel FDA because it finds a subspace that purposefully spreads out distinct classes; introducing a new label simply means adding another centroid for use in prediction. See the Colab Notebook or the oneshot example for examples, and the sketch below.
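
As a conceptual sketch of that idea, not the package's actual one-shot API: continuing from the fitted cls above, and again assuming a scikit-learn-style transform method, class centroids can be maintained by hand and a new class added from a single sample x_new (a hypothetical 1-D feature vector):

    import numpy as np

    # Nearest-centroid classification in the Fisher subspace.
    centroids = {}  # label -> centroid of that class's embeddings
    for label in np.unique(y_train):
        centroids[label] = cls.transform(X_train[y_train == label]).mean(axis=0)

    # Oneshot step: register a brand-new class from one sample.
    centroids['new_class'] = cls.transform(x_new.reshape(1, -1))[0]

    def predict_nearest_centroid(X):
        """Assign each sample the label of its nearest centroid."""
        emb = cls.transform(X)
        labels = list(centroids)
        dists = np.stack(
            [np.linalg.norm(emb - centroids[l], axis=1) for l in labels])
        return [labels[i] for i in dists.argmin(axis=0)]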

