
Sequentia

A machine learning interface for isolated sequence classification algorithms in Python.

Introduction

Sequential data arises in many forms, such as audio signals, stock prices, and even brain and heart signals. Such data is of particular interest in machine learning, as patterns that change over time naturally provide many interesting opportunities and challenges for classification.

Sequentia is a Python package that implements various classification algorithms for sequential data.

Some examples of how Sequentia can be used in isolated sequence classification include:

  • determining a spoken word based on its audio signal or some other representation such as MFCCs,
  • identifying potential heart conditions such as arrhythmia from ECG signals,
  • predicting motion intent for gesture control from electrical muscle activity,
  • classifying hand-written characters according to their pen-tip trajectories,
  • classifying hand or head gestures from rotation or movement signals,
  • classifying the sentiment of a phrase or sentence in natural language from word embeddings.


Features

Sequentia provides the following algorithms, all of which support multivariate sequences of different durations.

Classification algorithms

  • Hidden Markov Models (via hmmlearn)
    • Learning with the Baum-Welch algorithm [1]
    • Gaussian Mixture Model emissions
    • Linear, left-right and ergodic topologies
    • Multi-processed predictions
  • Dynamic Time Warping k-Nearest Neighbors (via dtaidistance)
    • Sakoe–Chiba band global warping constraint
    • Dependent and independent feature warping (DTWD & DTWI)
    • Custom distance-weighted predictions
    • Multi-processed predictions
  • DeepGRU: Deep Gesture Recognition Utility [2]
    • Deep recurrent neural network with multiple GRU layers
    • Faster training than LSTM-based networks
    • Attention module for learning sub-sequence importance
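To make the DTW k-NN option above concrete, here is a minimal pure-Python sketch of exact dynamic time warping with an optional Sakoe–Chiba band. This is an illustration only, not Sequentia's implementation; the package delegates the actual computation to dtaidistance's optimized C routines.

```python
import math

def dtw(a, b, window=None):
    """Exact DTW distance between two 1D sequences.

    window: optional Sakoe-Chiba band half-width; cells with
    |i - j| > window are never visited, constraining the warping path.
    """
    n, m = len(a), len(b)
    if window is None:
        window = max(n, m)  # no constraint
    # DP table of accumulated costs, initialised to infinity
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Warping absorbs the repeated frame, so these align perfectly
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

A narrower band trades alignment flexibility for speed. For multivariate sequences, the independent variant (DTWI) sums a distance like this over each feature separately, while the dependent variant (DTWD) uses a multivariate point-wise cost.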


(Figure: example of a classification algorithm, an HMM sequence classifier)

Preprocessing methods

  • Centering, standardization and min-max scaling
  • Decimation and mean downsampling
  • Mean and median filtering
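The preprocessing methods above amount to simple array transformations. Here is a rough NumPy sketch of a few of them, with illustrative function names rather than Sequentia's actual API:

```python
import numpy as np

def center(x):
    """Subtract each feature's mean (columns of a (duration, features) array)."""
    return x - x.mean(axis=0)

def standardize(x):
    """Center each feature and scale it to unit variance."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def decimate(x, n):
    """Downsample by keeping every n-th frame of the sequence."""
    return x[::n]

def mean_downsample(x, n):
    """Downsample by averaging consecutive windows of n frames."""
    T = (len(x) // n) * n  # drop any trailing partial window
    return x[:T].reshape(-1, n, *x.shape[1:]).mean(axis=1)

x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
print(decimate(x, 2).shape)        # (2, 2)
print(mean_downsample(x, 2))       # [[ 1.5 15. ] [ 3.5 35. ]]
```

Because every transformation maps a (duration, features) array to another array of the same feature dimension, they can be chained before fitting any of the classifiers.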

Installation

You can install Sequentia using pip.

pip install sequentia

Note: All tools under the sequentia.classifiers.rnn module (i.e. DeepGRU and collate_fn) require a working installation of torch (>= 1.8.0), and Sequentia assumes that you already have this installed.

Since there are many different possible configurations when installing PyTorch (e.g. CPU or GPU, CUDA version), we leave this up to the user instead of specifying particular binaries to install alongside Sequentia.

If you do wish to install a CPU-only version of torch together with Sequentia, you can use the following.

pip install sequentia[torch]

If you intend to help contribute to Sequentia, you will need some additional dependencies for running tests, notebooks and generating documentation.

Depending on what you intend to do, you can specify the following extras.

  • For running tests in the /lib/test directory:

    pip install sequentia[test]
    
  • For generating Sphinx documentation in the /docs directory:

    pip install sequentia[docs]
    
  • For running notebooks in the /notebooks directory:

    pip install sequentia[notebooks]
    
  • A full development suite which installs all of the above extras:

    pip install sequentia[dev]
    

Documentation

Documentation for the package is available on Read the Docs.

Tutorials and examples

For detailed tutorials and examples on the usage of Sequentia, see the notebooks in the /notebooks directory of the repository.

Below are some basic examples of how univariate and multivariate sequences can be used in Sequentia.

Univariate sequences

import numpy as np, sequentia as seq

# Generate training observation sequences and labels
X, y = [
  np.array([1, 0, 5, 3, 7, 2, 2, 4, 9, 8, 7]),
  np.array([2, 1, 4, 6, 5, 8]),
  np.array([5, 8, 0, 3, 1, 0, 2, 7, 9])
], ['good', 'good', 'bad']

# Create and fit the classifier
clf = seq.KNNClassifier(k=1, classes=('good', 'bad'))
clf.fit(X, y)

# Make a prediction for a new observation sequence
x_new = np.array([0, 3, 2, 7, 9, 1, 1])
y_new = clf.predict(x_new)

Multivariate sequences

import numpy as np, sequentia as seq

# Generate training observation sequences and labels
X, y = [
  np.array([[1, 0, 5, 3, 7, 2, 2, 4, 9, 8, 7],
            [3, 8, 4, 0, 7, 1, 1, 3, 4, 2, 9]]).T,
  np.array([[2, 1, 4, 6, 5, 8],
            [5, 3, 9, 0, 8, 2]]).T,
  np.array([[5, 8, 0, 3, 1, 0, 2, 7, 9],
            [0, 2, 7, 1, 2, 9, 5, 8, 1]]).T
], ['good', 'good', 'bad']

# Create and fit the classifier
clf = seq.KNNClassifier(k=1, classes=('good', 'bad'))
clf.fit(X, y)

# Make a prediction for a new observation sequence
x_new = np.array([[0, 3, 2, 7, 9, 1, 1],
                  [2, 5, 7, 4, 2, 0, 8]]).T
y_new = clf.predict(x_new)
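As the examples show, each observation sequence is a NumPy array with one row per time frame: univariate sequences are 1D arrays of shape (duration,), and multivariate sequences are 2D arrays of shape (duration, n_features). The .T transposes above turn the feature-per-row literals into that layout. A quick check of the convention:

```python
import numpy as np

# A 2-feature sequence written feature-per-row, then transposed
# to the expected (duration, n_features) layout
x = np.array([[0, 3, 2, 7, 9, 1, 1],
              [2, 5, 7, 4, 2, 0, 8]]).T

print(x.shape)  # (7, 2): 7 frames, 2 features
print(x[0])     # first frame, one value per feature: [0 2]
```

Sequences in a dataset may have different durations, which is why they are passed around as a list of arrays rather than stacked into a single 3D array.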

Acknowledgments

In earlier versions of the package (<0.10.0), an approximate dynamic time warping implementation (fastdtw) was used in the hope of speeding up k-NN predictions, as the authors of the original FastDTW paper [3] claim that approximate DTW alignments can be computed in linear time and memory, compared to the O(N^2) runtime complexity of the usual exact DTW implementation.

However, I was recently contacted by Prof. Eamonn Keogh (at University of California, Riverside), whose recent work [4] makes the surprising revelation that FastDTW is generally slower than the exact DTW algorithm that it approximates. Upon switching from the fastdtw package to dtaidistance (a very solid implementation of exact DTW with fast pure C compiled functions), DTW k-NN prediction times were indeed reduced drastically.

I would like to thank Prof. Eamonn Keogh for directly reaching out to me regarding this finding!

References

[1] Lawrence R. Rabiner. "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition." Proceedings of the IEEE 77 (1989), no. 2, 257–286.
[2] Mehran Maghoumi & Joseph J. LaViola Jr. "DeepGRU: Deep Gesture Recognition Utility." Advances in Visual Computing: 14th International Symposium on Visual Computing (ISVC 2019), Proceedings, Part I, 16–31.
[3] Stan Salvador & Philip Chan. "FastDTW: Toward Accurate Dynamic Time Warping in Linear Time and Space." Intelligent Data Analysis 11.5 (2007), 561–580.
[4] Renjie Wu & Eamonn J. Keogh. "FastDTW is approximate and Generally Slower than the Algorithm it Approximates." IEEE Transactions on Knowledge and Data Engineering (2020).

Contributors

All contributions to this repository are greatly appreciated. Contribution guidelines can be found here.

Edwin Onuonga
Prhmma

Sequentia © 2019-2022, Edwin Onuonga - Released under the MIT License.
Authored and maintained by Edwin Onuonga.

Project details


Files for sequentia 0.12.0: sequentia-0.12.0.tar.gz (source distribution, 47.2 kB)
