

Project description


Sequentia

Scikit-Learn compatible HMM and DTW based sequence machine learning algorithms in Python.

About · Build Status · Features · Documentation · Examples · Acknowledgments · References · Contributors · Licensing

About

Sequentia is a Python package that provides various classification and regression algorithms for sequential data, including methods based on hidden Markov models and dynamic time warping.

Some examples of how Sequentia can be used on sequence data include:

  • determining a spoken word based on its audio signal or alternative representations such as MFCCs,
  • predicting motion intent for gesture control from sEMG signals,
  • classifying hand-written characters according to their pen-tip trajectories.

Why Sequentia?

  • Simplicity and interpretability: Sequentia offers a limited set of machine learning algorithms, chosen specifically to be more interpretable and easier to configure than more complex alternatives such as recurrent neural networks and transformers, while maintaining a high level of effectiveness.
  • Familiar and user-friendly: To fit more seamlessly into the workflow of data science practitioners, Sequentia follows the ubiquitous Scikit-Learn API, providing a familiar model development process for many, as well as enabling wider access to the rapidly growing Scikit-Learn ecosystem.

Build Status

  • master: CircleCI Build (Master)
  • dev: CircleCI Build (Development)

Features

Models

All of the following models provided by Sequentia support variable-length sequences.

Dynamic Time Warping + k-Nearest Neighbors (via dtaidistance)

  • Classification
  • Regression
  • Multivariate real-valued observations
  • Sakoe–Chiba band global warping constraint
  • Dependent and independent feature warping (DTWD/DTWI)
  • Custom distance-weighted predictions
  • Multi-processed predictions
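
For illustration, the sketch below shows how these options might be configured on a KNNClassifier (a corresponding regressor covers the regression case). The parameter names used here (window, independent, weighting, n_jobs) are assumptions based on the feature list above rather than a definitive reference, so consult the API documentation for the exact signatures.

import numpy as np

from sequentia.models import KNNClassifier

# A minimal sketch, assuming the following parameter names map onto the
# features listed above - check the API reference for the exact signatures.
clf = KNNClassifier(
    k=5,                             # number of nearest neighbours
    window=0.1,                      # Sakoe-Chiba band width (assumed fraction of sequence length)
    independent=False,               # False: dependent warping (DTWD), True: independent warping (DTWI)
    weighting=lambda d: np.exp(-d),  # custom distance-weighting function (assumed to accept an array of distances)
    use_c=True,                      # use the compiled dtaidistance C implementation
    n_jobs=-1,                       # parallelize predictions across all available cores
)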

Hidden Markov Models (via hmmlearn)

Parameter estimation with the Baum-Welch algorithm and prediction with the forward algorithm [1]

  • Classification
  • Multivariate real-valued observations (Gaussian mixture model emissions)
  • Univariate categorical observations (discrete emissions)
  • Linear, left-right and ergodic topologies
  • Multi-processed predictions
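
Similarly, the sketch below illustrates how a classifier backed by one Gaussian mixture HMM per class might be put together. The class and method names (GaussianMixtureHMM, HMMClassifier.add_model) and constructor arguments follow the feature list above, but they are assumptions here rather than a definitive reference.

import numpy as np

from sequentia.models import GaussianMixtureHMM, HMMClassifier

# Toy data in the concatenated format used throughout Sequentia:
# two sequences (lengths 4 and 4) with two features each
X = np.array([
    [1.0, 2.0], [1.1, 2.2], [0.9, 1.8], [1.2, 2.1],  # sequence 1 (class 0)
    [3.0, 4.0], [3.2, 4.1], [2.9, 3.8], [3.1, 4.2],  # sequence 2 (class 1)
])
lengths = np.array([4, 4])
y = np.array([0, 1])

clf = HMMClassifier(n_jobs=-1)  # multi-processed predictions (assumed argument)
for label in (0, 1):
    clf.add_model(
        GaussianMixtureHMM(
            n_states=2,              # number of hidden states
            n_components=1,          # Gaussian mixture components per state
            topology="left-right",   # 'linear', 'left-right' or 'ergodic'
        ),
        label=label,
    )

# Baum-Welch parameter estimation per class, forward-algorithm scoring at prediction time
clf.fit(X, y, lengths=lengths)
y_pred = clf.predict(X, lengths=lengths)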

Scikit-Learn compatibility

Sequentia (≥2.0) is fully compatible with the Scikit-Learn API (≥1.4), enabling rapid development and prototyping of sequential models.

In most cases, the only necessary change is to add a lengths keyword argument to provide sequence length information, e.g. fit(X, y, lengths=lengths) instead of fit(X, y).
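
As a concrete illustration of this convention (using plain NumPy only, no Sequentia API), a concatenated array and its lengths relate to the individual sequences as follows:

import numpy as np

# Three sequences of lengths 2, 3 and 2, concatenated row-wise into a single array
X = np.arange(14, dtype=float).reshape(7, 2)
lengths = np.array([2, 3, 2])

# Splitting at the cumulative lengths recovers the individual sequences
sequences = np.split(X, np.cumsum(lengths)[:-1])
for seq in sequences:
    print(seq.shape)  # (2, 2), (3, 2), (2, 2)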

Installation

The latest stable version of Sequentia can be installed with the following command:

pip install sequentia

C library compilation

For optimal performance when using any of the k-NN based models, it is important that the dtaidistance C libraries are compiled correctly.

Please see the dtaidistance installation guide for troubleshooting if you run into C compilation issues, or if setting use_c=True on k-NN based models results in a warning.

You can use the following to check whether the appropriate C libraries have been installed:

from dtaidistance import dtw

# Attempts to import the compiled C implementation of DTW and reports an error if it is unavailable
dtw.try_import_c()

Development

Please see the contribution guidelines for installation instructions when contributing to Sequentia.

Documentation

Documentation for the package is available on Read The Docs.

Examples

Demonstration of classifying multivariate sequences with two features into two classes using the KNNClassifier.

This example also shows a typical preprocessing workflow, as well as compatibility with Scikit-Learn.

import numpy as np

from sklearn.preprocessing import scale
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline

from sequentia.models import KNNClassifier
from sequentia.preprocessing import IndependentFunctionTransformer, median_filter

# Create input data
# - Sequentia expects sequences to be concatenated into a single array
# - Sequence lengths are provided separately and used to decode the sequences when needed
# - This avoids the need for complex structures such as lists of arrays with different lengths

# Sequences
X = np.array([
    # Sequence 1 - Length 3
    [1.2 , 7.91],
    [1.34, 6.6 ],
    [0.92, 8.08],
    # Sequence 2 - Length 5
    [2.11, 6.97],
    [1.83, 7.06],
    [1.54, 5.98],
    [0.86, 6.37],
    [1.21, 5.8 ],
    # Sequence 3 - Length 2
    [1.7 , 6.22],
    [2.01, 5.49],
])

# Sequence lengths
lengths = np.array([3, 5, 2])

# Sequence classes
y = np.array([0, 1, 1])

# Create a transformation pipeline that feeds into a KNNClassifier
# 1. Individually denoise each sequence by applying a median filter for each feature
# 2. Individually standardize each sequence by subtracting the mean and dividing by the standard deviation for each feature
# 3. Reduce the dimensionality of the data to a single feature by using PCA
# 4. Pass the resulting transformed data into a KNNClassifier
pipeline = Pipeline([
    ('denoise', IndependentFunctionTransformer(median_filter)),
    ('scale', IndependentFunctionTransformer(scale)),
    ('pca', PCA(n_components=1)),
    ('knn', KNNClassifier(k=1))
])

# Fit the pipeline to the data - lengths must be provided
pipeline.fit(X, y, lengths=lengths)

# Predict classes for the sequences and calculate accuracy - lengths must be provided
y_pred = pipeline.predict(X, lengths=lengths)
acc = pipeline.score(X, y, lengths=lengths)
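
Note that predictions are made per sequence rather than per row, so with the three sequences above:

print(y_pred.shape)  # (3,) - one predicted class per sequence
print(acc)           # mean accuracy over the three sequences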

Acknowledgments

In earlier versions of the package, the approximate DTW implementation fastdtw was used in the hope of speeding up k-NN predictions, as the authors of the original FastDTW paper [2] claim that approximate DTW alignments can be computed in linear memory and time, compared to the O(N²) runtime complexity of the usual exact DTW implementation.

I was contacted by Prof. Eamonn Keogh, whose work makes the surprising revelation that FastDTW is generally slower than the exact DTW algorithm it approximates [3]. Upon switching from the fastdtw package to dtaidistance (a very solid implementation of exact DTW with fast compiled C functions), DTW k-NN prediction times were indeed reduced drastically.

I would like to thank Prof. Eamonn Keogh for directly reaching out to me regarding this finding.

References

[1] Lawrence R. Rabiner. "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition." Proceedings of the IEEE 77.2 (1989), 257–286.
[2] Stan Salvador & Philip Chan. "FastDTW: Toward accurate dynamic time warping in linear time and space." Intelligent Data Analysis 11.5 (2007), 561–580.
[3] Renjie Wu & Eamonn J. Keogh. "FastDTW is approximate and Generally Slower than the Algorithm it Approximates." IEEE Transactions on Knowledge and Data Engineering (2020), 1–1.

Contributors

All contributions to this repository are greatly appreciated. Contribution guidelines can be found here.

  • eonu
  • Prhmma
  • manisci
  • jonnor

Licensing

Sequentia is released under the MIT license.

Certain parts of the source code are heavily adapted from Scikit-Learn. Such files contain a copy of their license.


Sequentia © 2019-2025, Edwin Onuonga - Released under the MIT license.
Authored and maintained by Edwin Onuonga.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

sequentia-2.0.0.tar.gz (4.5 MB)

Uploaded Source

Built Distribution

sequentia-2.0.0-py3-none-any.whl (4.5 MB)

Uploaded Python 3

File details

Details for the file sequentia-2.0.0.tar.gz.

File metadata

  • Download URL: sequentia-2.0.0.tar.gz
  • Upload date:
  • Size: 4.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.11.3 Linux/6.5.0-1016-azure

File hashes

Hashes for sequentia-2.0.0.tar.gz

  • SHA256: 5d7ec35d36e556aff302ebb3abf09a9ed6e6b6dd3e72fb8f0a9c847cf25dc023
  • MD5: 0dedd8e1e2f9af64d7d7b06e28faa355
  • BLAKE2b-256: 0f5c2d251dd04b44e3d8428ec133335826e9847ff0440c85122e0f2181c5c577

See more details on using hashes here.
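
For instance, a downloaded archive can be checked against the SHA256 digest above with a short Python snippet (the local file path is assumed):

import hashlib

# Assumed local path to the downloaded source distribution
path = "sequentia-2.0.0.tar.gz"
expected = "5d7ec35d36e556aff302ebb3abf09a9ed6e6b6dd3e72fb8f0a9c847cf25dc023"

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == expected else "SHA256 mismatch")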

File details

Details for the file sequentia-2.0.0-py3-none-any.whl.

File metadata

  • Download URL: sequentia-2.0.0-py3-none-any.whl
  • Upload date:
  • Size: 4.5 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.7.1 CPython/3.11.3 Linux/6.5.0-1016-azure

File hashes

Hashes for sequentia-2.0.0-py3-none-any.whl

  • SHA256: f2d459a54cdf30e9100763406b8394bacd9347c3d1ea69902654a257080da336
  • MD5: 5e2f7a0d9ade0b17ad69e0b4edcc6ac4
  • BLAKE2b-256: 2f0352ba34c1ebaf787919a8be3fee57c2af3bf73f1130bf7df331375f58c7a3

See more details on using hashes here.
