
The Random Dilation Shapelet Transform algorithm and associated works


This package is moving to the aeon-toolkit.

Starting from v0.3.0, this package will no longer receive new features; bugfixes will still be included if issues are raised. You can already find RDST in the aeon package at https://github.com/aeon-toolkit/ . Further speed-ups for RDST are planned, and these improvements will only be implemented in aeon. All the functionalities of this package will be ported to aeon when I get some time; for now, only the transformer for univariate and multivariate series of equal length has been implemented.


Welcome to the convst repository. It contains the implementation of the Random Dilated Shapelet Transform (RDST) along with other works in the same area. This work was supported by the following organisations:


Installation

The recommended way to install the latest stable version is with pip: pip install convst. To install the package from source, download the latest version from GitHub and run python setup.py install. This should install the package and automatically resolve its dependencies using pip.

We recommend doing this in a fresh virtual environment (for example with Anaconda) to avoid conflicts with an existing installation. If you wish to install the dependencies individually, the dependency list can be found in the setup.py file.

An optional dependency that can speed up the numba-compiled code used in our implementation is the Intel short vector math library (SVML). When using conda, it can be installed by running conda install -c numba icc_rt. I did not test the behavior on AMD processors, but I suspect it won't work there.

Tutorial

We give here a minimal example to run the RDST algorithm on any dataset of the UCR archive using the aeon API to get datasets:

from convst.classifiers import R_DST_Ridge
from convst.utils.dataset_utils import load_UCR_UEA_dataset_split

X_train, X_test, y_train, y_test, _ = load_UCR_UEA_dataset_split('GunPoint')

# The first run may be slow due to numba compilation on the first call.
# Run a small dataset like GunPoint if this is the first time you call RDST on your system.
# You can set n_shapelets to 1 to make this compilation step faster. The n_jobs parameter
# can also be increased to speed things up once numba compilation is done.

rdst = R_DST_Ridge(n_shapelets=10_000, n_jobs=1).fit(X_train, y_train)
print("Accuracy Score for RDST : {}".format(rdst.score(X_test, y_test)))
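To illustrate the core idea behind the transform, here is a minimal numpy sketch of a dilated shapelet distance: a shapelet of length l applied with dilation d is compared against subsequences whose points are sampled every d timestamps. This is an illustration only, not the package's optimized numba implementation, and the helper name dilated_distance_vector is ours:

```python
import numpy as np

def dilated_distance_vector(X, shapelet, dilation):
    """Squared Euclidean distance between a 1D series X and a shapelet
    applied with the given dilation, at every admissible start position."""
    length = shapelet.shape[0]
    # Number of start positions where the dilated shapelet fits in X
    n_positions = X.shape[0] - (length - 1) * dilation
    dists = np.empty(n_positions)
    for i in range(n_positions):
        # Take `length` points starting at i, spaced `dilation` apart
        window = X[i : i + (length - 1) * dilation + 1 : dilation]
        dists[i] = np.sum((window - shapelet) ** 2)
    return dists

rng = np.random.default_rng(0)
X = rng.normal(size=100)
S = X[10 : 10 + 9 * 2 : 2].copy()  # a length-9 subsequence of X with dilation 2
d = dilated_distance_vector(X, S, dilation=2)
print(d.min(), d.argmin())  # distance 0 at position 10, where S was extracted
```

RDST derives features (such as the minimum of this distance vector) from many randomly sampled shapelets and dilations, which is what the n_shapelets parameter controls.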

If you want a more powerful model, you can use R_DST_Ensemble as follows (note that additional Numba compilation might be needed here):

from convst.classifiers import R_DST_Ensemble

rdst_e = R_DST_Ensemble(
  n_shapelets_per_estimator=10_000,
  n_jobs=1
).fit(X_train, y_train)
print("Accuracy Score for RDST : {}".format(rdst_e.score(X_test, y_test)))

You can obtain faster results by using more jobs, and faster still, at the expense of some accuracy, with the prime_dilations option:

rdst_e = R_DST_Ensemble(
  n_shapelets_per_estimator=10_000,
  prime_dilations=True,
  n_jobs=-1
).fit(X_train, y_train)

print("Accuracy Score for RDST : {}".format(rdst_e.score(X_test, y_test)))
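As the option name suggests, prime_dilations restricts the candidate dilation values, which reduces the number of distinct dilations to evaluate and hence the runtime. The helper below is only a sketch of such a candidate set (1 plus the primes up to a maximum), not the package's actual sampling code:

```python
import numpy as np

def prime_dilations(max_dilation):
    """Candidate dilation values up to max_dilation, keeping only 1 and the
    primes: fewer distinct dilations to evaluate than the full range."""
    sieve = np.ones(max_dilation + 1, dtype=bool)
    sieve[:2] = False  # 0 and 1 are not prime
    for p in range(2, int(max_dilation ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = False  # cross out multiples of p
    return np.concatenate(([1], np.flatnonzero(sieve)))

print(prime_dilations(16))  # [ 1  2  3  5  7 11 13]
```

With a maximum dilation of 16, this keeps 7 candidates instead of 16, which is where the speed-up comes from.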

You can also visualize a shapelet using the visualization tools:

Example of shapelet visualization

To know more about all the interpretability tools, check the documentation on readthedocs.

Supported inputs

RDST supports the following types of time series:

  • Univariate and same length
  • Univariate and variable length
  • Multivariate and same length
  • Multivariate and variable length

We use the standard scikit-learn interface and expect as input a 3D numpy array of shape (n_samples, n_features, n_timestamps). For variable length input, we expect a (python) list of numpy arrays, or a numpy array with object dtype.
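A minimal sketch of how such inputs can be built (the shapes follow the convention stated above; the series content is just zeros for illustration):

```python
import numpy as np

# Fixed-length input: 3D array of shape (n_samples, n_features, n_timestamps)
X_same = np.zeros((20, 1, 150))  # 20 univariate series of length 150

# Variable-length input: a python list of per-sample arrays ...
X_var = [np.zeros((1, 120)), np.zeros((1, 150)), np.zeros((1, 90))]

# ... or equivalently a numpy array with object dtype
X_var_obj = np.empty(len(X_var), dtype=object)
for i, x in enumerate(X_var):
    X_var_obj[i] = x
```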

Reproducing the paper results

Multiple scripts are available under the PaperScripts folder. It contains the exact same scripts used to generate our results, notably the test_models.py file, used to generate the csv results available in the Results folder of the archive.

Contributing, Citing and Contact

If you experience bugs with the RDST implementation, or would like to contribute in any way, please create an issue or pull request in this repository. For other questions or to get in touch, you can email me at antoine.guillaume45@gmail.com.

If you use our algorithm or publication in any work, please cite the following paper (ArXiv version https://arxiv.org/abs/2109.13514):

@InProceedings{10.1007/978-3-031-09037-0_53,
author="Guillaume, Antoine
and Vrain, Christel
and Elloumi, Wael",
title="Random Dilated Shapelet Transform: A New Approach for Time Series Shapelets",
booktitle="Pattern Recognition and Artificial Intelligence",
year="2022",
publisher="Springer International Publishing",
address="Cham",
pages="653--664",
abstract="Shapelet-based algorithms are widely used for time series classification because of their ease of interpretation, but they are currently outperformed by recent state-of-the-art approaches. We present a new formulation of time series shapelets including the notion of dilation, and we introduce a new shapelet feature to enhance their discriminative power for classification. Experiments performed on 112 datasets show that our method improves on the state-of-the-art shapelet algorithm, and achieves comparable accuracy to recent state-of-the-art approaches, without sacrificing neither scalability, nor interpretability.",
isbn="978-3-031-09037-0"
}

To cite the RDST Ensemble method, you can cite the PhD thesis where it is presented (soon to be available; the citation format may change):

@phdthesis{Guillaume2023,
  author="Guillaume, Antoine", 
  title="Time series classification with Shapelets: Application to predictive maintenance on event logs",
  school="University of Orléans",
  year="2023",
  url="https://www.theses.fr/s265104"
}

TODO for release 1.0:

  • Finish Numpy docs in all python files
  • Update documentation and examples
  • Enhance interface for interpretability tools
  • Add the Generalised version of RDST
  • Continue unit tests and code coverage/quality

Citations

Here are the code-related citations that were not included in the paper:

[1]: The Scikit-learn development team, "Scikit-learn: Machine Learning in Python", Journal of Machine Learning Research 2011

[2]: The Numpy development team, "Array programming with NumPy", Nature 2020
