
Optimally partitioning data into piece-wise linear segments.

Project description

Nunchaku: Optimally partitioning data into piece-wise segments

nunchaku is a statistically rigorous, Bayesian algorithm to infer the optimal partitioning of a data set into contiguous piece-wise segments.

Who might find this useful?

Scientists and engineers who wish to detect change points within a dataset, at which the dependency of one variable on the other changes.

For example, if y's underlying function is a piece-wise linear function of x, nunchaku will find the points at which the gradient and the intercept change.

What does it do?

Given a dataset with two variables (e.g. a 1D time series), it infers the piece-wise function that best approximates the dataset. The function can be a piece-wise constant function, a piece-wise linear function, or a piece-wise function described by linear combinations of arbitrary basis functions (e.g. polynomials, sines).
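
To make the setting concrete, here is a minimal sketch (using only NumPy, not nunchaku) of such data; the change points, gradients, and noise level are made up for illustration:

>>> import numpy as np
>>> # y is piece-wise linear in x, with change points at x = 4 and x = 7,
>>> # where the gradient and intercept switch (values chosen arbitrarily)
>>> x = np.linspace(0, 10, 200)
>>> y = np.piecewise(x, [x < 4, (x >= 4) & (x < 7), x >= 7],
...                  [lambda t: 0.5 * t + 1, lambda t: 3 * t - 9, lambda t: -t + 19])
>>> y_noisy = y + np.random.default_rng(0).normal(scale=0.3, size=x.size)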

For piece-wise linear functions, it provides statistics for each segment, from which users can select the segment(s) of most interest, for example the one with the largest gradient or the one with the largest $R^2$.
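
As a sketch of that selection step: once get_info has returned its per-segment DataFrame (info_df in the quick start below), ordinary pandas operations suffice. The column names gradient and rsquare used here are placeholders, not the package's guaranteed API; check the documentation for the exact columns returned.

>>> # hypothetical column names; consult the documentation for the real ones
>>> steepest = info_df.loc[info_df["gradient"].abs().idxmax()]
>>> best_fit = info_df.loc[info_df["rsquare"].idxmax()]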

For details about how it works, please refer to our paper, freely available in Bioinformatics.

Installation

To install via PyPI, type in a terminal (for Linux/macOS users) or in the Anaconda Prompt (for Windows users with Anaconda installed):

> pip install nunchaku

For developers, clone the repository, create a virtual environment, install Poetry, and then install nunchaku with Poetry:

> git clone https://git.ecdf.ed.ac.uk/s1856140/nunchaku.git
> cd nunchaku 
> poetry install --with dev 

Quick start

The data x is a list or a 1D NumPy array, sorted in ascending order; the data y is a list, a 1D NumPy array, or a 2D NumPy array with each row being one replicate of the measurement. Below is a script to analyse the built-in example data.

>>> from nunchaku import Nunchaku, get_example_data
>>> x, y = get_example_data()
>>> # load data and set the prior of the gradient
>>> nc = Nunchaku(x, y, prior=[-5, 5]) 
>>> # compare models with 1, 2, 3 and 4 linear segments
>>> numseg, evidences = nc.get_number(num_range=(1, 4))
>>> # get the mean and standard deviation of the boundary points
>>> bds, bds_std = nc.get_iboundaries(numseg)
>>> # get the information of all segments
>>> info_df = nc.get_info(bds)
>>> # plot the data and the segments
>>> nc.plot(info_df)
>>> # get the underlying piece-wise function (for piece-wise linear functions only)
>>> y_prediction = nc.predict(info_df)
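
The same workflow applies to data you supply yourself: x must be one-dimensional and sorted in ascending order, and y may be a 2D array with one replicate per row. The sketch below reuses only the calls shown above; the synthetic data, noise level, and prior range are illustrative rather than recommended settings.

>>> import numpy as np
>>> from nunchaku import Nunchaku
>>> # illustrative synthetic data: two linear segments, three replicates (rows of y)
>>> rng = np.random.default_rng(1)
>>> x = np.linspace(0, 10, 100)
>>> truth = np.piecewise(x, [x < 5, x >= 5], [lambda t: 2 * t, lambda t: -t + 15])
>>> y = truth + rng.normal(scale=0.5, size=(3, x.size))
>>> nc = Nunchaku(x, y, prior=[-5, 5])  # prior range of the gradient, as above
>>> numseg, evidences = nc.get_number(num_range=(1, 3))
>>> bds, bds_std = nc.get_iboundaries(numseg)
>>> info_df = nc.get_info(bds)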

More detailed examples are provided in a Jupyter Notebook in our repository.

Documentation

Detailed documentation is available on Read the Docs.

Development history

  • v0.15.0: supports detection of piece-wise functions described by a linear combination of arbitrary basis functions; supports Python 3.11.
  • v0.14.0: supports detection of linear segments.

Similar packages

  • The NOT package written in R.
  • The beast package written in R.

Citation

If you find this useful, please cite our paper:

Huo, Y., Li, H., Wang, X., Du, X., & Swain, P. S. (2023). Nunchaku: Optimally partitioning data into piece-wise linear segments. Bioinformatics. https://doi.org/10.1093/bioinformatics/btad688

