
Fast and memory-efficient DBSCAN clustering, optionally on multiple subsamples drawn from a common dataset

Project description

Overview

A fast and memory-efficient implementation of DBSCAN (Density-Based Spatial Clustering of Applications with Noise).

It is especially suited to multiple rounds of sub-sampling and clustering from a joint dataset: after an initial overhead of O(N log N), each subsequent round of clustering runs in O(N) time.
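As a rough illustration of why this amortization pays off, the back-of-envelope model below ignores constant factors and memory effects, so its idealized ratio comes out larger than the measured gain reported further down:

>>> import math
>>> N, rounds = 15000, 50
>>> one_off = N * math.log(N, 2)                 # single O(N log N) neighborhood computation
>>> naive = rounds * one_off                     # recomputing neighborhoods on every round
>>> amortized = one_off + rounds * N             # one-off overhead, then O(N) per round
>>> round(naive / amortized, 1)
10.9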

As illustrated by a doctest embedded in the module's docstring, on a dataset of 15,000 samples and 47 features, running on an Asus Zenbook laptop with 8 GiB of RAM and an Intel Core M processor, DBSCAN_multiplex processes 50 rounds of sub-sampling and clustering in about 4 minutes, whereas scikit-learn's implementation of DBSCAN performs the same task in more than 28 minutes.
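For reference, the scikit-learn side of such a comparison amounts to re-running sklearn.cluster.DBSCAN from scratch on every subsample, along the lines of the sketch below; the eps value and subsample size are illustrative placeholders, not the settings chosen by the doctest:

>>> import numpy as np
>>> from sklearn.cluster import DBSCAN
>>> data = np.random.randn(15000, 47)
>>> labels = []
>>> for _ in xrange(50):                                           # one full DBSCAN fit per subsample
...     idx = np.random.choice(data.shape[0], 13500, replace = False)
...     labels.append(DBSCAN(eps = 0.5, min_samples = 3).fit_predict(data[idx]))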

Such a test can be performed quite conveniently on your own machine: simply entering python DBSCAN_multiplex in your terminal will prompt a doctest to start comparing the performance of the two aforementioned implementations of DBSCAN.

This 7-fold gain in performance proved critical to the statistical learning application that prompted the design of this algorithm.

Installation and Requirements

DBSCAN_multiplex requires a machine running a Unix-like operating system and Python 2.7, along with the following packages and a few modules from the Python Standard Library:

  • NumPy >= 1.9

  • PyTables

  • scikit-learn

You can install DBSCAN_multiplex from the official Python Package Index (PyPI) as follows:

  • open a terminal window;

  • type in pip install DBSCAN_multiplex.

The command listed above should automatically install or upgrade any missing or outdated dependency among those listed at the beginning of this section.
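Once the installation completes, a quick sanity check is to run python -c "import DBSCAN_multiplex" from the same terminal; the command returns silently if the package and its dependencies are in place.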

Usage and Example

See the docstrings associated with each function of the DBSCAN_multiplex module for more information; in a Python interpreter, they can be viewed by calling the built-in help system, e.g., help(DBSCAN_multiplex.load).
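For instance, from a Python console:

>>> import DBSCAN_multiplex
>>> help(DBSCAN_multiplex.load)          # displays the docstring of the 'load' function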

The following few lines show how DBSCAN_multiplex can be used to cluster 50 randomly selected subsamples out of a common Gaussian-distributed dataset. This situation arises in consensus clustering, where one might want to obtain and then combine multiple vectors of cluster labels.

>>> import numpy as np
>>> import DBSCAN_multiplex as DB

>>> data = np.random.randn(15000, 7)
>>> N_iterations = 50
>>> N_sub = 9 * data.shape[0] / 10                      # each subsample keeps 90% of the points
>>> subsamples_matrix = np.zeros((N_iterations, N_sub), dtype = int)
>>> for i in xrange(N_iterations):                      # draw 50 random subsamples without replacement
...     subsamples_matrix[i] = np.random.choice(data.shape[0], N_sub, replace = False)
>>> eps, labels_matrix = DB.DBSCAN(data, minPts = 3, subsamples_matrix = subsamples_matrix, verbose = True)
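Each row of the returned labels_matrix is assumed here to hold the cluster labels obtained for the corresponding subsample, with negative labels marking noise or points left out of that subsample (an assumption following the scikit-learn convention of tagging noise with -1). Under that assumption, the number of clusters found in each round can be tallied as follows:

>>> for i, row in enumerate(labels_matrix):
...     n_clusters = np.unique(row[row >= 0]).size      # ignore negative labels (noise / left-out points)
...     print("Subsample {0}: {1} clusters".format(i, n_clusters))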

References

  • Ester, M., Kriegel, H.-P., Sander, J. and Xu, X. (1996) “A density-based algorithm for discovering clusters in large spatial databases with noise”. Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96). AAAI Press. pp. 226-231.

  • Kriegel, H.-P., Kroeger, P., Sander, J. and Zimek, A. (2011) “Density-based Clustering”. WIREs Data Mining and Knowledge Discovery 1 (3): 231-240.

