
IncrementalDBSCAN

incdbscan is an implementation of IncrementalDBSCAN, the incremental version of the DBSCAN clustering algorithm.

IncrementalDBSCAN lets the user update the clustering by inserting or deleting data points. The algorithm yields the same result as DBSCAN but without reapplying DBSCAN to the modified data set.

Thus, IncrementalDBSCAN is ideal when the data set is so large that reapplying DBSCAN to the whole of it would be costly, but for the purposes of the application it is enough to update an existing clustering by inserting or deleting some data points.
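The equivalence to DBSCAN can be checked directly: cluster the same data both ways and compare the labelings up to a renaming of cluster ids. The `canonical` helper below is illustrative (not part of incdbscan); the parameter values mirror the usage example further down. Note that border points can be assigned to either of two adjacent clusters, so small differences there are possible.

```python
# Sketch: compare IncrementalDBSCAN's result with scikit-learn's DBSCAN.
# The canonical() helper is illustrative and not part of incdbscan.

def canonical(labels):
    """Relabel clusters by order of first appearance so two labelings
    can be compared regardless of arbitrary cluster ids (-1 = noise)."""
    mapping, out = {}, []
    for lab in labels:
        if lab == -1:            # noise keeps its special id
            out.append(-1)
            continue
        if lab not in mapping:
            mapping[lab] = len(mapping)
        out.append(mapping[lab])
    return out

if __name__ == "__main__":
    from sklearn.cluster import DBSCAN
    from sklearn.datasets import load_iris
    from incdbscan import IncrementalDBSCAN

    X = load_iris()["data"]
    batch = DBSCAN(eps=0.5, min_samples=5).fit(X).labels_
    incremental = IncrementalDBSCAN(eps=0.5, min_pts=5).insert(X).get_cluster_labels(X)
    print(canonical(list(batch)) == canonical(list(incremental)))
```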

The implementation is based on the following paper. To see what's new compared to the paper, jump to Notes on the IncrementalDBSCAN paper.

Ester, Martin; Kriegel, Hans-Peter; Sander, Jörg; Wimmer, Michael; Xu, Xiaowei (1998). Incremental Clustering for Mining in a Data Warehousing Environment. In: Proceedings of the 24th International Conference on Very Large Data Bases (VLDB 1998).

[Figure: incdbscan illustration]


Highlights

The incdbscan package is an implementation of the IncrementalDBSCAN algorithm by Ester et al., with about 40 unit tests covering diverse cases, and with additional corrections to the original paper.

Installation

incdbscan is on PyPI, and can be installed with pip:

pip install incdbscan

The latest version of the package requires at least Python 3.10.

Usage

The algorithm is implemented in the IncrementalDBSCAN class.

There are three methods to use:

  • insert for inserting data points into the clustering
  • delete for deleting data points from the clustering
  • get_cluster_labels for obtaining cluster labels

All methods take a batch of data points in the form of an array of shape (n_samples, n_features) (similar to the scikit-learn API).

from sklearn.datasets import load_iris
X = load_iris()['data']
X_1, X_2 = X[:100], X[100:]

from incdbscan import IncrementalDBSCAN
clusterer = IncrementalDBSCAN(eps=0.5, min_pts=5)

# Insert 1st batch of data points and get their labels
clusterer.insert(X_1)
labels_part1 = clusterer.get_cluster_labels(X_1)

# Insert 2nd batch and get labels of all points in a one-liner
labels_all = clusterer.insert(X_2).get_cluster_labels(X)

# Delete 1st batch and get labels for 2nd batch
clusterer.delete(X_1)
labels_part2 = clusterer.get_cluster_labels(X_2)

For a longer description of usage check out the notebook developed just for that!

Performance

Performance has two components: insertion and deletion cost. The results below are based on measurements using data sets in the 1K-10K size range.

[Figure: incdbscan performance]

The cost of inserting a new data point with IncrementalDBSCAN is quite small and grows slower than the cost of applying (scikit-learn's) DBSCAN to a whole data set. In other words, given that we have a data set D clustered with IncrementalDBSCAN, and we want to see which cluster a new object P would belong to, it is faster to insert P into the current IncrementalDBSCAN clustering than to apply DBSCAN to the union of D and {P}.

The cost of deleting a data point with IncrementalDBSCAN is quite small and grows slower than the cost of applying DBSCAN to the data set minus that data point. In other words, given that we have a data set D clustered with IncrementalDBSCAN, and we want to see what happens to the clustering after removing an object P from the data set, it is faster to delete P from the existing IncrementalDBSCAN clustering than to apply DBSCAN to the difference of D and {P}.

These results do not imply that it is efficient to cluster a whole data set with a series of IncrementalDBSCAN insertions. If we compare the time to cluster a data set with DBSCAN against the time to cluster the same data by inserting the points one by one with IncrementalDBSCAN, IncrementalDBSCAN will be slower. A typical performance number is that clustering 8,000 data points takes about 10 seconds with this implementation.

See this notebook about performance for more details.
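The insertion-cost comparison above can be sketched as a small benchmark: time a single `insert` into an existing clustering versus a full DBSCAN run on the enlarged data set. The data size, `eps`, and `min_pts` values below are illustrative assumptions, and absolute timings will vary with hardware.

```python
# Sketch: inserting one point into an existing IncrementalDBSCAN
# clustering vs. re-running DBSCAN on the whole data set.
# Illustrative only; timings depend on hardware and parameters.
import time

def timed(fn, *args):
    """Return (result, elapsed seconds) for a single call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

if __name__ == "__main__":
    import numpy as np
    from sklearn.cluster import DBSCAN
    from incdbscan import IncrementalDBSCAN

    rng = np.random.default_rng(0)
    D = rng.normal(size=(5000, 2))      # existing data set
    p = rng.normal(size=(1, 2))         # one new point

    clusterer = IncrementalDBSCAN(eps=0.3, min_pts=5)
    clusterer.insert(D)                 # one-time setup cost

    _, t_inc = timed(clusterer.insert, p)                 # incremental update
    _, t_full = timed(DBSCAN(eps=0.3, min_samples=5).fit,
                      np.vstack([D, p]))                  # full re-clustering
    print(f"incremental insert: {t_inc:.4f}s, full DBSCAN: {t_full:.4f}s")
```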

Known limitations

  • Batch insertion: In the current implementation, batch insertion of data points is not efficient, since pairwise distance calculation between new and existing data points is not yet vectorized.
  • Deletion: Deleting a data point can take a long time in big data sets (big clusters) because of a graph traversal step. There is no clear way to make this more efficient algorithmically.
