IncrementalDBSCAN

incdbscan is an implementation of IncrementalDBSCAN, the incremental version of the DBSCAN clustering algorithm.

IncrementalDBSCAN lets the user update the clustering by inserting or deleting data points. The algorithm yields the same result as DBSCAN but without reapplying DBSCAN to the modified data set.

Thus, IncrementalDBSCAN is ideal when the data set is so large that reapplying DBSCAN to the whole of it would be costly, but the application only needs an existing clustering to be updated as some data points are inserted or deleted.

The implementation is based on the following paper. To see what's new compared to the paper, jump to Notes on the IncrementalDBSCAN paper.

Ester, Martin; Kriegel, Hans-Peter; Sander, Jörg; Wimmer, Michael; Xu, Xiaowei (1998). Incremental Clustering for Mining in a Data Warehousing Environment. In: Proceedings of the 24th International Conference on Very Large Data Bases (VLDB 1998).

(Figure: incdbscan illustration)

Table of Contents

  • Highlights
  • Installation
  • Usage
  • Performance
  • Known limitations
  • Notes on the IncrementalDBSCAN paper

Highlights

The incdbscan package is an implementation of the IncrementalDBSCAN algorithm by Ester et al., with about 40 unit tests covering diverse cases, and with additional corrections to the original paper.

Installation

incdbscan is on PyPI, and can be installed with pip:

pip install incdbscan

The latest version of the package requires at least Python 3.9.

Usage

The algorithm is implemented in the IncrementalDBSCAN class.

There are 3 methods to use:

  • insert for inserting data points into the clustering
  • delete for deleting data points from the clustering
  • get_cluster_labels for obtaining cluster labels

All methods take a batch of data points in the form of an array of shape (n_samples, n_features) (similar to the scikit-learn API).

from sklearn.datasets import load_iris
X = load_iris()['data']
X_1, X_2 = X[:100], X[100:]

from incdbscan import IncrementalDBSCAN
clusterer = IncrementalDBSCAN(eps=0.5, min_pts=5)

# Insert 1st batch of data points and get their labels
clusterer.insert(X_1)
labels_part1 = clusterer.get_cluster_labels(X_1)

# Insert 2nd batch and get labels of all points in a one-liner
labels_all = clusterer.insert(X_2).get_cluster_labels(X)

# Delete 1st batch and get labels for 2nd batch
clusterer.delete(X_1)
labels_part2 = clusterer.get_cluster_labels(X_2)
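
Since IncrementalDBSCAN should yield the same clustering as DBSCAN on the same data, a quick sanity check against scikit-learn's DBSCAN is possible. The snippet below is only a sketch of such a check, not part of the package; cluster IDs can differ between the two implementations, and border points may legitimately end up in different clusters because their assignment is order-dependent in DBSCAN.

from sklearn.cluster import DBSCAN
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score
from incdbscan import IncrementalDBSCAN

X = load_iris()['data']

# DBSCAN's min_samples corresponds to IncrementalDBSCAN's min_pts
reference = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
incremental = IncrementalDBSCAN(eps=0.5, min_pts=5).insert(X).get_cluster_labels(X)

# Compare partitions rather than raw label values, since cluster IDs
# need not match; a score of (close to) 1 means the clusterings agree
print(adjusted_rand_score(reference, incremental))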

For a longer description of usage, check out the notebook developed just for that!
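
As another illustration, the insert-delete workflow fits naturally into a sliding-window setup over streaming data, the incremental use case described in the introduction. The following is a minimal sketch under assumed parameters; the window bookkeeping (window, window_size) and the synthetic batches are illustrative, not part of the package.

import numpy as np
from incdbscan import IncrementalDBSCAN

clusterer = IncrementalDBSCAN(eps=0.5, min_pts=5)
rng = np.random.default_rng(0)

window = []       # batches currently present in the clustering
window_size = 10  # illustrative window length

for _ in range(100):
    batch = rng.normal(size=(50, 2))  # stand-in for incoming data
    clusterer.insert(batch)
    window.append(batch)
    if len(window) > window_size:
        clusterer.delete(window.pop(0))  # drop the oldest batch
    labels = clusterer.get_cluster_labels(batch)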

Performance

Performance has two components: insertion and deletion cost. The results below are based on measurements using data sets in the 1K-10K size range.

(Figure: incdbscan performance benchmarks)

The cost of inserting a new data point with IncrementalDBSCAN is quite small and grows slower than the cost of applying (scikit-learn's) DBSCAN to a whole data set. In other words, given that we have a data set D clustered with IncrementalDBSCAN, and we want to see which cluster a new object P would belong to, it is faster to insert P into the current IncrementalDBSCAN clustering than to apply DBSCAN to the union of D and {P}.

The cost of deleting a data point with IncrementalDBSCAN grows slower than the cost of applying DBSCAN to the data set minus that data point. In other words, given that we have a data set D clustered with IncrementalDBSCAN, and we want to see what happens to the clustering after removing an object P from the data set, it is faster to delete P from the existing IncrementalDBSCAN clustering than to apply DBSCAN to the difference of D and {P}.
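
A rough version of both comparisons can be sketched with wall-clock timing. This is only an illustration with assumed parameters, not the benchmark behind the figure above; exact numbers depend on the data, eps, and min_pts.

import time
import numpy as np
from sklearn.cluster import DBSCAN
from incdbscan import IncrementalDBSCAN

rng = np.random.default_rng(0)
D = rng.normal(size=(5000, 2))  # illustrative data set
P = rng.normal(size=(1, 2))     # a single new object

clusterer = IncrementalDBSCAN(eps=0.3, min_pts=5)
clusterer.insert(D)

# Cost of inserting P into the existing clustering ...
start = time.perf_counter()
clusterer.insert(P)
insert_cost = time.perf_counter() - start

# ... and of deleting it again ...
start = time.perf_counter()
clusterer.delete(P)
delete_cost = time.perf_counter() - start

# ... versus re-running DBSCAN on the union of D and {P} from scratch
start = time.perf_counter()
DBSCAN(eps=0.3, min_samples=5).fit(np.vstack([D, P]))
dbscan_cost = time.perf_counter() - start

print(f"insert: {insert_cost:.4f}s, delete: {delete_cost:.4f}s, full DBSCAN: {dbscan_cost:.4f}s")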

These results do not imply that it is efficient to cluster a whole data set with a series of IncrementalDBSCAN insertions. If we compare the time it takes to cluster a data set with DBSCAN against the time it takes to cluster the same data by inserting the points one by one into IncrementalDBSCAN, IncrementalDBSCAN will be slower. As a typical performance number, clustering 8,000 data points this way takes about 10-20 seconds with this implementation.

See this notebook about performance for more details.

Known limitations

  • Batch insertion: In the current implementation, batch insertion of data points is not efficient, since pairwise distance calculation between new and existing data points is not yet vectorized (see the sketch after this list).
  • Deletion: Data point deletion can take a long time in large data sets (large clusters) because of a graph traversal step, and there is no clear direction for making it algorithmically more efficient.
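
For reference, the vectorization the first limitation refers to would mean computing all new-to-existing distances in one call instead of point by point. The sketch below shows the general idea using scikit-learn's pairwise_distances; it is not how the package currently works, and all names in it are illustrative.

import numpy as np
from sklearn.metrics import pairwise_distances

eps = 0.5
existing = np.random.rand(1000, 2)  # points already in the clustering
new = np.random.rand(50, 2)         # a new batch to insert

# One vectorized call produces the full (50, 1000) distance matrix
distances = pairwise_distances(new, existing)

# Per new point, how many existing points fall within eps
neighbor_counts = (distances <= eps).sum(axis=1)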
