cluster-tuner
A GridSearchCV-like hyperparameter tuner for clustering algorithms.
Installation
pip install cluster-tuner
Requirements: Python >= 3.10, scikit-learn >= 1.6
Purpose
This project provides a simple, scikit-learn-compatible hyperparameter tuning tool for clustering. It's intended for situations where predicting clusters for new data points is a low priority. Many clustering algorithms in scikit-learn are transductive, meaning they are not designed to be applied to new observations. Even when using an inductive algorithm like KMeans, you might not need to predict clusters for new data—or prediction might be a lower priority than finding the best clusters.
Since scikit-learn's GridSearchCV uses cross-validation and is designed for inductive models, an alternative tool is necessary.
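For example, KMeans exposes a predict method for assigning new observations to learned centroids, while DBSCAN does not:

from sklearn.cluster import DBSCAN, KMeans

# KMeans is inductive: the estimator defines predict() for new observations.
print(hasattr(KMeans(), 'predict'))   # True
# DBSCAN is transductive: it only labels the data it was fit on.
print(hasattr(DBSCAN(), 'predict'))   # False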
ClusterTuner
The ClusterTuner class is a hyperparameter search tool for clustering algorithms. It fits one model per hyperparameter combination and selects the best. The implementation is derived from scikit-learn's GridSearchCV, but without cross-validation. It works with clustering-specific scorers and doesn't always require a target variable, since metrics like silhouette, Calinski-Harabasz, and Davies-Bouldin are designed for unsupervised evaluation.
The interface is largely the same as GridSearchCV. Results are stored in the results_ attribute (cv_results_ also works as an alias for compatibility).
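Conceptually, the search amounts to the loop below. This is only an illustrative sketch (naive_cluster_search is an invented name; the real class also handles scorers, constraints, parallelism, and error handling):

from sklearn.base import clone
from sklearn.metrics import silhouette_score
from sklearn.model_selection import ParameterGrid

# Illustrative sketch: one fit per parameter combination on the full data,
# scored with silhouette; no cross-validation splits anywhere.
def naive_cluster_search(estimator, param_grid, X):
    best_score, best_params = float('-inf'), None
    for params in ParameterGrid(param_grid):
        model = clone(estimator).set_params(**params)
        labels = model.fit_predict(X)
        if len(set(labels)) < 2:
            continue  # silhouette is undefined for a single cluster
        score = silhouette_score(X, labels)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score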
Basic Usage
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from cluster_tuner import ClusterTuner

X, _ = make_blobs(n_samples=500, centers=3, random_state=0)  # example data

tuner = ClusterTuner(
    DBSCAN(),
    param_grid={'eps': [0.3, 0.5, 0.7], 'min_samples': [5, 10]},
    scoring='silhouette',
)
tuner.fit(X)

print(tuner.best_params_)
print(tuner.best_score_)
labels = tuner.labels_

# Access detailed results (single-metric uses 'test_score')
print(tuner.results_['test_score'])
Key Parameters
- scoring: Metric name (string), callable, or list/dict for multi-metric evaluation.
- refit (default=True): Whether to refit the best estimator on the full dataset. For multi-metric scoring, must be a string naming the metric to use.
- max_noise (default=0.1): Maximum allowed ratio of noise points (label=-1). Fits exceeding this threshold receive error_score (see the sketch after this list).
- min_cluster_size (default=3): Minimum allowed size for the smallest cluster. Fits with smaller clusters receive error_score.
- error_score (default=np.nan): Value to assign when a fit fails or violates constraints. Use 'raise' to raise exceptions instead.
- n_jobs: Number of parallel jobs (-1 for all CPUs).
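A hypothetical sketch of how the two constraints could be checked (violates_constraints is an invented name; the actual internals may differ):

import numpy as np

def violates_constraints(labels, max_noise=0.1, min_cluster_size=3):
    labels = np.asarray(labels)
    if np.mean(labels == -1) > max_noise:      # too large a noise ratio
        return True
    sizes = np.bincount(labels[labels != -1])  # size of each cluster
    sizes = sizes[sizes > 0]
    return sizes.size > 0 and sizes.min() < min_cluster_size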
Multi-Metric Scoring
Evaluate multiple metrics simultaneously using a list, tuple, or dict:
tuner = ClusterTuner(
    DBSCAN(),
    param_grid={'eps': [0.3, 0.5, 0.7]},
    scoring=['silhouette', 'calinski_harabasz', 'neg_davies_bouldin'],
    refit='silhouette',  # Required: which metric to use for selecting best
)
tuner.fit(X)

# Results use 'test_' prefix for each metric
print(tuner.results_['test_silhouette'])
print(tuner.results_['test_calinski_harabasz'])
print(tuner.results_['test_neg_davies_bouldin'])
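Because results_ follows the cv_results_ layout (a dict of parallel arrays), it can be loaded into a pandas DataFrame for side-by-side comparison; assuming that layout:

import pandas as pd

# One row per parameter combination, one column per recorded quantity
df = pd.DataFrame(tuner.results_)
print(df.sort_values('test_silhouette', ascending=False).head())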
Supervised Scoring
When ground truth labels are available, use supervised metrics:
from sklearn.cluster import KMeans

tuner = ClusterTuner(
    KMeans(n_init='auto'),
    param_grid={'n_clusters': [2, 3, 4, 5]},
    scoring='adjusted_rand',
)
tuner.fit(X, y=y_true)  # Pass ground truth labels

print(tuner.best_score_)  # Adjusted Rand Index
Pipeline Support
ClusterTuner works with scikit-learn pipelines:
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

pipe = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),
    KMeans(n_init='auto'),
)
tuner = ClusterTuner(
    pipe,
    param_grid={'kmeans__n_clusters': [2, 3, 4, 5]},
    scoring='silhouette',
)
tuner.fit(X)
Scorers
Select a scorer by passing the string name of a clustering metric to scoring, e.g., 'silhouette', 'calinski_harabasz', or 'adjusted_rand' (the _score suffix is optional).
Recognized Scorer Names
Unsupervised metrics (no ground truth required):
- 'silhouette' / 'silhouette_score'
- 'silhouette_euclidean' / 'silhouette_score_euclidean'
- 'silhouette_cosine' / 'silhouette_score_cosine'
- 'neg_davies_bouldin' / 'neg_davies_bouldin_score'
- 'calinski_harabasz' / 'calinski_harabasz_score'
Supervised metrics (require ground truth labels y):
- 'mutual_info' / 'mutual_info_score'
- 'normalized_mutual_info' / 'normalized_mutual_info_score'
- 'adjusted_mutual_info' / 'adjusted_mutual_info_score'
- 'rand' / 'rand_score'
- 'adjusted_rand' / 'adjusted_rand_score'
- 'completeness' / 'completeness_score'
- 'fowlkes_mallows' / 'fowlkes_mallows_score'
- 'homogeneity' / 'homogeneity_score'
- 'v_measure' / 'v_measure_score'
Naming Convention
Following sklearn's convention, metrics where lower is better use a neg_ prefix. The score is negated internally so that higher values always indicate better clustering:
- 'neg_davies_bouldin': the Davies-Bouldin index (lower raw values = better separation)
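In other words, the built-in scorer behaves like a negated version of sklearn's raw metric:

from sklearn.metrics import davies_bouldin_score

# Equivalent in spirit to the built-in 'neg_davies_bouldin' scorer
def neg_davies_bouldin(X, labels):
    return -davies_bouldin_score(X, labels)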
Custom Scorers
Create custom scorers using make_scorer:
from cluster_tuner import make_scorer

# Unsupervised scorer: score_func(X, labels)
def my_metric(X, labels):
    return some_score  # any float; higher must mean better

scorer = make_scorer(my_metric, ground_truth=False)

# Supervised scorer: score_func(y_true, labels)
def my_supervised_metric(y_true, labels):
    return some_score

scorer = make_scorer(my_supervised_metric, ground_truth=True)
tuner = ClusterTuner(estimator, param_grid, scoring=scorer)
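For instance, a concrete unsupervised scorer that subtracts a small per-cluster penalty from the silhouette, biasing the search toward fewer clusters (penalized_silhouette and the 0.01 weight are illustrative choices, not part of the library):

from sklearn.metrics import silhouette_score
from cluster_tuner import make_scorer

# Silhouette minus a small penalty per cluster found (noise label -1 excluded)
def penalized_silhouette(X, labels):
    n_clusters = len(set(labels) - {-1})
    return silhouette_score(X, labels) - 0.01 * n_clusters

scorer = make_scorer(penalized_silhouette, ground_truth=False)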
Caveats
Comparing Clustering Algorithms
Consider your dataset and goals before comparing clustering algorithms. A higher score doesn't necessarily mean a better choice—different algorithms have different benefits, drawbacks, and use cases.
Credits
Most of the credit goes to the scikit-learn developers for the engineering behind the search estimators.