# circover
NHOP metric, geometry-preserving seed selection, circular oversampling, and controlled degradation benchmark for imbalanced classification.
From the thesis: "From Distributional Similarity to Causal Imbalance: NHOP, Circular Oversampling, and a Controlled Degradation Study" — Parsa Hajiannejad, Università degli Studi di Milano, 2025.
## Install

```
pip install circover
```
## Modules
| Class | Description |
|---|---|
| `NHOP` | Normalised Histogram Overlap Percentage metric |
| `GeometricSeedSelector` | Geometry-preserving seed selection (NHOP + AGTP + JSD + Z) |
| `GVMCO` | Gravity-biased Von Mises Circular Oversampling |
| `LRECO` | Local Region Estimation Circular Oversampling (Voronoi-constrained) |
| `LSCO` | Layered Segmental Circular Oversampling |
| `DegradationBench` | Controlled degradation-and-recovery benchmark |
## Quick start
```python
import circover as cc

# NHOP: measure how faithfully synthetic data reproduces the original distribution
nhop = cc.NHOP(n_bins=30)
nhop.score(X_original, X_synthetic)          # scalar in [0, 1]
nhop.score_per_feature(X_original, X_synth)  # per-feature array
nhop.tv_per_feature(X_original, X_synth)     # TV distance = 1 - NHOP
```
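For intuition, the metric as described above (per-feature histogram overlap on shared bins, with TV distance = 1 − NHOP) can be sketched in a few lines of NumPy. The function name `nhop_per_feature` and the binning details are illustrative assumptions, not the package's internals:

```python
import numpy as np

def nhop_per_feature(X_orig, X_synth, n_bins=30):
    """Per-feature histogram overlap: 1.0 for identical samples, 0.0 for disjoint ones."""
    X_orig = np.asarray(X_orig, dtype=float)
    X_synth = np.asarray(X_synth, dtype=float)
    scores = []
    for j in range(X_orig.shape[1]):
        # shared bin edges spanning both samples, so the histograms are comparable
        lo = min(X_orig[:, j].min(), X_synth[:, j].min())
        hi = max(X_orig[:, j].max(), X_synth[:, j].max())
        edges = np.linspace(lo, hi, n_bins + 1)
        p, _ = np.histogram(X_orig[:, j], bins=edges)
        q, _ = np.histogram(X_synth[:, j], bins=edges)
        p = p / p.sum()
        q = q / q.sum()
        # overlap = sum of bin-wise minima of the two normalised histograms
        scores.append(np.minimum(p, q).sum())
    return np.array(scores)
```

Identical samples score exactly 1 per feature, distributions with no shared bins score 0, and the TV distance follows as `1 - nhop_per_feature(...)`.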
```python
# Geometry-preserving seed selection
selector = cc.GeometricSeedSelector(n_seeds=20, random_state=42)
seed_indices, score = selector.select(X_minority)
```
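The full selector combines four criteria (NHOP, AGTP, JSD, Z). As a rough illustration of the "geometry-preserving" idea using only the histogram-overlap criterion, a greedy forward selection might look like the sketch below; `hist_overlap` and `greedy_seeds` are hypothetical names and this is not the package's algorithm:

```python
import numpy as np

def hist_overlap(X, S, n_bins=10):
    """Mean per-feature histogram overlap between the full set X and a subset S."""
    score = 0.0
    for j in range(X.shape[1]):
        edges = np.linspace(X[:, j].min(), X[:, j].max(), n_bins + 1)
        p, _ = np.histogram(X[:, j], bins=edges)
        q, _ = np.histogram(S[:, j], bins=edges)
        score += np.minimum(p / p.sum(), q / max(q.sum(), 1)).sum()
    return score / X.shape[1]

def greedy_seeds(X, n_seeds):
    """Greedily pick the point whose inclusion best preserves the distribution."""
    chosen, remaining = [], list(range(len(X)))
    for _ in range(n_seeds):
        best = max(remaining, key=lambda i: hist_overlap(X, X[chosen + [i]]))
        chosen.append(best)
        remaining.remove(best)
    return np.array(chosen)
```

The intent is the same as the selector's: a small seed set whose empirical distribution stays close to that of the whole minority class.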
```python
# Circular oversamplers — drop-in replacements for SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

pipe = Pipeline([
    ("over", cc.GVMCO(random_state=42)),  # or LRECO, LSCO
    ("clf", RandomForestClassifier()),
])
pipe.fit(X_train, y_train)
```
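The circular idea behind GVM-CO can be sketched in two dimensions: treat each minority point as lying on a circle around the cluster centre and draw new angles from a von Mises distribution concentrated near existing samples. This is an illustrative reconstruction of the concept only, ignoring clustering, the gravity bias, and PCA; `circular_oversample` is not the package's API:

```python
import numpy as np

rng = np.random.default_rng(42)

def circular_oversample(X_min, n_new, kappa=4.0):
    """Generate n_new 2-D points by perturbing seed angles with von Mises noise."""
    center = X_min.mean(axis=0)
    offsets = X_min - center
    radii = np.linalg.norm(offsets, axis=1)
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))            # pick a random seed sample
        theta = rng.vonmises(angles[i], kappa)  # angle concentrated near the seed
        r = radii[i]                            # keep the seed's radius
        new.append(center + r * np.array([np.cos(theta), np.sin(theta)]))
    return np.vstack(new)
```

Because the radius is inherited from a real sample, every synthetic point stays on the annulus occupied by the minority class, which is the geometric constraint linear SMOTE-style interpolation does not enforce.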
## Degradation-and-Recovery Benchmark
Run any oversampler (or pipeline) through a controlled degradation protocol to measure its recovery power:
```python
bench = cc.DegradationBench(steps=10, metric="f1", cv=5, random_state=42)
results = bench.run(pipe, X, y)  # DataFrame: degradation, score, n_minority
bench.plot(results)              # recovery curve with ARI annotation
ari = cc.DegradationBench.area_recovery_index(results)  # scalar summary
```
The `DegradationBench`:

- Removes minority samples in `steps` equal increments (0% → 100%)
- Evaluates the estimator via cross-validation at each degradation level
- Computes the Area Recovery Index (ARI) = ∫ score(δ) dδ; higher means better recovery
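The protocol can be approximated end to end with plain scikit-learn, without circover. `degradation_curve` and the normalised trapezoidal ARI below are illustrative reconstructions under stated assumptions (class 1 is the minority; degradation stops short of 100% so cross-validation stays valid), not the package's exact implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def degradation_curve(estimator, X, y, steps=5, cv=3, seed=0):
    """Progressively drop minority samples and re-score the estimator via CV."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)      # assumption: class 1 is the minority
    fracs = np.linspace(0.0, 0.9, steps)   # stop before 100% so every fold has both classes
    scores = []
    for f in fracs:
        drop = rng.choice(minority, size=int(f * len(minority)), replace=False)
        keep = np.setdiff1d(np.arange(len(y)), drop)
        scores.append(cross_val_score(estimator, X[keep], y[keep],
                                      cv=cv, scoring="f1").mean())
    return fracs, np.array(scores)

def area_recovery_index(fracs, scores):
    """Trapezoidal integral of score over the degradation axis, normalised to [0, 1]."""
    area = np.sum((scores[1:] + scores[:-1]) / 2 * np.diff(fracs))
    return area / (fracs[-1] - fracs[0])

X, y = make_classification(n_samples=400, weights=[0.8, 0.2], random_state=0)
fracs, scores = degradation_curve(LogisticRegression(max_iter=1000), X, y)
ari = area_recovery_index(fracs, scores)
```

An oversampler that regenerates useful minority structure keeps the curve flat as degradation grows, which shows up directly as a larger ARI.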
Compare multiple methods by their ARI:
```python
results_smote = bench.run(smote_pipe, X, y)
results_gvm = bench.run(gvm_pipe, X, y)

ari_smote = cc.DegradationBench.area_recovery_index(results_smote)
ari_gvm = cc.DegradationBench.area_recovery_index(results_gvm)
print(f"SMOTE ARI: {ari_smote:.3f}  GVM-CO ARI: {ari_gvm:.3f}")
```
## Key parameters
```python
cc.GVMCO(
    n_clusters=5,    # K-Means clusters on the minority class
    k_neighbors=5,   # k-NN graph for circle formation
    kappa_max=4.0,   # max Von Mises concentration
    use_pca=True,    # False = native-dimension mode
    random_state=42,
)

cc.NHOP(n_bins=30)   # histogram bins B (default 30, stable range: 20-50)

cc.DegradationBench(
    steps=10,        # number of degradation levels
    metric="f1",     # sklearn scoring string
    cv=5,            # cross-validation folds
    random_state=42,
)
```
All oversamplers are compatible with imbalanced-learn pipelines and sklearn cross-validation.