
A scikit-learn compatible splitter for deterministic, ID-based train/test splits that prevent data leakage.


Stable Hash Splitter

StableHashSplit provides deterministic, ID-based train/test splits so that each sample stays in the same set across dataset updates. Standard random splits (such as sklearn.model_selection.train_test_split) reshuffle on every run, so when you refresh or append data, old test samples can reappear in training and leak into the model, producing overly optimistic evaluations. StableHashSplit avoids this by assigning each sample based on a hash of a stable identifier (e.g., a user ID or transaction ID).

Key goals:

  • Reproducible splits across dataset versions
  • Seamless scikit-learn compatibility (CV and pipelines)
  • Minimal and flexible API for common workflows
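The underlying idea can be sketched in a few lines of plain Python. This is a simplified illustration of the hash-thresholding technique, not the package's actual implementation:

```python
import zlib

def in_test_set(identifier, test_size=0.2):
    """Deterministically decide whether an ID belongs to the test set.

    The CRC32 hash of the identifier is spread over [0, 2**32), so
    comparing it to test_size * 2**32 sends roughly a test_size
    fraction of all IDs to the test set -- and a given ID always
    gets the same answer, no matter how the dataset changes.
    """
    h = zlib.crc32(str(identifier).encode("utf-8"))
    return h < test_size * 2**32

user_ids = [1001, 1002, 1003, 1004, 1005]
test_ids = [uid for uid in user_ids if in_test_set(uid)]
train_ids = [uid for uid in user_ids if not in_test_set(uid)]
```

Because the assignment depends only on the ID, adding new rows never moves an existing ID between sets.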

Features

  • Deterministic & stable assignment using a hash of a stable identifier
  • scikit-learn compatible: implements split and get_n_splits
  • Works with pandas DataFrames, NumPy arrays, and array-likes
  • Customizable hash function and ID column; supports using the DataFrame index
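To make the scikit-learn compatibility concrete, here is a minimal, hypothetical sketch of what a splitter exposing the split / get_n_splits interface looks like. The class name and internals are illustrative only; the real StableHashSplit has more options:

```python
import zlib

class HashSplitSketch:
    """Minimal sketch of a CV splitter with the scikit-learn splitter
    interface (split / get_n_splits). Illustrative, not the real class."""

    def __init__(self, test_size=0.2):
        self.test_size = test_size

    def get_n_splits(self, X=None, y=None, groups=None):
        # Hash-based assignment produces exactly one train/test split.
        return 1

    def split(self, X, y=None, groups=None):
        # Treat each element of X as a stable identifier and route its
        # positional index into train or test by hash thresholding.
        threshold = self.test_size * 2**32
        train_idx, test_idx = [], []
        for i, identifier in enumerate(X):
            h = zlib.crc32(str(identifier).encode("utf-8"))
            (test_idx if h < threshold else train_idx).append(i)
        yield train_idx, test_idx

splitter = HashSplitSketch(test_size=0.2)
train_idx, test_idx = next(splitter.split([1001, 1002, 1003, 1004, 1005]))
```

Anything that iterates a CV splitter (cross_val_score, GridSearchCV) consumes exactly this shape: an iterator of (train_indices, test_indices) pairs.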

Installation

pip install stable-hash-splitter

Quick start

import pandas as pd
from stable_hash_splitter import StableHashSplit

data = pd.DataFrame({
    'user_id': [1001, 1002, 1003, 1004, 1005],
    'feature_1': [0.5, 0.3, 0.8, 0.1, 0.9],
    'feature_2': [10, 20, 30, 40, 50],
    'target': [1, 0, 1, 0, 1]
})

splitter = StableHashSplit(test_size=0.2, id_column='user_id')
X_train, X_test, y_train, y_test = splitter.train_test_split(
    data[['user_id', 'feature_1', 'feature_2']],
    data['target']
)

print(f"Train size: {len(X_train)}, Test size: {len(X_test)}")

API reference

StableHashSplit(test_size=0.2, id_column='id', hash_func=None, random_state=None)

  • test_size (float): fraction of samples assigned to the test set (0 < test_size < 1).
  • id_column (str | int | None): column name or index with the stable identifier. If None and X is a DataFrame, the DataFrame index is used.
  • hash_func (callable): function that maps an identifier to a non-negative integer hash. Defaults to CRC32.
  • random_state: accepted for API compatibility but ignored; splits are deterministic.
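The hash_func contract is simply "identifier in, non-negative integer out." As an illustration, a SHA-256-based hash gives a more uniform spread than CRC32 for short numeric IDs; the exact call signature the package expects should be checked against its docstring:

```python
import hashlib

def sha256_hash(identifier):
    """Map an identifier to a non-negative 32-bit integer via SHA-256.

    Any callable with this shape (identifier -> non-negative int)
    fits the hash_func contract described above.
    """
    digest = hashlib.sha256(str(identifier).encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big")

# Hypothetical usage (assumes the constructor accepts this callable):
# splitter = StableHashSplit(test_size=0.2, id_column='user_id',
#                            hash_func=sha256_hash)
```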

Important notes

  • Deterministic: the same ID always maps to the same split.
  • For array inputs with no id_column provided, row indices are used as identifiers.
  • The class yields a single split (compatible with scikit-learn CV APIs).
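The stability guarantee in the notes above can be checked directly: splitting a dataset and then splitting a grown version of it leaves every original ID in its original set. This is a plain-Python demonstration of the property using the same CRC32 rule, not the package's code:

```python
import zlib

def assigned_to_test(identifier, test_size=0.2):
    # Same deterministic rule: hash the ID, compare to a threshold.
    return zlib.crc32(str(identifier).encode("utf-8")) < test_size * 2**32

v1_ids = list(range(1000, 1100))           # original dataset
v2_ids = v1_ids + list(range(2000, 2050))  # dataset after an update

v1_test = {i for i in v1_ids if assigned_to_test(i)}
v2_test = {i for i in v2_ids if assigned_to_test(i)}

# Every ID in the v1 test set is still in the v2 test set:
assert v1_test <= v2_test
# And no v1 training ID leaked into the v2 test set:
assert not (set(v1_ids) - v1_test) & v2_test
```

A random split rerun on v2_ids would offer no such guarantee; this determinism is the whole point of hashing stable IDs.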

Example: use in GridSearchCV

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

splitter = StableHashSplit(test_size=0.2, id_column='user_id')
model = RandomForestClassifier()

param_grid = {'n_estimators': [50, 100]}
grid_search = GridSearchCV(model, param_grid, cv=splitter)
grid_search.fit(X, y)  # X must include the 'user_id' column

Development & testing

Install in editable mode to develop locally:

pip install -e .
pip install pytest
pytest

Attribution

The concept and motivation for ID-based deterministic splits are inspired by Aurélien Géron's book "Hands-On Machine Learning with Scikit-Learn and PyTorch". This project is an independent implementation and not a copy of that work; the book influenced design patterns and best practices used here.

Contributing

Contributions welcome — please open issues or submit pull requests. See PUBLISH.md for publishing steps and CI instructions.

License

MIT — see the LICENSE file.
