Generic data handling utilities including data splitting and analysis.
dsr-data-tools
Data analysis and exploration tools for exploratory data analysis (EDA).
Version 1.4.2: Tightened RecommendationManager.save_to_yaml compatibility with the latest dsr-files path typing so YAML persistence works cleanly with local paths, cloud paths, and static type checking.
Features
- Dataset Analysis: Comprehensive statistical summaries and data quality assessment.
- Data Exploration: Tools for understanding data distributions, correlations, and patterns.
- Quality Metrics: Missing value detection, data type analysis, and anomaly identification.
- Statistically Guided Feature Interactions: Automatic discovery of meaningful feature interactions using Mutual Information and Pearson Correlation.
- Recommendation Engine: Intelligent pipeline for Boolean mapping, Numerical casting, and Datetime standardization with customizable execution priority.
- User-Guided ColumnHints: Explicitly guide the engine with metadata for financial, geospatial, or temporal data to override automated heuristics.
- Intelligent Boolean Mapping: Detects and standardizes diverse truthiness indicators (e.g., "Y/N", "Active/Inactive", "1/0") into proper boolean types.
- Cyclic Feature Extraction: Decomposes datetimes into periodic Sine/Cosine features to preserve temporal relationships for machine learning.
- Numerical Precision Optimization: Standardizes decimal depth using configurable rounding modes (Nearest, Bankers, Up, Down).
- Metadata-Driven Customization: Uses class-level metadata to define "editable" fields, enabling seamless integration with YAML-based orchestration.
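The cyclic feature extraction listed above can be sketched with plain pandas and NumPy. This is an illustrative sketch of the underlying idea, not the library's internal implementation; the column name `ts` and the hour-of-day period are made-up examples:

```python
import numpy as np
import pandas as pd

# Hypothetical timestamp column; hour-of-day is periodic with period 24
df = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-01 00:00", "2025-01-01 06:00", "2025-01-01 12:00"])
})
hour = df["ts"].dt.hour

# Map each hour onto the unit circle so 23:00 and 00:00 end up adjacent,
# preserving the temporal relationship for downstream models
df["hour_sin"] = np.sin(2 * np.pi * hour / 24)
df["hour_cos"] = np.cos(2 * np.pi * hour / 24)
```

The sine/cosine pair is what keeps the encoding lossless: either component alone would map distinct hours to the same value.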
Installation
pip install dsr-data-tools
Usage
import pandas as pd
from dsr_data_tools import analyze_dataset
# Load your data
df = pd.read_csv('data.csv')
# Perform comprehensive analysis
analyze_dataset(df)
Datetime Conversion Recommendation
generate_recommendations() detects object/string columns that are likely datetimes and recommends converting them to a proper datetime dtype.
import pandas as pd
from dsr_data_tools.analysis import generate_recommendations
from dsr_data_tools.recommendations import apply_recommendations
# Example column with mostly valid date strings
df = pd.DataFrame({
    'date_str': [
        '2025-01-01', '2025-01-02', '2025-01-03',
        '2025-01-04', 'invalid',  # one invalid value
    ] * 10  # scale up rows
})
recs = generate_recommendations(df)
# If detected, apply the datetime conversion recommendation
if 'date_str' in recs and 'datetime_conversion' in recs['date_str']:
    df_converted = apply_recommendations(df, {
        'date_str': recs['date_str']['datetime_conversion']
    })
    # Column is now datetime64; invalid entries coerced to NaT
    print(df_converted['date_str'].dtype)  # datetime64[ns]
Boolean Classification
# The engine now handles semantic mapping, recognizing 'Y' as True
# based on common indicators rather than just alphabetical order
from dsr_data_tools.recommendations import BooleanClassificationRecommendation
df = pd.DataFrame({"active": ["Y", "N", "Y"]})
rec = BooleanClassificationRecommendation(
    column_name="active",
    description="Convert to bool",
    values=["Y", "N"]
)
# 'active' becomes [True, False, True]
df_bool = rec.apply(df)
Date Durations
Calculate the numeric duration between two datetime columns in specific units such as 'seconds', 'minutes', 'hours', or 'days'.
from dsr_data_tools.recommendations import DatetimeDurationRecommendation
# Example data (illustrative)
df = pd.DataFrame({
    "order_date": pd.to_datetime(["2025-01-01", "2025-01-02"]),
    "delivery_date": pd.to_datetime(["2025-01-03", "2025-01-06"]),
})
rec = DatetimeDurationRecommendation(
    start_column="order_date",
    end_column="delivery_date",
    output_column="days_to_deliver",
    unit="days"
)
df = rec.apply(df)
Interactive Missing Value Handling
The engine allows choosing between statistical imputation (mean/median/mode), constant filling, or row/column removal.
from dataclasses import fields
# Discover which fields are whitelisted for user edits in your pipeline
editable_fields = [
    f.name for f in fields(rec)
    if f.metadata.get("editable", False)
]
# Returns: ['strategy', 'fill_value', 'notes', 'enabled', 'alias']
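The three strategies mentioned above can be illustrated with plain pandas. This is a sketch of the underlying operations, not the engine's own API; the column name and values are made up:

```python
import pandas as pd

df = pd.DataFrame({"age": [25.0, None, 31.0, None, 40.0]})

# Statistical imputation: fill gaps with the column median
imputed = df["age"].fillna(df["age"].median())

# Constant filling: substitute a sentinel value
constant = df["age"].fillna(-1)

# Row removal: drop any row with a missing value in the column
dropped = df.dropna(subset=["age"])
```

Which strategy is appropriate depends on how much signal the missingness itself carries; the engine surfaces the choice rather than hard-coding one.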
Guided Recommendations with ColumnHints
Users can provide a ColumnHint to specify the 'logical type' of a column and set constraints like rounding, bounds, or specific feature extraction needs.
import pandas as pd
from dsr_data_tools.analysis import RecommendationManager
from dsr_data_tools.recommendations import ColumnHint, RoundingMode
# Load data
df = pd.read_csv('data.csv')
# Define explicit hints to override or guide the engine
hints = {
    "unit_price": ColumnHint.financial(decimal_places=2, rounding_mode=RoundingMode.BANKERS),
    "user_id": ColumnHint.numeric(convert_to_int=True),
    "internal_notes": ColumnHint.ignore()
}
manager = RecommendationManager()
manager.generate_recommendations(df, hints=hints)
# Display the recommended pipeline
for rec in manager._pipeline:
    rec.info()
Performance
This library is optimized for large-scale data processing using vectorized operations.
- Vectorized Integer Checks: Optimized from O(N) Python-level application to vectorized modulo operations, resulting in a 5–6× speed increase.
- Cached Data Scans: Common operations like dropna() and unique() are cached to minimize redundant data scans across wide datasets.
- Efficient Scaling: Outlier handling and scaling utilize NumPy vectorized operations and Scikit-Learn transformers for high throughput.
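The vectorized integer check above can be sketched as follows. This is an illustrative comparison of the two approaches, not the library's internal code:

```python
import pandas as pd

s = pd.Series([1.0, 2.5, 3.0, 4.25, 5.0])

# Per-element check: one Python call per value (O(N) interpreter overhead)
slow = s.apply(lambda x: float(x).is_integer())

# Vectorized check: a single modulo pass in NumPy
fast = (s % 1) == 0

# Both paths agree on which values are whole numbers
assert slow.tolist() == fast.tolist()
```

The speedup comes from replacing N interpreter round-trips with one C-level array operation, which is also what the benchmark below measures.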
Benchmarks
A benchmark script compares per-element apply(is_integer) against a vectorized modulo check. On large series, the vectorized approach is typically 5–6× faster.
python scripts/benchmark_integer_checks.py # default size (2,000,000)
python scripts/benchmark_integer_checks.py 5000000 # custom size
Or via Makefile target:
make benchmark # default N=2,000,000
make benchmark N=5000000 # custom size
Requirements
- Python >= 3.10
- dsr-utils >= 1.4.0
- numpy >= 2.4.4
- pandas >= 3.0.2
- scikit-learn >= 1.8.0
License
MIT License - see LICENSE file for details
File details
Details for the file dsr_data_tools-1.4.2.tar.gz.
File metadata
- Download URL: dsr_data_tools-1.4.2.tar.gz
- Upload date:
- Size: 54.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6e705e669017b0b12f15430dd96c494bcd347229e79acabc990b0cfea6c32b1b |
| MD5 | ab6937dbcd6ab6efc17e8c40302f8763 |
| BLAKE2b-256 | bd328cff9bc5d9fa094a793db3fd4beb3b00639f365b3bfb19fe9d577cee2644 |
Provenance
The following attestation bundles were made for dsr_data_tools-1.4.2.tar.gz:
Publisher: python-publish.yml on scottroberts140/dsr-data-tools
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: dsr_data_tools-1.4.2.tar.gz
- Subject digest: 6e705e669017b0b12f15430dd96c494bcd347229e79acabc990b0cfea6c32b1b
- Sigstore transparency entry: 1354024067
- Sigstore integration time:
- Permalink: scottroberts140/dsr-data-tools@dc648b7a3b3b95ee745d6d8801bb069db2a12073
- Branch / Tag: refs/tags/v1.4.2
- Owner: https://github.com/scottroberts140
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: python-publish.yml@dc648b7a3b3b95ee745d6d8801bb069db2a12073
- Trigger Event: release
File details
Details for the file dsr_data_tools-1.4.2-py3-none-any.whl.
File metadata
- Download URL: dsr_data_tools-1.4.2-py3-none-any.whl
- Upload date:
- Size: 51.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 8e7ef25534fd30e30d4aecaaa19e2da803ebbf34cb3bc4bbc032469417e090a5 |
| MD5 | ce21dda65f10c5bf26c035792a64b6f2 |
| BLAKE2b-256 | 470327ac9331eefc6ca4d8b93b8d0bc384cd59abfd993f223add18605d834a55 |
Provenance
The following attestation bundles were made for dsr_data_tools-1.4.2-py3-none-any.whl:
Publisher: python-publish.yml on scottroberts140/dsr-data-tools
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: dsr_data_tools-1.4.2-py3-none-any.whl
- Subject digest: 8e7ef25534fd30e30d4aecaaa19e2da803ebbf34cb3bc4bbc032469417e090a5
- Sigstore transparency entry: 1354024171
- Sigstore integration time:
- Permalink: scottroberts140/dsr-data-tools@dc648b7a3b3b95ee745d6d8801bb069db2a12073
- Branch / Tag: refs/tags/v1.4.2
- Owner: https://github.com/scottroberts140
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: python-publish.yml@dc648b7a3b3b95ee745d6d8801bb069db2a12073
- Trigger Event: release