# ScheduleTools

Professional spreadsheet wrangling utilities for parsing, splitting, and expanding schedule data.
## Features

- **Flexible Parsing**: Parse schedule data from various formats with configurable date/time formats and block detection
- **Smart Splitting**: Split CSV data into multiple files based on grouping criteria with optional filtering
- **Column Expansion**: Transform data to match specific output formats with configurable mappings
- **Dual Interface**: Use as a Python library for programmatic access or as a CLI tool for file operations
- **Professional Design**: Clean API, comprehensive error handling, and type hints
## Installation

```bash
pip install scheduletools
```

For development installation:

```bash
git clone https://github.com/yourusername/scheduletools.git
cd scheduletools
pip install -e ".[dev]"
```
## Usage

### Programmatic Usage

```python
from scheduletools import ScheduleParser, CSVSplitter, ScheduleExpander

# Parse schedule with default settings
parser = ScheduleParser("schedule.txt", reference_date="2025-07-21")
parsed_data = parser.parse()

# Parse with custom configuration
# (uses the 0.3.0 key `date_column_name`; the older `start_marker` and
# `skip_meta_rows` keys were removed in 0.3.0)
custom_config = {
    "Format": {
        "Date": "%m/%d/%Y",
        "Time": "%I:%M %p"
    },
    "Block Detection": {
        "date_column_name": "Date"
    },
    "Missing Values": {
        "Omit": True,
        "Replacement": "TBD"
    }
}

parser = ScheduleParser(
    "schedule.txt",
    reference_date="2025-07-21",
    config=custom_config
)
parsed_data = parser.parse()

# Split by team
splitter = CSVSplitter(parsed_data, "Team")
team_schedules = splitter.split()

# Expand with a template (see ScheduleExpander below for the template format)
expander = ScheduleExpander(team_schedules["16U"], expansion_template)
expanded_data = expander.expand()
```
### As a CLI Tool

```bash
# Parse a schedule file with the default block marker
schtool parse schedule.txt -o parsed_schedule.csv

# Parse with a custom date column name
schtool parse schedule.txt --date-column "Day" -o parsed_schedule.csv

# Split by team
schtool split parsed_schedule.csv -g Team -o team_schedules/

# Expand with a template
schtool expand team_schedules/Team_A.csv template.json -o final_schedule.csv

# Complete workflow
schtool process schedule.txt -o output/ -t template.json
```
## Documentation

### ScheduleParser

Parse schedule data from various formats into structured DataFrames.

```python
from scheduletools import ScheduleParser

# Basic usage with the default date column name ("Date")
parser = ScheduleParser("schedule.txt")
df = parser.parse()

# With a custom date column name
parser = ScheduleParser("schedule.txt", date_column_name="Day")
df = parser.parse()

# With custom configuration
parser = ScheduleParser(
    "schedule.txt",
    config_path="config.json",
    reference_date="2025-09-02",
    date_column_name="Day"
)
df = parser.parse()
```
**Configuration Format:**

```json
{
    "Format": {
        "Date": "%m/%d/%Y",
        "Time": "%I:%M %p",
        "Duration": "H:MM"
    },
    "Block Detection": {
        "date_column_name": "Date"
    },
    "Missing Values": {
        "Omit": true,
        "Replacement": "missing"
    },
    "Split": {
        "Skip": false,
        "Separator": "/"
    }
}
```
**Block Detection:**

The parser uses a configurable date column name to identify where schedule blocks begin. The `date_column_name` setting names the date column, which is always the first column of each block; by default, the parser looks for "Date" in the first column of each row. When parsing blocks, rows without valid dates in the date column are skipped automatically.
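The detection step can be illustrated with a small stdlib-only sketch. This is not the library's implementation; the header marker and the skip-rows-without-valid-dates behavior follow the description above, while the input format and helper name (`find_blocks`) are assumptions for illustration:

```python
import csv
import io
from datetime import datetime

def find_blocks(text, date_column_name="Date", date_format="%m/%d/%Y"):
    """Toy block detector: a block starts at a row whose first cell
    equals `date_column_name`; data rows whose first cell does not
    parse as a date are skipped."""
    blocks, current = [], None
    for row in csv.reader(io.StringIO(text)):
        if row and row[0] == date_column_name:
            current = {"header": row, "rows": []}
            blocks.append(current)
        elif current is not None and row:
            try:
                datetime.strptime(row[0], date_format)
            except ValueError:
                continue  # no valid date in the first column -> skip the row
            current["rows"].append(row)
    return blocks

# Hypothetical input: a title row, a header row, data, and a meta row.
sample = """Fall Practice Plan,,
Date,Time,Team
07/21/2025,6:00 PM,16U
rink closed for maintenance,,
07/22/2025,7:00 PM,18U
"""
blocks = find_blocks(sample)
print(len(blocks))             # 1 block found
print(len(blocks[0]["rows"]))  # 2 valid data rows kept; the meta row is dropped
```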
### CSVSplitter

Split CSV data into multiple DataFrames based on grouping criteria.

```python
from scheduletools import CSVSplitter

# Split by a single column
splitter = CSVSplitter("data.csv", "Team")
teams = splitter.split()

# Split by multiple columns with filtering
splitter = CSVSplitter(
    "data.csv",
    ["Week", "Team"],
    include_values=["Week_1", "Week_2"],
    exclude_values=["Team_C"]
)
filtered_groups = splitter.split()
```
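Conceptually, the split is a group-by on the chosen column with optional include/exclude filters applied to the group keys. A minimal stdlib sketch of that idea (the function `split_rows` is hypothetical, not the library's code):

```python
import csv
import io

def split_rows(text, group_col, include_values=None, exclude_values=None):
    """Group CSV rows by `group_col`, keeping only groups that pass
    the optional include/exclude filters."""
    groups = {}
    for row in csv.DictReader(io.StringIO(text)):
        key = row[group_col]
        if include_values is not None and key not in include_values:
            continue  # not on the allow-list
        if exclude_values is not None and key in exclude_values:
            continue  # explicitly excluded
        groups.setdefault(key, []).append(row)
    return groups

data = "Date,Team\n07/21/2025,16U\n07/22/2025,18U\n07/23/2025,16U\n"
groups = split_rows(data, "Team", exclude_values=["18U"])
print(sorted(groups))      # ['16U']
print(len(groups["16U"]))  # 2
```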
### ScheduleExpander

Expand schedule data to include required columns with mappings and defaults.

```python
from scheduletools import ScheduleExpander

# Expand with configuration
config = {
    "Required": ["Date", "Time", "Team", "Location", "Notes"],
    "defaults": {
        "Location": "Main Arena",
        "Notes": ""
    },
    "Mapping": {
        "Start Time": "Time",
        "Team Name": "Team"
    }
}

expander = ScheduleExpander("input.csv", config)
expanded_df = expander.expand()
```
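The interplay of `Required`, `defaults`, and `Mapping` can be pictured with a per-row sketch: source columns are first renamed via `Mapping`, then every `Required` column is filled from the renamed row or from `defaults`. This is a simplified stand-in, not the library's implementation, and `expand_row` is a hypothetical helper:

```python
def expand_row(row, config):
    """Rename columns per `Mapping`, then emit every `Required` column,
    falling back to `defaults` (or "") when the column is absent."""
    mapping = config.get("Mapping", {})
    defaults = config.get("defaults", {})
    # Rename source columns, e.g. "Start Time" -> "Time"
    renamed = {mapping.get(k, k): v for k, v in row.items()}
    # Emit exactly the required columns, in order
    return {col: renamed.get(col, defaults.get(col, ""))
            for col in config["Required"]}

config = {
    "Required": ["Date", "Time", "Team", "Location", "Notes"],
    "defaults": {"Location": "Main Arena", "Notes": ""},
    "Mapping": {"Start Time": "Time", "Team Name": "Team"},
}
row = {"Date": "07/21/2025", "Start Time": "6:00 PM", "Team Name": "16U"}
print(expand_row(row, config))
# {'Date': '07/21/2025', 'Time': '6:00 PM', 'Team': '16U',
#  'Location': 'Main Arena', 'Notes': ''}
```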
## Configuration

ScheduleParser supports flexible configuration through config objects or JSON files. Configuration options include:

### Format Settings

- `Date`: Date format string (default: `"%m/%d/%Y"`)
- `Time`: Time format string (default: `"%I:%M %p"`)
- `Duration`: Duration format (default: `"H:MM"`)

### Block Detection

- `date_column_name`: Name of the date column that indicates the start of a block (default: `"Date"`)

### Missing Values

- `Omit`: Whether to omit missing values (default: `True`)
- `Replacement`: Value to use for missing entries (default: `"missing"`)

### Split Settings

- `Skip`: Whether to skip team splitting (default: `False`)
- `Separator`: Character to split team names (default: `"/"`)
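Taken together, the defaults above form a complete baseline configuration. One plausible way to combine them with a user-supplied JSON file is a shallow merge within each top-level section; this is a sketch under that assumption, not necessarily how ScheduleParser merges internally:

```python
import json

# Baseline values as documented in the settings lists above
DEFAULTS = {
    "Format": {"Date": "%m/%d/%Y", "Time": "%I:%M %p", "Duration": "H:MM"},
    "Block Detection": {"date_column_name": "Date"},
    "Missing Values": {"Omit": True, "Replacement": "missing"},
    "Split": {"Skip": False, "Separator": "/"},
}

def load_config(path=None):
    """Overlay a user JSON config on the documented defaults,
    merging shallowly within each top-level section."""
    merged = {section: dict(values) for section, values in DEFAULTS.items()}
    if path is not None:
        with open(path) as f:
            for section, values in json.load(f).items():
                merged.setdefault(section, {}).update(values)
    return merged

cfg = load_config()
print(cfg["Missing Values"]["Replacement"])  # missing
```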
### Example Configuration

```json
{
    "Format": {
        "Date": "%m/%d/%Y",
        "Time": "%I:%M %p",
        "Duration": "H:MM"
    },
    "Block Detection": {
        "date_column_name": "Date"
    },
    "Missing Values": {
        "Omit": true,
        "Replacement": "TBD"
    },
    "Split": {
        "Skip": false,
        "Separator": "/"
    }
}
```
## CLI Commands

### schtool parse
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Changelog

### 0.3.0

- **Improved Field Naming**: Changed `start_marker` to `date_column_name` for better clarity
- **Dynamic Data Detection**: Replaced hard-coded row indices with automatic detection of where data starts
- **Optimized Parsing**: Combined block extraction and processing into a single efficient loop
- **Simplified Block Detection**: Removed meta-pattern checking and the `skip_meta_rows` configuration
- **Date-Only Validation**: Now validates only that the date column contains valid dates, automatically skipping invalid rows
- **Cleaner Configuration**: Simplified the Block Detection section to only include `date_column_name`
- **Updated Documentation**: Clarified that `date_column_name` specifies the date column name

### 0.2.0

- **Enhanced Configuration System**: Added support for passing config objects directly to ScheduleParser
- **Improved Block Detection**: Fixed block boundary detection logic for more reliable parsing
- **Better Error Handling**: Enhanced error messages and exception handling for configuration files
- **Meta Row Detection**: Improved handling of empty strings and meta-information rows
- **Complete Workflow Support**: Fixed end-to-end workflow testing and validation
- **Documentation Updates**: Added comprehensive configuration documentation and examples

### 0.1.0

- Initial release
- Core parsing, splitting, and expansion functionality
- CLI interface with comprehensive commands
- Professional API design with type hints
- Comprehensive error handling
- Configurable block detection with custom markers