PySpark Transform Registry
A simplified library for registering and loading PySpark transform functions using MLflow's model registry.
Installation
pip install pyspark-transform-registry
# or, with uv:
uv add pyspark-transform-registry
Quick Start
Register a Function
from pyspark_transform_registry import register_transform
from pyspark.sql import DataFrame
import pyspark.sql.functions as F
def clean_data(df: DataFrame) -> DataFrame:
    """Remove invalid records and standardize data."""
    return df.filter(F.col("amount") > 0).withColumn("status", F.lit("clean"))
# Register the transform
logged_model = register_transform(
    func=clean_data,
    name="analytics.etl.clean_data",
    description="Data cleaning transformation"
)
Load and Use a Transform
from pyspark_transform_registry import load_transform, load_transform_uri
# Load the registered transform
clean_data_func = load_transform("analytics.etl.clean_data", version=1)
# Or
clean_data_func = load_transform_uri("transforms:/analytics.etl.clean_data/1")
# Use it on your data
result = clean_data_func(your_dataframe)
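For example, applied to a small DataFrame (a minimal sketch; the column names follow the clean_data example above):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 120.0), (2, -5.0)], ["id", "amount"])

# Only rows with amount > 0 survive, and each gains status="clean".
clean_data_func(df).show()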
Features
- Simple API: Just two main functions, register_transform() and load_transform()
- Direct Registration: Register transforms directly from Python code
- File-based Registration: Load and register transforms from Python files
- Automatic Versioning: Integer-based versioning with automatic incrementing (see the sketch below)
- MLflow Integration: Built on MLflow's model registry
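A minimal sketch of the auto-versioning behavior, reusing clean_data from the Quick Start (it assumes versions start at 1 and increment on each re-registration under the same name):
from pyspark_transform_registry import register_transform, load_transform

# First registration under a new name creates version 1 (assumption).
register_transform(
    func=clean_data,
    name="analytics.etl.clean_data",
    description="Data cleaning transformation"
)

# Re-registering under the same name auto-increments the version.
register_transform(
    func=clean_data,
    name="analytics.etl.clean_data",
    description="Data cleaning transformation"
)

# Pin the exact version when loading.
clean_v2 = load_transform("analytics.etl.clean_data", version=2)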
Usage Examples
Direct Transform Registration
from pyspark_transform_registry import register_transform
from pyspark.sql import DataFrame
import pyspark.sql.functions as F
def risk_scorer(df: DataFrame, threshold: float = 100.0) -> DataFrame:
    """Calculate risk scores based on amount."""
    return df.withColumn(
        "risk_score",
        F.when(F.col("amount") > threshold, "high").otherwise("low")
    )
# Register with metadata
register_transform(
    func=risk_scorer,
    name="finance.scoring.risk_scorer",
    description="Risk scoring transformation",
    extra_pip_requirements=["numpy>=1.20.0"],
    tags={"team": "finance", "category": "scoring"}
)
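Loading a parameterized transform works the same way. Whether extra keyword arguments like threshold pass through to the loaded callable is an assumption here, shown only as a sketch:
from pyspark_transform_registry import load_transform

scorer = load_transform("finance.scoring.risk_scorer", version=1)

# Assumption: the loaded callable forwards extra keyword arguments
# to the original function's signature. Here df is any DataFrame
# with an "amount" column.
scored = scorer(df, threshold=250.0)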
File-based Registration
# transforms/data_processors.py
from pyspark.sql import DataFrame
import pyspark.sql.functions as F
def feature_engineer(df: DataFrame) -> DataFrame:
    """Create engineered features."""
    return df.withColumn("feature_1", F.col("amount") * 2)
# Register from file
register_transform(
    file_path="transforms/data_processors.py",
    function_name="feature_engineer",
    name="ml.features.feature_engineer",
    description="Feature engineering pipeline"
)
Source Code Inspection
# Load a transform
transform = load_transform("retail.processing.process_orders", version=1)
# Get the original source code
source_code = transform.get_source()
print(source_code) # Shows the original function definition
# Get the original function for inspection
original_func = transform.get_original_function()
print(f"Function name: {original_func.__name__}")
print(f"Docstring: {original_func.__doc__}")
Managing Transform Dependencies
Install dependencies for registered transforms automatically:
from pyspark_transform_registry import install_transform_requirements
# Install all dependencies for a transform
install_transform_requirements("transforms:/analytics.etl.clean_data/1")
# Then load the transform (dependencies are now available)
transform = load_transform("analytics.etl.clean_data", version=1)
You can also exclude certain packages (useful when running in environments like Databricks where some packages are pre-installed):
# Install dependencies but exclude packages already available in the environment
install_transform_requirements(
    "transforms:/analytics.etl.clean_data/1",
    exclude_packages=["pyspark", "mlflow", "pandas"]
)
Requirements
- Python 3.9+
- PySpark 3.0+
- MLflow 3.0+
Development
# Install development dependencies
make install
# Run tests
make test
# Run linting and formatting
make check
License
MIT License
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file pyspark_transform_registry-0.12.0.tar.gz.
File metadata
- Download URL: pyspark_transform_registry-0.12.0.tar.gz
- Upload date:
- Size: 15.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 28959d95c2977d27a0abd1ab4e21e9ee987f5c1ea1b2baef55a793661b9ccff3 |
| MD5 | 267ad6b8cd6826bba55cfd08890349ac |
| BLAKE2b-256 | 97c08988dc7a7a50134768e627ca2efffc86ce2ef2564b0ca325893137c2a44f |
File details
Details for the file pyspark_transform_registry-0.12.0-py3-none-any.whl.
File metadata
- Download URL: pyspark_transform_registry-0.12.0-py3-none-any.whl
- Upload date:
- Size: 9.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c106134aee4d88ace8e8f72e8caf681b282d964ced31f386ca8d6f2acd6f1e09 |
| MD5 | d4a15cb4ae58f631b5ce624c31b1c502 |
| BLAKE2b-256 | ea743e541cb901733e2c6268f3eebde5778e090d0dd75ca2e3488237b63bd67f |