
PySpark Transform Registry

A simplified library for registering and loading PySpark transform functions using MLflow's model registry.

Installation

# with pip
pip install pyspark-transform-registry

# or with uv
uv add pyspark-transform-registry

Quick Start

Register a Function

from pyspark_transform_registry import register_transform
from pyspark.sql import DataFrame
import pyspark.sql.functions as F

def clean_data(df: DataFrame) -> DataFrame:
    """Remove invalid records and standardize data."""
    return df.filter(F.col("amount") > 0).withColumn("status", F.lit("clean"))

# Register the transform
logged_model = register_transform(
    func=clean_data,
    name="analytics.etl.clean_data",
    description="Data cleaning transformation"
)

Load and Use a Transform

from pyspark_transform_registry import load_transform, load_transform_uri

# Load the registered transform
clean_data_func = load_transform("analytics.etl.clean_data", version=1)

# Or load by model URI (format: transforms:/<name>/<version>)
clean_data_func = load_transform_uri("transforms:/analytics.etl.clean_data/1")

# Use it on your data
result = clean_data_func(your_dataframe)
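
For an end-to-end check, the loaded transform can be applied to a small in-memory DataFrame. The sample data and SparkSession setup below are illustrative:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
your_dataframe = spark.createDataFrame([(1, 120.0), (2, -5.0)], ["id", "amount"])

result = clean_data_func(your_dataframe)
result.show()
# Only id=1 survives the amount > 0 filter, with status="clean" added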

Features

  • Simple API: Two main entry points, register_transform() and load_transform()
  • Direct Registration: Register transforms directly from Python code
  • File-based Registration: Load and register transforms from Python files
  • Automatic Versioning: Integer-based versioning with automatic incrementing (see the sketch after this list)
  • MLflow Integration: Built on MLflow's model registry
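
For example, automatic versioning means that re-registering under an existing name creates the next integer version. A minimal sketch, assuming the clean_data function from the Quick Start:

from pyspark_transform_registry import register_transform, load_transform

# First registration creates version 1
register_transform(func=clean_data, name="analytics.etl.clean_data")

# Registering the same name again automatically creates version 2
register_transform(func=clean_data, name="analytics.etl.clean_data")

# Pin the version you want when loading
clean_data_v2 = load_transform("analytics.etl.clean_data", version=2)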

Usage Examples

Direct Transform Registration

from pyspark_transform_registry import register_transform
from pyspark.sql import DataFrame
import pyspark.sql.functions as F

def risk_scorer(df: DataFrame, threshold: float = 100.0) -> DataFrame:
    """Calculate risk scores based on amount."""
    return df.withColumn(
        "risk_score",
        F.when(F.col("amount") > threshold, "high").otherwise("low")
    )

# Register with metadata
register_transform(
    func=risk_scorer,
    name="finance.scoring.risk_scorer",
    description="Risk scoring transformation",
    extra_pip_requirements=["numpy>=1.20.0"],
    tags={"team": "finance", "category": "scoring"}
)

File-based Registration

# transforms/data_processors.py
from pyspark.sql import DataFrame
import pyspark.sql.functions as F

def feature_engineer(df: DataFrame) -> DataFrame:
    """Create engineered features."""
    return df.withColumn("feature_1", F.col("amount") * 2)

# In a separate script, register the function from its source file
from pyspark_transform_registry import register_transform

register_transform(
    file_path="transforms/data_processors.py",
    function_name="feature_engineer",
    name="ml.features.feature_engineer",
    description="Feature engineering pipeline"
)

Source Code Inspection

# Load a transform
transform = load_transform("retail.processing.process_orders", version=1)

# Get the original source code
source_code = transform.get_source()
print(source_code)  # Shows the original function definition

# Get the original function for inspection
original_func = transform.get_original_function()
print(f"Function name: {original_func.__name__}")
print(f"Docstring: {original_func.__doc__}")

Managing Transform Dependencies

Install dependencies for registered transforms automatically:

from pyspark_transform_registry import install_transform_requirements

# Install all dependencies for a transform
install_transform_requirements("transforms:/analytics.etl.clean_data/1")

# Then load the transform (dependencies are now available)
transform = load_transform("analytics.etl.clean_data", version=1)

You can also exclude certain packages (useful when running in environments like Databricks where some packages are pre-installed):

# Install dependencies but exclude packages already available in the environment
install_transform_requirements(
    "transforms:/analytics.etl.clean_data/1",
    exclude_packages=["pyspark", "mlflow", "pandas"]
)

Requirements

  • Python 3.9+
  • PySpark 3.0+
  • MLflow 3.0+

Development

# Install development dependencies
make install

# Run tests
make test

# Run linting and formatting
make check

License

MIT License
