
Snowpark column and table statistics collection

Project description

snowpark-checkpoints-collectors


This package is in Public Preview.

The snowpark-checkpoints-collectors package offers a function for extracting information from PySpark DataFrames. That data can then be used to validate the converted Snowpark DataFrames and confirm that behavioral equivalence has been achieved.


Install the library

pip install snowpark-checkpoints-collectors

This package requires PySpark to be installed in the same environment. If you do not have it, you can install PySpark alongside Snowpark Checkpoints by running the following command:

pip install "snowpark-checkpoints-collectors[pyspark]"

Features

  • Schema inference collected data mode (Schema): This is the default mode. It leverages Pandera schema inference to obtain the metadata and checks that will be evaluated for the specified DataFrame, and it also collects custom data from the DataFrame's columns based on their PySpark types.
  • DataFrame collected data mode (DataFrame): This mode collects the data of the PySpark DataFrame. The mechanism saves all of the data from the given DataFrame in parquet format, then uses the default user Snowflake connection to upload the parquet files to a Snowflake temporary stage and create a table based on the information in the stage. Both the file and the table are named after the checkpoint.

Functionalities

Collect DataFrame Checkpoint

from pyspark.sql import DataFrame as SparkDataFrame
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode
from typing import Optional

# Signature of the function
def collect_dataframe_checkpoint(
    df: SparkDataFrame,
    checkpoint_name: str,
    sample: Optional[float] = None,
    mode: Optional[CheckpointMode] = None,
    output_path: Optional[str] = None,
) -> None:
    ...
  • df: The input Spark DataFrame to collect.
  • checkpoint_name: The name of the checkpoint schema file or DataFrame.
  • sample: The fraction of the DataFrame to sample for schema inference; defaults to 1.0.
  • mode: The mode in which to execute the collection (Schema or DataFrame); defaults to CheckpointMode.SCHEMA.
  • output_path: The output path where the checkpoint is saved; defaults to the current working directory. A sketch using these parameters follows this list.
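
For instance, a hedged sketch of a call that samples roughly 10% of the rows and writes the checkpoint to a custom directory (the checkpoint name and output path are illustrative):

from pyspark.sql import SparkSession
from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode

spark_session = SparkSession.builder.getOrCreate()
df = spark_session.range(1_000)  # toy DataFrame for illustration

collect_dataframe_checkpoint(
    df,
    checkpoint_name="sampled_checkpoint",  # hypothetical checkpoint name
    sample=0.1,  # infer the schema from a 10% sample of the rows
    mode=CheckpointMode.SCHEMA,
    output_path="./checkpoints",  # hypothetical output directory
)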

Skip DataFrame Checkpoint Collection

from pyspark.sql import DataFrame as SparkDataFrame
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode
from typing import Optional

# Signature of the function
def xcollect_dataframe_checkpoint(
    df: SparkDataFrame,
    checkpoint_name: str,
    sample: Optional[float] = None,
    mode: Optional[CheckpointMode] = None,
    output_path: Optional[str] = None,
) -> None:
    ...

This function skips the collection for the given checkpoint; its signature is the same as that of collect_dataframe_checkpoint.
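
As a hedged example (assuming xcollect_dataframe_checkpoint is exported alongside collect_dataframe_checkpoint, as the matching signature above suggests), a checkpoint can be skipped by swapping the function name while leaving the call site intact:

from pyspark.sql import SparkSession
from snowflake.snowpark_checkpoints_collector import xcollect_dataframe_checkpoint

spark_session = SparkSession.builder.getOrCreate()
df = spark_session.createDataFrame(
    [("apple", 21)], schema="fruit string, age integer"
)

# Same call shape as collect_dataframe_checkpoint, but collection is
# skipped for this checkpoint.
xcollect_dataframe_checkpoint(
    df,
    checkpoint_name="skipped_checkpoint",  # hypothetical checkpoint name
)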

Usage Example

Schema mode

from pyspark.sql import SparkSession
from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode

spark_session = SparkSession.builder.getOrCreate()
sample_size = 1.0

pyspark_df = spark_session.createDataFrame(
    [("apple", 21), ("lemon", 34), ("banana", 50)], schema="fruit string, age integer"
)

collect_dataframe_checkpoint(
    pyspark_df,
    checkpoint_name="collect_checkpoint_mode_1",
    sample=sample_size,
    mode=CheckpointMode.SCHEMA,
)

DataFrame mode

from pyspark.sql import SparkSession
from snowflake.snowpark_checkpoints_collector import collect_dataframe_checkpoint
from snowflake.snowpark_checkpoints_collector.collection_common import CheckpointMode
from pyspark.sql.types import StructType, StructField, ByteType, StringType, IntegerType 

spark_schema = StructType(
    [
        StructField("BYTE", ByteType(), True),
        StructField("STRING", StringType(), True),
        StructField("INTEGER", IntegerType(), True)
    ]
)

data = [(1, "apple", 21), (2, "lemon", 34), (3, "banana", 50)]

spark_session = SparkSession.builder.getOrCreate()
pyspark_df = spark_session.createDataFrame(data, schema=spark_schema).orderBy(
    "INTEGER"
)

collect_dataframe_checkpoint(
    pyspark_df,
    checkpoint_name="collect_checkpoint_mode_2",
    mode=CheckpointMode.DATAFRAME,
)
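
Because DataFrame mode uploads the parquet files through the default user Snowflake connection and creates a table named after the checkpoint, one way to spot-check the result is to read that table back with Snowpark. A minimal sketch, assuming a default Snowflake connection is configured for the environment:

from snowflake.snowpark import Session

# Assumes a default Snowflake connection is available (for example, via
# a configured connections.toml).
session = Session.builder.getOrCreate()

# The collector names the table after the checkpoint, so the uploaded
# data should be visible here.
session.table("collect_checkpoint_mode_2").show()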

Download files

Source distribution

snowpark_checkpoints_collectors-0.3.3.tar.gz (56.0 kB)

Built distribution

snowpark_checkpoints_collectors-0.3.3-py3-none-any.whl (66.6 kB)

File hashes

Hashes for snowpark_checkpoints_collectors-0.3.3.tar.gz:

Algorithm    Hash digest
SHA256       82fb7ae4c20ba50e606e21abe92896361f1048a5ae23d6e6247dc81993c44ee5
MD5          b5ba903cb8d53087381d23e847c13d7c
BLAKE2b-256  4cbab9f11f69ba41fbfd57682176481c041c55c43c7b3a7218f22661c73234ec

Hashes for snowpark_checkpoints_collectors-0.3.3-py3-none-any.whl:

Algorithm    Hash digest
SHA256       0209f801d993272903d27bf7adff5f77962b818a41fd379ff45c53cdf660fd4a
MD5          0f35e195767cc36663c2343bd496e7bf
BLAKE2b-256  b62757bdf8c1c6b285a7bcd0fb481ee014eeea3cc39998be1639e35ce7aaa9cb
