
AI Helpers - PySpark utils

pyspark-utils is a Python module that provides a collection of utilities to simplify and enhance the use of PySpark. These utilities are designed to make working with PySpark more efficient and to reduce boilerplate code.

Table of Contents

  • Installation
  • Getting Started
  • Utilities & Examples
  • Contributing

Installation

You can install the pyspark-utils module via pip:

pip install ai-helpers-pyspark-utils

Getting Started

First, import the module in your Python script:

import pyspark_utils as psu

Now you can use the utilities provided by pyspark-utils.

Utilities & Examples

  • get_spark_session: Retrieve an appropriate SparkSession.

    Create a spark dataframe:

    >>> import pyspark_utils as psu
    
    >>> spark = psu.get_spark_session("example")
    >>> sdf = spark.createDataFrame(
          [
              [None, "a", 1, 1.0],
              ["b", "b", 1, 2.0],
              ["b", "b", None, 3.0],
              ["c", "c", None, 2.0],
              ["c", "c", 3, 4.0],
              ["d", None, 4, 2.0],
              ["d", None, 5, 6.0],
          ],
          ["col0", "col1", "col2", "col3"],
      )
    >>> sdf.show()
    +----+----+----+----+
    |col0|col1|col2|col3|
    +----+----+----+----+
    |NULL|   a|   1| 1.0|
    |   b|   b|   1| 2.0|
    |   b|   b|NULL| 3.0|
    |   c|   c|NULL| 2.0|
    |   c|   c|   3| 4.0|
    |   d|NULL|   4| 2.0|
    |   d|NULL|   5| 6.0|
    +----+----+----+----+ 
    
  • with_columns: Apply multiple 'withColumn' transformations to a dataframe in a single call.

    >>> import pyspark_utils as psu
    >>> import pyspark.sql.functions as F
    
    >>> col4 = F.col("col3") + 2
    >>> col5 = F.lit(True)
    
    >>> transformed_sdf = psu.with_columns(
          sdf,
          col_func_mapping={"col4": col4, "col5": col5},
      )
    >>> transformed_sdf.show()
    +----+----+----+----+----+----+
    |col0|col1|col2|col3|col4|col5|
    +----+----+----+----+----+----+
    |NULL|   a|   1| 1.0| 3.0|true|
    |   b|   b|   1| 2.0| 4.0|true|
    |   b|   b|NULL| 3.0| 5.0|true|
    |   c|   c|NULL| 2.0| 4.0|true|
    |   c|   c|   3| 4.0| 6.0|true|
    |   d|NULL|   4| 2.0| 4.0|true|
    |   d|NULL|   5| 6.0| 8.0|true|
    +----+----+----+----+----+----+
    
  • keep_first_rows: Keep the first row of each group defined by partition_cols and order_cols.

    >>> transformed_sdf = psu.utils.keep_first_rows(sdf, [F.col("col0")], [F.col("col3")])
    >>> transformed_sdf.show()
    +----+----+----+----+
    |col0|col1|col2|col3|
    +----+----+----+----+
    |NULL|   a|   1| 1.0|
    |   b|   b|   1| 2.0|
    |   c|   c|NULL| 2.0|
    |   d|NULL|   4| 2.0|
    +----+----+----+----+
    
  • assert_cols_in_df: Asserts that all specified columns are present in the specified dataframe.

  • assert_df_close: Asserts that two dataframes are (almost) equal, even if the order of the columns is different.
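The two assertion helpers can be sketched in plain Python, substituting simple lists and dicts for Spark dataframes. The function bodies below are assumptions based on the descriptions above, not the library's actual implementation:

```python
import math

def assert_cols_in_df(columns, df_columns):
    """Assert that every requested column name appears in df_columns
    (a stand-in for a Spark dataframe's .columns attribute)."""
    missing = [c for c in columns if c not in df_columns]
    assert not missing, f"Missing columns: {missing}"

def assert_rows_close(row_a, row_b, rel_tol=1e-9):
    """Assert that two rows (dicts keyed by column name) are almost
    equal, regardless of the order of the columns."""
    assert set(row_a) == set(row_b), "column sets differ"
    for col in row_a:
        a, b = row_a[col], row_b[col]
        if isinstance(a, float) and isinstance(b, float):
            # Floats are compared approximately, hence "(almost) equal".
            assert math.isclose(a, b, rel_tol=rel_tol), f"{col}: {a} != {b}"
        else:
            assert a == b, f"{col}: {a} != {b}"

# Column presence check passes when all names are found.
assert_cols_in_df(["col0", "col3"], ["col0", "col1", "col2", "col3"])

# Near-equality check passes even with columns in a different order.
assert_rows_close({"col3": 1.0, "col0": "a"}, {"col0": "a", "col3": 1.0 + 1e-12})
```

The real helpers operate on Spark dataframes, so they would compare `sdf.columns` and collected rows rather than plain dicts; this sketch only illustrates the intended semantics.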

Contributing

We welcome contributions to pyspark-utils. To contribute, please follow these steps:

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature-branch).
  3. Make your changes.
  4. Commit your changes (git commit -am 'Add some feature').
  5. Push to the branch (git push origin feature-branch).
  6. Create a new Pull Request.

Please ensure your code follows the project's coding standards and includes appropriate tests.
