



pytest plugin to run the tests with support of pyspark (Apache Spark).

This plugin allows you to specify the SPARK_HOME directory in pytest.ini and thus makes “pyspark” importable in the tests run by pytest.

You can also define “spark_options” in pytest.ini to customize pyspark, including the “spark.jars.packages” option, which lets you load external libraries (e.g. “com.databricks:spark-xml”).

pytest-spark provides the session-scoped fixtures spark_context and spark_session, which can be used in your tests.

Install

$ pip install pytest-spark

Usage

Set Spark location

To run tests, pytest-spark needs to know the spark_home location. You can define it using one of the following methods:

  1. Specify the command line option “--spark_home”:

    $ pytest --spark_home=/opt/spark
    
  2. Add a “spark_home” value to pytest.ini in your project directory:

    [pytest]
    spark_home = /opt/spark
    
  3. Set the “SPARK_HOME” environment variable.
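
    For example, using the same illustrative path as above:

    $ export SPARK_HOME=/opt/spark
    $ pytest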

pytest-spark will try to import pyspark from the provided location.

Note

“spark_home” will be read in the order listed above, i.e. you can override the pytest.ini value with the command line option.

Customize spark_options

Just define “spark_options” in your pytest.ini, e.g.:

[pytest]
spark_home = /opt/spark
spark_options =
    spark.app.name: my-pytest-spark-tests
    spark.executor.instances: 1
    spark.jars.packages: com.databricks:spark-xml_2.12:0.5.0
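
With a configuration like the one above, the created SparkSession should reflect these options. As an illustrative check (assuming the pytest.ini shown here):

def test_spark_options_applied(spark_session):
    # the app name set via spark_options should be visible in the runtime config
    assert spark_session.conf.get("spark.app.name") == "my-pytest-spark-tests"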

Using the spark_context fixture

Use the spark_context fixture in your tests as a regular pytest fixture. The SparkContext instance will be created once and reused for the whole test session.

Example:

def test_my_case(spark_context):
    test_rdd = spark_context.parallelize([1, 2, 3, 4])
    # ...
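
A more complete test (purely illustrative) might look like this:

def test_rdd_word_lengths(spark_context):
    rdd = spark_context.parallelize(["pytest", "spark"])
    # map each word to its length and collect the results on the driver
    assert rdd.map(len).collect() == [6, 5]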

Using the spark_session fixture (Spark 2.0 and above)

Use the spark_session fixture in your tests as a regular pytest fixture. A SparkSession instance with Hive support enabled will be created once and reused for the whole test session.

Example:

def test_spark_session_dataframe(spark_session):
    test_df = spark_session.createDataFrame([[1,3],[2,4]], "a: int, b: int")
    # ...
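
A fuller sketch (illustrative only) showing ordinary DataFrame operations and assertions:

def test_dataframe_filter(spark_session):
    df = spark_session.createDataFrame([[1, 3], [2, 4]], "a: int, b: int")
    filtered = df.filter(df.a > 1)  # keep rows where column "a" is greater than 1
    assert filtered.count() == 1
    assert filtered.collect()[0].b == 4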

Development

Tests

Run tests locally:

$ docker-compose up --build

