
Easily mock a datalake so you can test your pyspark data application

Project description

pyspark-data-mocker

pyspark-data-mocker is a testing tool that eases the burden of setting up the datalake you need, so you can easily test the behavior of your data application. It also configures the Spark session to optimize it for testing purposes.

Install

pip install pyspark-data-mocker

Usage

pyspark-data-mocker scans the directory you provide, looking for files that can be interpreted as tables, and loads them into the datalake. The datalake will contain a database for each folder inside the root directory. For example, let's take a look at the basic_datalake

$ tree tests/data/basic_datalake -n --charset=ascii  # byexample: +rm=~ +skip
tests/data/basic_datalake
|-- grades
|   `-- exams.csv
`-- school
    |-- courses.csv
    `-- students.csv
~
2 directories, 3 files

This file hierarchy is preserved when the datalake is loaded: each sub-folder becomes a Spark database, and each file is loaded as a table named after the file.

How can we load them using pyspark-data-mocker? Really simple!

>>> from pyspark_data_mocker import DataLakeBuilder
>>> builder = DataLakeBuilder().load_from_dir("./tests/data/basic_datalake")  # byexample: +timeout=20 +pass

And that's it! You now have, in that execution context, a datalake with the structure defined in the basic_datalake folder. Let's take a closer look by running some queries.

>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> spark.sql("SHOW DATABASES").show()
+---------+
|namespace|
+---------+
|  default|
|   grades|
|   school|
+---------+

We have the default database (which comes for free when instantiating Spark), plus one database for each of the two folders inside tests/data/basic_datalake: school and grades.

>>> spark.sql("SHOW TABLES IN school").show()
+---------+---------+-----------+
|namespace|tableName|isTemporary|
+---------+---------+-----------+
|   school|  courses|      false|
|   school| students|      false|
+---------+---------+-----------+

>>> spark.sql("SELECT * FROM school.courses").show()
+---+------------+
| id| course_name|
+---+------------+
|  1|Algorithms 1|
|  2|Algorithms 2|
|  3|  Calculus 1|
+---+------------+


>>> spark.table("school.students").show()
+---+----------+---------+--------------------+------+----------+
| id|first_name|last_name|               email|gender|birth_date|
+---+----------+---------+--------------------+------+----------+
|  1|  Shirleen|  Dunford|sdunford0@amazona...|Female|1978-08-01|
|  2|      Niko|  Puckrin|npuckrin1@shinyst...|  Male|2000-11-28|
|  3|    Sergei|   Barukh|sbarukh2@bizjourn...|  Male|1992-01-20|
|  4|       Sal|  Maidens|smaidens3@senate.gov|  Male|2003-12-14|
|  5|    Cooper|MacGuffie| cmacguffie4@ibm.com|  Male|2000-03-07|
+---+----------+---------+--------------------+------+----------+

Note how each table is already filled with the data from its CSV file! The tool supports several file formats: csv, parquet and json. It infers which format to use from the file extension (see the sketch after the grades tables below).

>>> spark.sql("SHOW TABLES IN grades").show()
+---------+---------+-----------+
|namespace|tableName|isTemporary|
+---------+---------+-----------+
|   grades|    exams|      false|
+---------+---------+-----------+

>>> spark.table("grades.exams").show()
+---+----------+---------+----------+----+
| id|student_id|course_id|      date|note|
+---+----------+---------+----------+----+
|  1|         1|        1|2022-05-01|   9|
|  2|         2|        1|2022-05-08|   7|
|  3|         3|        1|2022-06-17|   4|
|  4|         1|        3|2023-05-12|   9|
|  5|         2|        3|2023-05-12|  10|
|  6|         3|        3|2022-12-07|   7|
|  7|         4|        3|2022-12-07|   4|
|  8|         5|        3|2022-12-07|   2|
|  9|         1|        2|2023-05-01|   5|
| 10|         2|        2|2023-05-07|   8|
+---+----------+---------+----------+----+
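
The same applies to the other supported formats. Below is a minimal, self-contained sketch that loads a JSON-backed table; the temporary directory, the sales database and the orders table are made up for the example, and it assumes the file contains newline-delimited JSON records (one object per line).

>>> import json, tempfile
>>> from pathlib import Path
>>> root = Path(tempfile.mkdtemp())     # hypothetical datalake root
>>> sales_dir = root / "sales"          # sub-folder -> database "sales"
>>> sales_dir.mkdir()
>>> records = [{"id": 1, "amount": 10.5}, {"id": 2, "amount": 7.0}]
>>> _ = (sales_dir / "orders.json").write_text("\n".join(json.dumps(r) for r in records))
>>> json_builder = DataLakeBuilder().load_from_dir(str(root))  # byexample: +timeout=20 +pass
>>> spark.table("sales.orders").count()  # byexample: +skip
2
>>> json_builder.cleanup()  # drop this example's data before moving on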

Cleanup

You can easily clean up the datalake by using the cleanup function.

>>> builder.cleanup()
>>> spark.sql("SHOW DATABASES").show()
+---------+
|namespace|
+---------+
|  default|
+---------+
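
In a test suite, the typical pattern is to build the datalake before each test and clean it up afterwards. Here is a minimal pytest sketch using only the calls shown above; the fixture and test names are made up for the example, and the assertions rely on the basic_datalake data shown earlier.

import pytest
from pyspark.sql import SparkSession

from pyspark_data_mocker import DataLakeBuilder


@pytest.fixture
def datalake():
    # Build the mocked datalake for the test and tear it down afterwards
    builder = DataLakeBuilder().load_from_dir("./tests/data/basic_datalake")
    yield builder
    builder.cleanup()


def test_every_course_has_a_name(datalake):
    spark = SparkSession.builder.getOrCreate()
    courses = spark.table("school.courses")
    assert courses.count() == 3
    assert courses.filter("course_name IS NULL").count() == 0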

Documentation

You can check the full documentation to learn about all the features available in pyspark-data-mocker here

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pyspark_data_mocker-3.0.0.tar.gz (23.0 kB)

Uploaded Source

Built Distribution

pyspark_data_mocker-3.0.0-py3-none-any.whl (24.1 kB)

Uploaded Python 3

File details

Details for the file pyspark_data_mocker-3.0.0.tar.gz.

File metadata

  • Download URL: pyspark_data_mocker-3.0.0.tar.gz
  • Upload date:
  • Size: 23.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.5.1 CPython/3.8.18 Linux/6.5.0-1017-azure

File hashes

Hashes for pyspark_data_mocker-3.0.0.tar.gz
  • SHA256: cdf9c027d626fe2214f440406625b93362b790b71d166f6b2099b43d0c00dcfa
  • MD5: da72117e68f5f9b5989e9a4e8d0150a9
  • BLAKE2b-256: 8dcbc68d308763855f22e08e1dc71dd97a0ce0bdecd07a834e462e75151bf8e8

See more details on using hashes here.
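
If you want to check a downloaded archive against the SHA256 digest above, here is a minimal sketch using only the Python standard library; the local file path is an assumption, so point it at wherever you saved the file.

import hashlib

# Hypothetical local path to the downloaded sdist; adjust to where the file was saved
path = "pyspark_data_mocker-3.0.0.tar.gz"
expected = "cdf9c027d626fe2214f440406625b93362b790b71d166f6b2099b43d0c00dcfa"

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, f"SHA256 mismatch: got {digest}"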

File details

Details for the file pyspark_data_mocker-3.0.0-py3-none-any.whl.

File metadata

File hashes

Hashes for pyspark_data_mocker-3.0.0-py3-none-any.whl
  • SHA256: 61ec9549450453443d8d4c88830acb2a26f140888f1255506faa3df016aaaa2d
  • MD5: 6520296499c7f79c85529d15e55f6975
  • BLAKE2b-256: 325315cb57de5238d227aa591dcc15f10939272dca6e8d9ca92111d62b18998f

See more details on using hashes here.
