Google Dataproc templates written in Python
Dataproc Templates (Python - PySpark)
- AzureBlobStorageToBigQuery
- BigQueryToGCS
- CassandraToBigquery
- CassandraToGCS
- GCSToBigQuery
- GCSToBigTable
- GCSToGCS
- GCSToJDBC
- GCSToMongo
- HbaseToGCS
- HiveToBigQuery
- HiveToGCS
- JDBCToBigQuery
- JDBCToGCS
- JDBCToJDBC
- KafkaToGCS
- KafkaToBigQuery
- MongoToGCS
- PubSubLiteToBigtable
- PubSubLiteToGCS
- RedshiftToGCS
- S3ToBigQuery
- SnowflakeToGCS
- TextToBigQuery
Dataproc Templates (Python - PySpark) submit jobs to Dataproc Serverless using gcloud dataproc batches submit pyspark.
Run using the PyPI package
This README describes how to submit Dataproc Serverless template jobs.
Currently, 3 options are described:
- Using bin/start.sh
- Using the gcloud CLI
- Using Vertex AI
Those 3 options require you to clone this repo before running the templates.
The Dataproc Templates PyPI package is a 4th option that lets you run templates directly from a PySpark environment (Dataproc or local/another).
Example:
!pip3 install --user google-dataproc-templates==0.0.3

from dataproc_templates.bigquery.bigquery_to_gcs import BigQueryToGCSTemplate
from pyspark.sql import SparkSession

# Template-specific parameters
args = dict()
args["bigquery.gcs.input.table"] = "<bq_dataset>.<bq_table>"
args["bigquery.gcs.input.location"] = "<location>"
args["bigquery.gcs.output.format"] = "<format>"
args["bigquery.gcs.output.mode"] = "<mode>"
args["bigquery.gcs.output.location"] = "gs://<bucket_name/path>"

# Create (or reuse) a Spark session
spark = SparkSession.builder \
    .appName("BIGQUERYTOGCS") \
    .enableHiveSupport() \
    .getOrCreate()

# Instantiate and run the template
template = BigQueryToGCSTemplate()
template.run(spark, args)
Pro Tip: Start a Dataproc Serverless Spark session from a Vertex AI managed notebook, so that your job runs on Dataproc Serverless instead of your local PySpark environment.
While this provides an easy way to get started, remember that bin/start.sh already provides an easy way to, for example, specify required .jar dependencies. When using the PyPI package, you need to configure your PySpark session according to the requirements of your specific template. For example, you may need to set the spark.driver.extraClassPath configuration:
spark = SparkSession.builder \
    ... \
    .config('spark.driver.extraClassPath', '<template_required_dependency>.jar') \
    ... \
    .getOrCreate()
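As an alternative sketch (assuming the BigQueryToGCS template and the open-source Spark BigQuery connector; the Maven coordinates and version below are illustrative, not pinned by this package), the dependency can also be resolved at session startup via spark.jars.packages:

from pyspark.sql import SparkSession

# Hypothetical configuration: pull the Spark BigQuery connector from Maven via
# spark.jars.packages instead of pointing spark.driver.extraClassPath at a local jar.
# The artifact version is an assumption; match it to your Spark/Scala build and to
# the template's README.
spark = SparkSession.builder \
    .appName("BIGQUERYTOGCS") \
    .config("spark.jars.packages",
            "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.36.1") \
    .enableHiveSupport() \
    .getOrCreate()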
Setting up the local environment
It is recommended to use a virtual environment when setting up the local environment. This setup is not required for submitting templates, only for running and developing locally.
# Create a virtual environment, activate it and install requirements
mkdir venv
python -m venv venv/
source venv/bin/activate
pip install -r requirements.txt
Running unit tests
Unit tests are developed using pytest.
To run all unit tests, simply run pytest:
pytest
To generate a coverage report, run the tests using coverage:

coverage run \
    --source=dataproc_templates \
    --module pytest \
    --verbose \
    test

coverage report --show-missing
Submitting templates to Dataproc Serverless
A shell script is provided to:
- Build the python package
- Set Dataproc parameters based on environment variables
- Submit the desired template to Dataproc with the provided template parameters
When submitting, there are 3 types of properties/parameters for the user to provide.
- Spark properties: refer to the Dataproc Serverless documentation for the available Spark properties.
- Each template's specific parameters: refer to each template's README.
- Common arguments: --template and --log_level
  - The --log_level parameter is optional; it defaults to INFO.
  - Possible choices are the Spark log levels: ["ALL", "DEBUG", "ERROR", "FATAL", "INFO", "OFF", "TRACE", "WARN"].
bin/start.sh usage:
# Set required environment variables
export GCP_PROJECT=<project_id>
export REGION=<region>
export GCS_STAGING_LOCATION=<gs://path>
# Set optional environment variables
export SUBNET=<subnet>
export JARS="gs://additional/dependency.jar"
export HISTORY_SERVER_CLUSTER=projects/{projectId}/regions/{regionId}/clusters/{clusterId}
export METASTORE_SERVICE=projects/{projectId}/locations/{regionId}/services/{serviceId}
# Submit to Dataproc passing template parameters
./bin/start.sh [--properties=<spark.something.key>=<value>] \
-- --template=TEMPLATENAME \
--log_level=INFO \
--my.property="<value>" \
--my.other.property="<value>"
(etc...)
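With the required environment variables above set to your project, region, and staging bucket, a BIGQUERYTOGCS submission might look like the following. This is a hypothetical sketch: the table, bucket, format, and mode values are placeholders, and the parameter names mirror the PyPI example earlier in this README.

# Hypothetical example: submit the BigQueryToGCS template with placeholder values
./bin/start.sh \
-- --template=BIGQUERYTOGCS \
--log_level=INFO \
--bigquery.gcs.input.table="my_dataset.my_table" \
--bigquery.gcs.output.format="parquet" \
--bigquery.gcs.output.mode="overwrite" \
--bigquery.gcs.output.location="gs://my-output-bucket/output"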
gcloud CLI usage:
It is also possible to submit jobs using the gcloud CLI directly. That can be achieved by:
- Building the dataproc_templates package into an .egg:

PACKAGE_EGG_FILE=dist/dataproc_templates_distribution.egg
python setup.py bdist_egg --output=${PACKAGE_EGG_FILE}

- Submitting the job:
  - The main.py file should be the main Python script.
  - The .egg file for the package must be bundled using the --py-files flag.
gcloud dataproc batches submit pyspark \
--region=<region> \
--project=<project_id> \
--jars="<required_jar_dependencies>" \
--deps-bucket=<gs://path> \
--subnet=<subnet> \
--py-files=${PACKAGE_EGG_FILE} \
[--properties=<spark.something.key>=<value>] \
main.py \
-- --template=TEMPLATENAME \
--log_level=INFO \
--<my.property>="<value>" \
--<my.other.property>="<value>"
(etc...)
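Putting the pieces together, a hypothetical BIGQUERYTOGCS submission could look like the following. The resource names are placeholders, PACKAGE_EGG_FILE is assumed to be set as shown above, and the jar shown is the publicly hosted Spark BigQuery connector; confirm the actual jar requirements in the template's README.

# Hypothetical example: submit the BigQueryToGCS template with placeholder values
gcloud dataproc batches submit pyspark \
--region=us-central1 \
--project=my-project \
--jars="gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar" \
--deps-bucket=gs://my-staging-bucket \
--py-files=${PACKAGE_EGG_FILE} \
main.py \
-- --template=BIGQUERYTOGCS \
--log_level=INFO \
--bigquery.gcs.input.table="my_dataset.my_table" \
--bigquery.gcs.output.format="parquet" \
--bigquery.gcs.output.mode="overwrite" \
--bigquery.gcs.output.location="gs://my-output-bucket/output"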
Vertex AI usage:
Follow the Dataproc Templates (Jupyter Notebooks) README to submit Dataproc Templates from a Vertex AI notebook.
File details
Details for the file google-dataproc-templates-0.3.0b0.tar.gz.
File metadata
- Download URL: google-dataproc-templates-0.3.0b0.tar.gz
- Upload date:
- Size: 50.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7b0123ea9d97aa2ce9d8a5eecb8d8821af98697a1292c07fc6ae42a1639d6786
MD5 | 17b6556a06f736aaf35617cb8470ef83
BLAKE2b-256 | 70ca0a94332ef109f20efd572a94216f8ea19286e34400b515b21c81e45de059
File details
Details for the file google_dataproc_templates-0.3.0b0-py2.py3-none-any.whl.
File metadata
- Download URL: google_dataproc_templates-0.3.0b0-py2.py3-none-any.whl
- Upload date:
- Size: 77.0 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | b1c18631223bb03b0f79a05c6dae03174b31e22ec658735b4b962b6aedcd0873
MD5 | e1bc0bac4d6d3ab88716e1ea7f17cc82
BLAKE2b-256 | 0d0cf244c66f215cc22f01b2a003d3b22014ce5d917134e35087f7ff646d3b7d