# Merlin Batch Predictor

Merlin Batch Predictor is a PySpark application for running batch prediction jobs in the Merlin system.

## Usage

The application accepts a YAML file for configuring the source, model, and sink of the prediction job. The schema of the configuration file is described by the proto file. An example of the config file is as follows:
```yaml
kind: PredictionJob
version: v1
name: integration-test
bigquerySource:
  table: "project.dataset.table_iris"
  features:
    - sepal_length
    - sepal_width
    - petal_length
    - petal_width
model:
  type: PYFUNC_V2
  uri: gs://bucket-name/e2e/artifacts/model
  result:
    type: DOUBLE
bigquerySink:
  table: "project.dataset.table_iris_result"
  result_column: "prediction"
  save_mode: OVERWRITE
  options:
    project: "project"
    temporaryGcsBucket: "bucket-name"
```
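Once parsed (e.g. with `yaml.safe_load`), the spec is a nested mapping. A minimal sanity check over the parsed structure might look like the sketch below; the validation rules here are illustrative only, not Merlin's actual proto-based schema check.

```python
# Illustrative sanity check for a parsed prediction job spec.
# SPEC is the dict that yaml.safe_load would return for the example above;
# validate_spec is a sketch, not Merlin's actual proto-based validation.
SPEC = {
    "kind": "PredictionJob",
    "version": "v1",
    "name": "integration-test",
    "bigquerySource": {
        "table": "project.dataset.table_iris",
        "features": ["sepal_length", "sepal_width", "petal_length", "petal_width"],
    },
    "model": {
        "type": "PYFUNC_V2",
        "uri": "gs://bucket-name/e2e/artifacts/model",
        "result": {"type": "DOUBLE"},
    },
    "bigquerySink": {
        "table": "project.dataset.table_iris_result",
        "result_column": "prediction",
        "save_mode": "OVERWRITE",
        "options": {"project": "project", "temporaryGcsBucket": "bucket-name"},
    },
}

def validate_spec(spec):
    """Verify that the top-level sections of a prediction job spec exist."""
    for key in ("kind", "version", "name", "bigquerySource", "model", "bigquerySink"):
        if key not in spec:
            raise ValueError(f"missing required section: {key}")
    if spec["kind"] != "PredictionJob":
        raise ValueError(f"unexpected kind: {spec['kind']!r}")
    return spec

validate_spec(SPEC)
print(SPEC["model"]["type"])  # PYFUNC_V2
```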
The above prediction job specification will read data from the `project.dataset.table_iris` BigQuery table, run prediction using a `PYFUNC_V2` model located at `gs://bucket-name/e2e/artifacts/model` in GCS, and write the result to the `project.dataset.table_iris_result` BigQuery table.
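Conceptually, the job is a read-predict-write pipeline. The sketch below illustrates that flow with in-memory lists standing in for BigQuery tables and a stub callable standing in for the `PYFUNC_V2` model loaded from GCS; none of these names are Merlin's actual internals.

```python
# Illustrative read -> predict -> write flow of a batch prediction job.
# Lists of dicts stand in for BigQuery tables; a plain callable stands in
# for the PYFUNC_V2 model that the real job loads from GCS.
def run_batch_prediction(source_rows, model, result_column):
    """Apply the model to every source row and attach the result column."""
    results = []
    for row in source_rows:
        prediction = model(row)  # model.predict(...) in the real job
        out = dict(row)
        out[result_column] = prediction
        results.append(out)
    return results

# Stub model: sum of the iris features, in place of a real classifier.
stub_model = lambda row: float(sum(row.values()))

source = [
    {"sepal_length": 5.1, "sepal_width": 3.5, "petal_length": 1.4, "petal_width": 0.2},
]
sink = run_batch_prediction(source, stub_model, "prediction")
print(sink[0]["prediction"])  # ~10.2
```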
To start the application locally you need to:

- Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable and point it to a service account key file which has the following privileges:
  - Storage Writer for the `temporaryGcsBucket`
  - Storage Object Writer for the `temporaryGcsBucket`
  - BigQuery Job User
  - BigQuery Read Session User
  - BigQuery Data Viewer for the source dataset
  - BigQuery Data Editor for the destination dataset
Then you can invoke:

```bash
python main.py --job-name <job-name> --spec-path <path-to-spec-yaml> --local
```
On macOS you need to set `OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES`:

```bash
OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES python main.py --job-name <job-name> --spec-path <path-to-spec-yaml> --local
```

For example:

```bash
OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES python main.py --job-name iris-prediction --spec-path sample/sample_1.yaml --local
```
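The flags in the invocation above could be parsed as in the following sketch; the flag names come from the command line shown, but the parser itself is hypothetical, not the actual `main.py` implementation.

```python
# Hypothetical sketch of parsing main.py's command-line flags with argparse.
# Only the flag names (--job-name, --spec-path, --local) come from the
# documented invocation; everything else is illustrative.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Merlin batch prediction job")
    parser.add_argument("--job-name", required=True,
                        help="name of the prediction job")
    parser.add_argument("--spec-path", required=True,
                        help="path to the prediction job spec YAML file")
    parser.add_argument("--local", action="store_true",
                        help="run Spark locally instead of on a cluster")
    return parser

args = build_parser().parse_args(
    ["--job-name", "iris-prediction", "--spec-path", "sample/sample_1.yaml", "--local"]
)
print(args.job_name, args.spec_path, args.local)
```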
## Development

### Requirements

- python >= 3.8.0
- pipenv (install using `pip install pipenv`)
- protoc (see installation instructions)
- gcloud (see installation instructions)
- docker (see installation instructions)
### Setup Dev Dependencies

```bash
make setup
```
### Run all tests

You need to set `GOOGLE_APPLICATION_CREDENTIALS` and point it to a service account file which has the following privileges:

- BigQuery Job User
- BigQuery Read Session User
- BigQuery Data Editor for the `project:dataset` dataset
- Storage Writer for the `bucket-name` bucket
- Storage Object Writer for the `bucket-name` bucket

Then run:

```bash
make test
```
### Run only unit tests

```bash
make unit-test
```
## File details

Details for the file `merlin_batch_predictor-0.45.3.tar.gz`.

### File metadata

- Download URL: merlin_batch_predictor-0.45.3.tar.gz
- Upload date:
- Size: 41.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.8.18

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 8207ce9a0950ad80a4605c32b5931953d888924cc304d251dd7a451ce5873403 |
| MD5 | 767d77456d40fcbccdd68355eb1a3ee4 |
| BLAKE2b-256 | 43785f5cdeb2630713e706ce821a4d3e0a5a221dc6decead7304b0f9f6de398f |
## File details

Details for the file `merlin_batch_predictor-0.45.3-py3-none-any.whl`.

### File metadata

- Download URL: merlin_batch_predictor-0.45.3-py3-none-any.whl
- Upload date:
- Size: 15.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.8.18

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 6e2a4971b4b1d27f8fbce3475fc3e819eded469b8ee28ee84afe7f86adbcb102 |
| MD5 | 43414865e18d2f6cf3e873c86f9cf925 |
| BLAKE2b-256 | 9d4a5e42e72cfde9c5417427c4127477fc2ed72ea2554ff9252b0cdc34b6fa79 |