
This Versatile Data Kit SDK plugin is a Generative Data Pack that expands each ingested dataset with the execution ID detected during the data job run.

Project description

An installed Generative Data Pack plugin automatically expands the data sent for ingestion.

This GDP plugin detects the execution ID of the running Data Job and decorates your data product with it, so that a data record can be correlated with the particular Data Job execution that ingested it.

Each ingested dataset gets automatically expanded with a Data Job execution ID micro-dimension. For example:

{
  "product_name": "name1",
  "product_description": "description1"
}

After installing vdk-gdp-execution-id, one additional field is automatically appended to every payload sent for ingestion:

{
  "product_name": "name1",
  "product_description": "description1",
  "gdp_execution_id": "product-ingestion-data-job-1628151700498"
}

The name of the newly added dimension is configurable.
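For example, assuming the option controlling the column name is GDP_EXECUTION_ID_MICRO_DIMENSION_NAME (check vdk config-help, as described in the Configuration section below, for the exact option name), it could be overridden via an environment variable:

# Hypothetical override of the micro-dimension column name;
# verify the exact option name with `vdk config-help`.
export VDK_GDP_EXECUTION_ID_MICRO_DIMENSION_NAME="ingestion_execution_id"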

Usage

Run

pip install vdk-gdp-execution-id

Create a Data Job and add the plugin to its requirements.txt file:

# Python jobs can specify extra library dependencies in requirements.txt file.
# See https://pip.readthedocs.io/en/stable/user_guide/#requirements-files
# The file is optional and can be deleted if no extra library dependencies are necessary.
vdk-gdp-execution-id

Reconfigure the ingestion pre-processing sequence to add the plugin name. For example:

export VDK_INGEST_PAYLOAD_PREPROCESS_SEQUENCE="vdk-gdp-execution-id"
# or, if other pre-processing plugins are already configured, append it to the sequence:
export VDK_INGEST_PAYLOAD_PREPROCESS_SEQUENCE="[...,]vdk-gdp-execution-id"

Note: It is recommended to add this plugin last in the sequence, because prior plugins may add new data records. For more info on configuration, see projects/vdk-core/src/vdk/internal/core/config.py.
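For illustration, if a hypothetical plugin named some-other-preprocessor were already configured, the combined value would look like this (the other plugin name is only a placeholder):

export VDK_INGEST_PAYLOAD_PREPROCESS_SEQUENCE="some-other-preprocessor,vdk-gdp-execution-id"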

Example ingestion Data Job 10_python_step.py:

from vdk.api.job_input import IJobInput


def run(job_input: IJobInput):
    # single object
    job_input.send_object_for_ingestion(
        payload={"product_name": "name1", "product_description": "description1"},
        destination_table="product")
    # tabular data
    job_input.send_tabular_data_for_ingestion(
        rows=[["name2", "description2"], ["name3", "description3"]],
        column_names=["product_name", "product_description"],
        destination_table="product")

If VDK_INGEST_METHOD_DEFAULT points to a relational database, you can then query the ingested dataset and filter by the new column:

from vdk.api.job_input import IJobInput


# A processing Data Job can then filter the ingested dataset by the `vdk_gdp_execution_id` column.
def run(job_input: IJobInput):
    execution_ids = job_input.execute_query("SELECT DISTINCT vdk_gdp_execution_id FROM product")
    print(execution_ids)
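As a further sketch, the same column can narrow the result set to records produced by one specific execution. The execution ID literal below is taken from the earlier example payload and is purely illustrative; in practice it would come from your own bookkeeping:

from vdk.api.job_input import IJobInput


def run(job_input: IJobInput):
    # Fetch only the records ingested by one particular Data Job execution.
    # The execution ID value is illustrative; substitute the one you need.
    rows = job_input.execute_query(
        "SELECT * FROM product "
        "WHERE vdk_gdp_execution_id = 'product-ingestion-data-job-1628151700498'"
    )
    print(rows)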

Configuration

Run vdk config-help and search for options prefixed with "GDP_EXECUTION_ID_" to see which configuration options are available.
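For example, on a Unix-like shell the relevant options can be filtered out of the full listing:

vdk config-help | grep -i "GDP_EXECUTION_ID_"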

Testing

Testing this plugin locally requires installing the dependencies listed in vdk-plugins/vdk-gdp-execution-id/requirements.txt.

Run

pip install -r requirements.txt
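Assuming the standard vdk-plugins repository layout (with the plugin's tests next to its sources), the test suite can then typically be run with pytest from the plugin directory:

cd vdk-plugins/vdk-gdp-execution-id
pytest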

Example

Find an example data job that uses the vdk-gdp-execution-id plugin in examples/gdp-execution-id-example/.

