
This Versatile Data Kit SDK plugin is a Generative Data Pack that expands each ingested dataset with the execution ID detected during the data job run.

Project description


An installed Generative Data Pack plugin automatically expands the data sent for ingestion.

This GDP plugin detects the execution ID of a running Data Job and decorates your data product with it, so that a data record can be correlated with the particular ingestion Data Job execution.

Each ingested dataset gets automatically expanded with a Data Job execution ID micro-dimension. For example:

{
  "product_name": "name1",
  "product_description": "description1"
}

After installing vdk-gdp-execution-id, one additional field is automatically appended to each payload sent for ingestion:

{
  "product_name": "name1",
  "product_description": "description1",
  "gdp_execution_id": "product-ingestion-data-job-1628151700498"
}

The newly-added dimension name is configurable.

Usage

Run

pip install vdk-gdp-execution-id

Create a Data Job and add to its requirements.txt file:

# Python jobs can specify extra library dependencies in requirements.txt file.
# See https://pip.readthedocs.io/en/stable/user_guide/#requirements-files
# The file is optional and can be deleted if no extra library dependencies are necessary.
vdk-gdp-execution-id

Reconfigure the ingestion pre-processing sequence to add the plugin name. For example:

export VDK_INGEST_PAYLOAD_PREPROCESS_SEQUENCE="vdk-gdp-execution-id"
# or
export VDK_INGEST_PAYLOAD_PREPROCESS_SEQUENCE="[...,]vdk-gdp-execution-id"

Note: It is recommended to add this plugin last (at the end of the sequence), because prior plugins may add new data records. For more info on configuration, see projects/vdk-core/src/vdk/internal/core/config.py.
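
If you prefer job-level configuration to environment variables, the same option can typically also be set in the Data Job's config.ini under the [vdk] section. A sketch, assuming the usual VDK convention of lowercasing the environment variable name and dropping the VDK_ prefix:

[vdk]
ingest_payload_preprocess_sequence=vdk-gdp-execution-id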

Example ingestion Data Job 10_python_step.py:

from vdk.api.job_input import IJobInput


def run(job_input: IJobInput):
    # object
    job_input.send_object_for_ingestion(
        payload={"product_name": "name1", "product_description": "description1"},
        destination_table="product")
    # tabular data
    job_input.send_tabular_data_for_ingestion(
        rows=[["name2", "description2"], ["name3", "description3"]],
        column_names=["product_name", "product_description"],
        destination_table="product")

If the configured VDK_INGEST_METHOD_DEFAULT points to a relational database, you can then query the ingested dataset and filter on the new column:

from vdk.api.job_input import IJobInput


# A processing Data Job then filters the ingested dataset by the vdk_gdp_execution_id column
def run(job_input: IJobInput):
    execution_ids = job_input.execute_query("SELECT DISTINCT vdk_gdp_execution_id FROM product")
    print(execution_ids)
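
As a follow-up sketch, you can filter for the records produced by one specific run. The vdk_gdp_execution_id column name is the default one shown above; the literal execution ID value below is illustrative only:

from vdk.api.job_input import IJobInput


# Keep only the rows ingested by one particular Data Job execution (illustrative ID value).
def run(job_input: IJobInput):
    rows = job_input.execute_query(
        "SELECT product_name, product_description FROM product "
        "WHERE vdk_gdp_execution_id = 'product-ingestion-data-job-1628151700498'"
    )
    print(rows)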

Configuration

Run vdk config-help and search for options prefixed with "GDP_EXECUTION_ID_" to see what configuration options are available.
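
For example, on a Unix-like shell you can narrow the output down to this plugin's options (a few lines of context keep each option's description visible):

vdk config-help | grep -A 3 GDP_EXECUTION_ID_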

Testing

Testing this plugin locally requires installing the dependencies listed in vdk-plugins/vdk-gdp-execution-id/requirements.txt.

Run

pip install -r requirements.txt
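
The plugin's tests can then typically be run with pytest from the plugin directory (assuming pytest is among the installed requirements):

cd vdk-plugins/vdk-gdp-execution-id
pytest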

Example

Find an example data job that uses the vdk-gdp-execution-id plugin in examples/gdp-execution-id-example/.

