DLT-META Framework
Documentation | Release Notes | Examples
Project Overview
DLT-META is a metadata-driven framework designed to work with Lakeflow Declarative Pipelines. This framework enables the automation of bronze and silver data pipelines by leveraging metadata recorded in an onboarding JSON file. This file, known as the Dataflowspec, serves as the data flow specification, detailing the source and target metadata required for the pipelines.
In practice, a single generic pipeline reads the Dataflowspec and uses it to orchestrate and run the necessary data processing workloads. This approach streamlines the development and management of data pipelines, allowing for a more efficient and scalable data processing workflow.
Lakeflow Declarative Pipelines and DLT-META are designed to complement each other. Lakeflow Declarative Pipelines provide a declarative, intent-driven foundation for building and managing data workflows, while DLT-META adds a powerful configuration-driven layer that automates and scales pipeline creation. By combining these approaches, teams can move beyond manual coding to achieve true enterprise-level agility, governance, and efficiency, templatizing and automating pipelines for any scale of modern data-driven business.
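For orientation, each entry in the onboarding file pairs source metadata with bronze and silver target metadata. The sketch below is illustrative only: the field names are simplified and the paths are placeholders, so refer to the Examples and documentation for the authoritative Dataflowspec schema.
[
  {
    "data_flow_id": "100",
    "data_flow_group": "A1",
    "source_format": "cloudFiles",
    "source_details": {
      "source_path_dev": "demo/resources/data/customers"
    },
    "bronze_table": "customers",
    "bronze_reader_options": {
      "cloudFiles.format": "json"
    },
    "bronze_data_quality_expectations_json_dev": "demo/conf/dqe/customers.json",
    "silver_table": "customers_clean",
    "silver_transformation_json_dev": "demo/conf/silver_transformations.json"
  }
]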
Components:
Metadata Interface
- Capture input/output metadata in onboarding file
- Capture Data Quality Rules
- Capture processing logic as SQL in the Silver transformation file (see the sketch after this list)
Generic Lakeflow Declarative Pipeline
- Apply appropriate readers based on input metadata
- Apply data quality rules with Lakeflow Declarative Pipeline expectations
- Apply CDC (apply changes) if specified in metadata
- Build Lakeflow Declarative Pipeline graph based on input/output metadata
- Launch Lakeflow Declarative Pipeline
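As noted in the list above, silver-layer processing logic is captured as SQL select expressions in a separate transformation file. A minimal sketch, assuming field names modeled on the project's demo files (verify the exact schema against the Examples and documentation):
[
  {
    "target_table": "customers_clean",
    "select_exp": [
      "customer_id",
      "upper(country) as country",
      "cast(signup_date as date) as signup_date"
    ],
    "where_clause": ["customer_id IS NOT NULL"]
  }
]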
High-Level Process Flow and Steps: see the process flow and steps diagrams in the documentation.
DLT-META Lakeflow Declarative Pipelines feature support
| Features | DLT-META Support |
|---|---|
| Input data sources | Autoloader, Delta, Eventhub, Kafka, snapshot |
| Medallion architecture layers | Bronze, Silver |
| Custom transformations | Bronze, Silver layer accepts custom functions |
| Data Quality Expectations Support | Bronze, Silver layer |
| Quarantine table support | Bronze layer |
| create_auto_cdc_flow API support | Bronze, Silver layer |
| create_auto_cdc_from_snapshot_flow API support | Bronze layer |
| append_flow API support | Bronze layer |
| Liquid cluster support | Bronze, Bronze Quarantine, Silver tables |
| DLT-META CLI | databricks labs dlt-meta onboard, databricks labs dlt-meta deploy |
| Bronze and Silver pipeline chaining | Deploy dlt-meta pipeline with layer=bronze_silver option using default publishing mode |
| create_sink API support | Supported formats: external Delta table, Kafka; Bronze, Silver layers |
| Databricks Asset Bundles | Supported |
| DLT-META UI | Uses Databricks Lakehouse DLT-META App |
Getting Started
Refer to the Getting Started guide.
The Databricks Labs DLT-META CLI lets you run onboard and deploy from an interactive Python terminal.
pre-requisites:
- Python 3.8.0+
- Databricks CLI v0.213 or later. See instructions.
- Install Databricks CLI on macOS (a sketch follows this list).
- Install Databricks CLI on Windows.
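One common route on macOS is Homebrew; this is a sketch based on the Databricks CLI documentation, so check the linked instructions for the current commands and for Windows options:
brew tap databricks/tap
brew install databricks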
Once you install Databricks CLI, authenticate your current machine to a Databricks Workspace:
databricks auth login --host WORKSPACE_HOST
To enable debug logs, simply add the `--debug` flag to any command.
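For example, to see debug output while authenticating:
databricks auth login --host WORKSPACE_HOST --debug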
Installing dlt-meta:
- Install dlt-meta via Databricks CLI:
databricks labs install dlt-meta
Onboard using dlt-meta CLI:
If you want to run the existing demo files, follow these steps before running the onboard command:
- Clone dlt-meta:
git clone https://github.com/databrickslabs/dlt-meta.git
- Navigate to the project directory:
cd dlt-meta
- Create a Python virtual environment:
python -m venv .venv
- Activate the virtual environment:
source .venv/bin/activate
- Install the required packages:
# Core requirements
pip install "PyYAML>=6.0" setuptools databricks-sdk
# Development requirements
pip install delta-spark==3.0.0 pyspark==3.5.5 "pytest>=7.0.0" "coverage>=7.0.0"
# Integration test requirements
pip install "typer[all]==0.6.1"
- Set environment variables:
dlt_meta_home=$(pwd)
export PYTHONPATH=$dlt_meta_home
- Run onboarding command:
databricks labs dlt-meta onboard
The command will prompt you to provide onboarding details. If you have cloned the dlt-meta repository, you can accept the default values which will use the configuration from the demo folder.
The above onboard CLI command will:
- Push code and data to your Databricks workspace
- Create an onboarding job
- Display a success message:
Job created successfully. job_id={job_id}, url=https://{databricks workspace url}/jobs/{job_id}
- The job URL will automatically open in your default browser.
Deploy using dlt-meta CLI:
- Once the onboarding job has finished, deploy the Lakeflow Declarative Pipeline using the command below:
databricks labs dlt-meta deploy
The command will prompt you to provide pipeline configuration details.
The above deploy CLI command will:
- Deploy a Lakeflow Declarative Pipeline with the dlt-meta configuration (layer, group, dataflowSpec table details, etc.) to your Databricks workspace
- Display message:
dlt-meta pipeline={pipeline_id} created and launched with update_id={pipeline_update_id}, url=https://{databricks workspace url}/#joblist/pipelines/{pipeline_id}
- The pipeline URL will automatically open in your default browser.
More questions
Refer to the FAQ and DLT-META documentation
Project Support
Please note that all projects released under Databricks Labs
are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements
(SLAs). They are provided AS-IS and we do not make any guarantees of any kind. Please do not submit a support ticket
relating to any issues arising from the use of these projects.
Any issues discovered through the use of this project should be filed as issues on the GitHub repo.
They will be reviewed as time permits, but there are no formal SLAs for support.