
A command line interface for Databricks Delta Live Tables


dltctl

A CLI tool for fast local iteration on Delta Live Tables pipelines and rapid deployment

Installation

pip install dltctl

First-time Configuration

In order to function, dltctl needs to know which Databricks workspace to target and which token/auth info to use. If you already use the Databricks CLI, dltctl will use whatever you've configured there. Otherwise, you can configure it with the same commands you would use with the Databricks CLI:

dltctl configure --jobs-api-version=2.1 --token
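With this route, the standard Databricks CLI profile is written to ~/.databrickscfg, and dltctl reads that same profile. As a rough sketch (the host and token values below are placeholders, and the jobs-api-version line is an assumption about how the CLI records that flag), the file looks something like this:

[DEFAULT]
host = https://my-workspace.cloud.databricks.com
token = dapi0123456789abcdef
jobs-api-version = 2.1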

Usage

dltctl requires a configuration file in order to function. To generate one, run:

dltctl init mypipeline

That will generate a dltctl.yaml file in your current directory that looks like this:

pipeline_settings:
  channel: CURRENT
  clusters:
  - autoscale:
      max_workers: 5
      min_workers: 1
      mode: ENHANCED
    driver_node_type_id: c5.4xlarge
    label: default
    node_type_id: c5.4xlarge
  continuous: false
  development: true
  edition: advanced
  name: mypipeline
  photon: false

This is a minimally viable DLT project YAML file. For more advanced settings, edit the file directly or pass flags:

dltctl init mypipeline -f -c '{"label":"default", "aws_attributes": {"instance_profile_arn":"myprofilearn"}}'

Now you just need to bring your own DLT pipeline code.
Or, if you just want to get started quickly, try this:

echo "CREATE LIVE TABLE $(whoami | sed 's/\.//g')_dltctl_quickstart AS SELECT 1" > test.sql

Now you have the basics for a DLT pipeline deployment. You can deploy with dltctl like this:

dltctl deploy mypipeline

To make getting started easy, dltctl applies a set of sane defaults:

  • It will search your current working directory recursively for .py and .sql files and add them as libraries to your DLT pipeline. To override this behavior, use the --pipeline-files-dir argument to specify a different directory (see the example after this list), or set pipeline_files_local_dir in dltctl.yaml.
  • It will use the same default pipeline configurations as the DLT UI.
  • It will upload your pipeline files to the Databricks workspace and convert them to notebooks using the Import API. By default, it determines your username automatically and stores the files in your user directory. You can override this behavior by specifying a workspace target with the --workspace-path flag or with config file settings.
  • It will then create and start the pipeline based on settings in dltctl.yaml
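For example, to stage files from a specific local directory into a specific workspace folder, you could combine the two flags described above (both paths here are illustrative):

dltctl deploy mypipeline --pipeline-files-dir ./pipelines --workspace-path /Users/foo@foo.com/dltctl_artifacts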

Say you make changes to your code and want to restart your pipeline with the new version, or with different pipeline settings. Simply update your pipeline settings in dltctl.yaml, save your code changes, and run:

dltctl deploy

Alternatively, you can create a pipeline without starting it:

dltctl create

You can stage new files and settings to the workspace as often as you make changes:

dltctl stage

You can start and stop the pipeline manually:

dltctl start
dltctl stop

Or, as before, you can use dltctl deploy to combine all of these steps.

dltctl deploy

You can trigger a full refresh using the -r flag:

dltctl start -r
dltctl deploy -r

If you don't want to watch the events, you can start or deploy as a job instead. To do that, you need to add at least a job_config section with a job name to your config file. A minimally viable dltctl.yaml with a job config looks like this:

job_config:
  name: mydltctljob
pipeline_settings:
  channel: CURRENT
  clusters:
  - autoscale:
      max_workers: 5
      min_workers: 1
      mode: ENHANCED
    driver_node_type_id: c5.4xlarge
    label: default
    node_type_id: c5.4xlarge
  continuous: false
  development: true
  edition: advanced
  photon: false
  name: mypipeline

Then you can deploy as a job:

dltctl deploy --as-job

Note that dltctl deploy and dltctl stage won't push files or restart the pipeline if nothing has changed. This means that adding a job config without changing anything else won't immediately create a job. You can force an update with the --force flag:

dltctl deploy --as-job --force

Alternatively, since there are no other changes, you can just start as a job:

dltctl start --as-job

Here is an example of a more advanced dltctl.yaml:

pipeline_files_local_dir: .
pipeline_files_workspace_dir: /Users/foo@foo.com/dltctl_artifacts/nested_dir
job_config:
  name: foobk1234
  email_notifications:
    #on_start: [foo@foo.com]
    on_failure: [bar@bar.com]
  schedule:
    quartz_cron_expression: "0 0 12 * * ?"
    timezone_id: "America/Los_Angeles"
    pause_status: "UNPAUSED"
  tags:
    foo: bar
    bar: baz
pipeline_settings:
  channel: CURRENT
  clusters:
  - autoscale:
      max_workers: 4
      min_workers: 1
      mode: "ENHANCED"
    driver_node_type_id: c5.4xlarge
    label: default
    node_type_id: c5.4xlarge
    init_scripts:
    - dbfs:
        destination: dbfs:/bkvarda/init_scripts/datadog-install-driver-workers.sh
    spark_env_vars:
      DD_API_KEY: "{{secrets/bkvarda_dlt/dd_api_key}}"
      DD_ENV: dlt_test_pipeline
      DD_SITE: https://app.datadoghq.com
  continuous: true
  development: true
  edition: advanced
  name: foobk1234
  photon: false
  configuration:
    destination_table: "b"
    starting_offsets: "earliest"
