A plugin to run Kedro pipelines on Databricks.

kedro-databricks


A Kedro plugin for developing Kedro pipelines on Databricks. The plugin strives to provide the best possible developer experience for Kedro on Databricks and offers three main features:

  1. Initialization: Transform your local Kedro project into a Databricks Asset Bundle project with a single command.
  2. Generation: Generate Asset Bundle resource definitions with a single command.
  3. Deployment: Deploy your Kedro project to Databricks with a single command.

Installation

To install the plugin, simply run:

pip install kedro-databricks

Now you can use the plugin to develop Kedro pipelines for Databricks.

How to get started

Prerequisites:

Before you begin, ensure that the Databricks CLI is installed and configured. For more information on installation and configuration, please refer to the Databricks CLI documentation.

Creating a new project

To create a project based on the databricks-iris starter, ensure you have installed Kedro into a virtual environment. Then use the following command:

pip install kedro

Next, initialize a project from the databricks-iris starter with the following command:

kedro new --starter="databricks-iris"

After the project is created, navigate to the newly created project directory:

cd <my-project-name>  # change directory

Install the required dependencies:

pip install -r requirements.txt
pip install kedro-databricks

Now you can initialize the Databricks Asset Bundle:

kedro databricks init

Next, generate the Asset Bundle resources definition:

kedro databricks bundle

Finally, deploy the Kedro project to Databricks:

kedro databricks deploy

That's it! Your pipelines have now been deployed to Databricks as the workflow [dev <user>] <project_name>. Try running the workflow to see the results.

Commands

kedro databricks init

To initialize a Kedro project for Databricks, run:

kedro databricks init

This command will create the following files:

├── databricks.yml # Databricks Asset Bundle configuration
├── conf/
│   └── base/
│       └── databricks.yml # Workflow overrides

The databricks.yml file is the main configuration file for the Databricks Asset Bundle. The conf/base/databricks.yml file is used to override the Kedro workflow configuration for Databricks.
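
For reference, the generated root databricks.yml follows the standard Databricks Asset Bundle configuration schema and might look roughly like the sketch below. The bundle name and workspace host are placeholders; the exact contents generated by `kedro databricks init` for your project may differ:

# databricks.yml (illustrative sketch; actual file is generated for your project)
bundle:
  name: my_project  # placeholder; typically your Kedro project name

include:
  - resources/*.yml  # pick up the generated resource definitions

targets:
  dev:
    mode: development
    default: true
    workspace:
      host: https://<your-workspace>.cloud.databricks.com  # placeholder host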

Override the Kedro workflow configuration for Databricks in the conf/base/databricks.yml file:

# conf/base/databricks.yml

default: # will be applied to all workflows
    job_clusters:
        - job_cluster_key: default
          new_cluster:
            spark_version: 7.3.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 2
            spark_env_vars:
                KEDRO_LOGGING_CONFIG: /dbfs/FileStore/<package-name>/conf/logging.yml
    tasks: # will be applied to all tasks in each workflow
        - task_key: default
          job_cluster_key: default

<workflow-name>: # will only be applied to the workflow with the specified name
    job_clusters:
        - job_cluster_key: high-concurrency
          new_cluster:
            spark_version: 7.3.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 2
            spark_env_vars:
                KEDRO_LOGGING_CONFIG: /dbfs/FileStore/<package-name>/conf/logging.yml
    tasks:
        - task_key: default # will be applied to all tasks in the specified workflow
          job_cluster_key: high-concurrency
        - task_key: <my-task> # will only be applied to the specified task in the specified workflow
          job_cluster_key: high-concurrency

The plugin loads all configuration files whose paths match conf/databricks* or conf/databricks/*.
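
For example, either of the following layouts would be picked up (the file names under the databricks/ directory are purely illustrative):

├── conf/
│   └── base/
│       ├── databricks.yml   # matches databricks*
│       └── databricks/      # matches databricks/*
│           ├── clusters.yml # illustrative file name
│           └── tasks.yml    # illustrative file name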

kedro databricks bundle

To generate Asset Bundle resource definitions, run:

kedro databricks bundle

This command will generate the following files:

├── resources/
│   ├── <project>.yml # resource definitions corresponding to `kedro run`
│   └── <project-pipeline>.yml # resource definitions for each pipeline, corresponding to `kedro run --pipeline <pipeline-name>`

The generated resource definition files define the Databricks jobs required to run the Kedro pipelines on Databricks.
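
As an illustration, a generated resources/<project>.yml follows the Databricks Asset Bundle jobs schema and looks roughly like the sketch below. The job, task, cluster, and package names are placeholders, and the exact fields emitted by the plugin may differ:

# resources/<project>.yml (illustrative sketch)
resources:
  jobs:
    my_project:                          # placeholder job name
      name: my_project
      job_clusters:
        - job_cluster_key: default       # merged in from conf/base/databricks.yml
          new_cluster:
            spark_version: 7.3.x-scala2.12
            node_type_id: Standard_DS3_v2
            num_workers: 2
      tasks:
        - task_key: node_name            # illustrative; tasks mirror your pipeline
          job_cluster_key: default
          python_wheel_task:
            package_name: my_project     # placeholder package name
            entry_point: databricks_run  # illustrative entry point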

kedro databricks deploy

To deploy a Kedro project to Databricks, run:

kedro databricks deploy

This command will deploy the Kedro project to Databricks. The deployment process includes the following steps:

  1. Package the Kedro project for a specific environment
  2. Generate Asset Bundle resources definition for that environment
  3. Upload environment-specific /conf files to Databricks
  4. Upload /data/raw/* and ensure other /data directories are created
  5. Deploy Asset Bundle to Databricks
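
Based on the KEDRO_LOGGING_CONFIG path shown in the configuration example above, the uploaded file layout looks roughly like the following sketch (paths are illustrative and may differ by version and configuration):

/dbfs/FileStore/<package-name>/
├── conf/             # environment-specific configuration (step 3)
│   └── logging.yml
└── data/
    ├── raw/          # uploaded from the local /data/raw/* (step 4)
    └── intermediate/ # other /data directories are created empty (step 4)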

Download files

Download the file for your platform.

Source Distribution

kedro_databricks-0.5.0.tar.gz (24.2 kB)

Uploaded Source

Built Distribution

kedro_databricks-0.5.0-py3-none-any.whl (16.2 kB)

Uploaded Python 3

File details

Details for the file kedro_databricks-0.5.0.tar.gz.

File metadata

  • Download URL: kedro_databricks-0.5.0.tar.gz
  • Size: 24.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for kedro_databricks-0.5.0.tar.gz:

  • SHA256: 5cffcdae5aee906bcf45f8b8b4b0e47472115a4d3ce0624b5f69d9a64022f409
  • MD5: cedf35809c2d9f0e59d66f4fee898ad1
  • BLAKE2b-256: 42a234bf9cc44bb42c0abb21b56c24aa43d6530b11221583507626f4e78a72b5


File details

Details for the file kedro_databricks-0.5.0-py3-none-any.whl.

File metadata

File hashes

Hashes for kedro_databricks-0.5.0-py3-none-any.whl:

  • SHA256: 3c66ec0c50b237c3ac70a2cdaf11377106b20da0f557c9a1bf56b717d93798a4
  • MD5: 14a6367083883947b05fc716fdac7254
  • BLAKE2b-256: 3f3843bf3f336cf4cb8da876ad06374eb0edd9941c4ec9f5300c41a9136015ad

