Deploy scalable workflows to Databricks using Python

Project description

Brickflow


Brickflow is designed to enable the development of Databricks workflows in Python, streamlining the process through a command-line interface (CLI) tool.


Contributors

Thanks to all the contributors who have helped ideate, develop and bring Brickflow to its current state.

Contributing

We're delighted that you're interested in contributing to our project! To get started, please carefully read and follow the guidelines provided in our contributing document.

Documentation

Brickflow documentation can be found here.

Getting Started

Prerequisites

  1. Install brickflows
pip install brickflows
  2. Install the Databricks CLI
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sudo sh
  3. Configure the Databricks CLI with a workspace token. This populates your ~/.databrickscfg file (a sample of the result is shown below).
databricks configure --token
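
Once you enter your workspace URL and personal access token at the prompts, ~/.databrickscfg should end up looking roughly like the snippet below (the host and token values here are placeholders, not real values):

[DEFAULT]
host  = https://<your-workspace>.cloud.databricks.com
token = <your-personal-access-token>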

Hello World workflow

  1. Create your first workflow using brickflow
mkdir hello-world-brickflow
cd hello-world-brickflow
brickflow projects add
  2. Provide the following inputs
Project name: hello-world-brickflow
Path from repo root to project root (optional) [.]: .
Path from project root to workflows dir: workflows
Git https url: https://github.com/Nike-Inc/brickflow.git
Brickflow version [auto]:<hit enter>
Spark expectations version [0.5.0]: 0.8.0
Skip entrypoint [y/N]: N

Note: You can provide your own GitHub repo URL.

  3. Create a new file hello_world_wf.py in the workflows directory
touch workflows/hello_world_wf.py
  4. Copy the following code into the hello_world_wf.py file
from brickflow import (
    ctx,
    Cluster,
    Workflow,
    NotebookTask,
)
from airflow.operators.bash import BashOperator


cluster = Cluster(
    name="job_cluster",
    node_type_id="m6gd.xlarge",
    spark_version="13.3.x-scala2.12",
    min_workers=1,
    max_workers=2,
)

wf = Workflow(
    "hello_world_workflow",
    default_cluster=cluster,
    tags={
        "product_id": "brickflow_demo",
    },
    common_task_parameters={
        "catalog": "<uc-catalog-name>",
        "database": "<uc-schema-name>",
    },
)

# this task does nothing but illustrates the use of the context object
@wf.task
def start():
    print(f"Environment: {ctx.env}")

# this task runs a databricks notebook
@wf.notebook_task
def example_notebook():
    return NotebookTask(
        notebook_path="notebooks/example_notebook.py",
        base_parameters={
            "some_parameter": "some_value",  # in the notebook access these via dbutils.widgets.get("some_parameter")
        },
    )


# this task runs a bash command
@wf.task(depends_on=[start, example_notebook])
def list_lending_club_data_files():
    return BashOperator(
        task_id=list_lending_club_data_files.__name__,
        bash_command="ls -lrt /dbfs/databricks-datasets/samples/lending_club/parquet/",
    )

# this task runs the pyspark code
@wf.task(depends_on=list_lending_club_data_files)
def lending_data_ingest():
    ctx.spark.sql(
        f"""
        CREATE TABLE IF NOT EXISTS
        {ctx.dbutils_widget_get_or_else(key="catalog", debug="development")}.\
        {ctx.dbutils_widget_get_or_else(key="database", debug="dummy_database")}.\
        {ctx.dbutils_widget_get_or_else(key="brickflow_env", debug="local")}_lending_data_ingest
        USING DELTA -- Delta is the default format; stated explicitly here for clarity
        SELECT * FROM parquet.`dbfs:/databricks-datasets/samples/lending_club/parquet/`
    """
    )

Note: Replace the catalog and database values in common_task_parameters with your own Unity Catalog catalog and schema names.

  5. Create a new file example_notebook.py in the notebooks directory
mkdir notebooks
touch notebooks/example_notebook.py
  6. Copy the following code into the example_notebook.py file
# Databricks notebook source

print("hello world")

  7. Deploy the workflow to Databricks

brickflow projects deploy --project hello-world-brickflow -e local

Run the demo workflow

  1. Log in to the Databricks workspace
  2. Go to Workflows and select the workflow
  3. Click on the Run button

Examples

Refer to the examples for more workflow patterns and use cases.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

brickflows-1.2.1.tar.gz (281.4 kB)

Uploaded Source

Built Distribution

brickflows-1.2.1-py3-none-any.whl (299.3 kB)

Uploaded Python 3

File details

Details for the file brickflows-1.2.1.tar.gz.

File metadata

  • Download URL: brickflows-1.2.1.tar.gz
  • Upload date:
  • Size: 281.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.12

File hashes

Hashes for brickflows-1.2.1.tar.gz

  • SHA256: 140230600ff57cea523fde2225cb38e5c3ccf9cba813467311b1c31c652cfdc4
  • MD5: 3d806bed86e52c83ba0d47850529c0e3
  • BLAKE2b-256: 50ed1c97d99ae8abb97120ffd2eed27d5df5fd974c8d34e1565a986278d50eea

See more details on using hashes here.
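
For example, to check a downloaded source archive against the SHA256 digest above, one option (assuming a shell where GNU sha256sum is available, e.g. on Linux) is:

echo "140230600ff57cea523fde2225cb38e5c3ccf9cba813467311b1c31c652cfdc4  brickflows-1.2.1.tar.gz" | sha256sum -c -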

File details

Details for the file brickflows-1.2.1-py3-none-any.whl.

File metadata

  • Download URL: brickflows-1.2.1-py3-none-any.whl
  • Upload date:
  • Size: 299.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.12

File hashes

Hashes for brickflows-1.2.1-py3-none-any.whl

  • SHA256: 2dab2309f4c42bed89dc2951c3099ac97c634f21c5008127462783c5f5d4facc
  • MD5: 9e73c6ec6cc7d63ab602bdbf5060ab99
  • BLAKE2b-256: 7f5b3a5e3f1872765539a7ea53681d7ee3321c5d375f4f7f745ad066ee780f89

See more details on using hashes here.
