IBM watsonx.data spark plugin for dbt

Project description

dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.

dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.

dbt-watsonx-spark

The dbt-watsonx-spark package contains all of the code that enables dbt to work with IBM Spark on watsonx.data. See the official documentation for details on using watsonx.data with dbt-watsonx-spark.

Getting started

Installation

To install the dbt-watsonx-spark plugin, use pip:

$ pip install dbt-watsonx-spark

Configuration

Ensure you have started a query server in watsonx.data. Create an entry in your ~/.dbt/profiles.yml file using the following options:

  • You can view the connection details by clicking the three-dot menu for the query server.
  • You can construct and configure the profile using the template below.
  • Alternatively, copy the connection details from the Configuration tab -> Connection Information -> Data Build Tool (DBT).
dbt_wxd:

  target: dev
  outputs:
    dev:
      type: watsonx_spark
      method: "http"
      
      # Number of threads for dbt operations; see https://docs.getdbt.com/docs/running-a-dbt-project/using-threads
      threads: 1

      # Name of an existing schema in Data Manager in watsonx.data, or of a new schema to create
      schema: '<wxd_schema>'
      
      # Host of your watsonx.data console (ex: https://us-south.lakehouse.cloud.ibm.com)
      host: https://<your-host>.com

      # URI of your query server running on watsonx.data
      uri: "/lakehouse/api/v2/spark_engines/<spark_engine_id>/sql_servers/<server_id>/connect/cliservice"
      
      # Catalog linked to your Spark engine within the query server
      catalog: "<wxd_catalog>"
      
      # Optional: set to false to disable SSL verification
      use_ssl: false

      # Optional: Control automatic schema creation (default: true)
      # Set to false if schemas are managed externally (e.g., by Ops team)
      create_schemas: true

      # Optional: Control automatic LOCATION clause in CREATE TABLE (default: true)
      # Set to false if table locations are managed externally or to avoid permission issues
      auto_location: false

      auth:
        # For SaaS, set this to the CRN of your watsonx.data service
        # For Software, set this to the instance ID of watsonx.data
        instance: "<CRN/InstanceId>"
        
        # For SaaS, use your email ID
        # For Software, use your username
        user: "<user@example.com/username>"

        # Your API key
        apikey: "<apikey>"
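With the profile in place, a quick structural check can catch missing keys before you run dbt. The sketch below validates a parsed profile dict in Python; the required-key lists are inferred from the template above and are an assumption, not the adapter's authoritative schema.

```python
# Sketch: sanity-check a profiles.yml entry for the watsonx_spark adapter.
# The required-key sets below are drawn from the template above; they are
# an assumption, not the adapter's official validation rules.

REQUIRED_OUTPUT_KEYS = {"type", "method", "schema", "host", "uri", "catalog", "auth"}
REQUIRED_AUTH_KEYS = {"instance", "user", "apikey"}

def check_profile(profile: dict) -> list:
    """Return a list of human-readable problems (empty when the entry looks complete)."""
    problems = []
    target = profile.get("target")
    outputs = profile.get("outputs", {})
    if target not in outputs:
        problems.append("target %r has no matching entry under outputs" % target)
        return problems
    output = outputs[target]
    for key in sorted(REQUIRED_OUTPUT_KEYS - output.keys()):
        problems.append("missing output key: " + key)
    for key in sorted(REQUIRED_AUTH_KEYS - output.get("auth", {}).keys()):
        problems.append("missing auth key: " + key)
    if output.get("type") != "watsonx_spark":
        problems.append("type must be 'watsonx_spark'")
    return problems

# Example mirroring the template above (placeholder values).
profile = {
    "target": "dev",
    "outputs": {
        "dev": {
            "type": "watsonx_spark",
            "method": "http",
            "schema": "<wxd_schema>",
            "host": "https://<your-host>.com",
            "uri": "/lakehouse/api/v2/spark_engines/<id>/sql_servers/<id>/connect/cliservice",
            "catalog": "<wxd_catalog>",
            "auth": {"instance": "<CRN>", "user": "<user>", "apikey": "<apikey>"},
        }
    },
}
print(check_profile(profile))  # [] when all required keys are present
```

A check like this complements `dbt debug`, which verifies the actual connection rather than just the file's shape.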
        

Schema Creation Control

By default, dbt-watsonx-spark automatically creates schemas if they don't exist. However, in some environments where schema creation is managed by an operations team or through automation, you may want to disable this behavior.

You can control schema creation at three levels:

  1. Profile level (applies to all models in the profile):
dbt_wxd:
  target: dev
  outputs:
    dev:
      type: watsonx_spark
      # ... other settings ...
      create_schemas: false  # Disable automatic schema creation
  2. Project level (in dbt_project.yml):
models:
  my_project:
    +create_schemas: false  # Disable for all models in project
  3. Model level (in model config or dbt_project.yml):
# In model file
{{ config(create_schemas=false) }}

# Or in dbt_project.yml
models:
  my_project:
    my_folder:
      +create_schemas: false  # Disable for specific folder

When create_schemas is set to false, dbt will skip schema creation and assume the schema already exists. This is useful when:

  • Schemas are created by an external automation or Ops team
  • You want to enforce strict schema management policies
  • You need to prevent accidental schema creation in production environments
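The three levels above follow dbt's usual config precedence: a model-level setting overrides the project level, which overrides the profile level. The sketch below illustrates that resolution order; it is not the adapter's actual code, only a model of the behavior.

```python
# Illustrative sketch of three-level precedence for create_schemas
# (model > project > profile). Not the adapter's actual resolution code.

def resolve_create_schemas(profile_cfg, project_cfg, model_cfg, default=True):
    """Return the effective create_schemas value; the most specific level wins."""
    for cfg in (model_cfg, project_cfg, profile_cfg):  # most to least specific
        if cfg is not None and "create_schemas" in cfg:
            return cfg["create_schemas"]
    return default  # matches the documented default of true

# Profile disables schema creation, but one model re-enables it:
print(resolve_create_schemas(
    profile_cfg={"create_schemas": False},
    project_cfg=None,
    model_cfg={"create_schemas": True},
))  # -> True
```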

Table Location Control

By default, dbt-watsonx-spark automatically adds a LOCATION clause when creating tables based on the location_root configuration. However, in some environments where table locations are managed externally or to avoid S3 permission issues, you may want to disable this behavior.

You can control automatic location setting at three levels:

  1. Profile level (applies to all models in the profile):
dbt_wxd:
  target: dev
  outputs:
    dev:
      type: watsonx_spark
      # ... other settings ...
      auto_location: false  # Disable automatic LOCATION clause
  2. Project level (in dbt_project.yml):
models:
  my_project:
    +auto_location: false  # Disable for all models in project
  3. Model level (in model config or dbt_project.yml):
# In model file
{{ config(auto_location=false) }}

# Or in dbt_project.yml
models:
  my_project:
    my_folder:
      +auto_location: false  # Disable for specific folder

When auto_location is set to false, dbt will not add a LOCATION clause to CREATE TABLE statements, allowing the database to use its default location or a location specified by external schema management. This is useful when:

  • Table locations are managed by an external automation or Ops team
  • You want to avoid S3 permission issues related to specific paths
  • The schema already has a default location configured
  • You need to comply with strict data governance policies
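The effect of auto_location on the generated DDL can be sketched as follows. The exact statement dbt-watsonx-spark emits may differ (the `USING iceberg` clause and the `location_root` naming here are assumptions for illustration); the point is only the presence or absence of the LOCATION clause.

```python
# Sketch: how auto_location changes the rendered CREATE TABLE statement.
# The USING clause and path layout are illustrative assumptions; only the
# conditional LOCATION clause reflects the documented behavior.

def render_create_table(catalog, schema, table, location_root=None, auto_location=True):
    ddl = "CREATE TABLE %s.%s.%s USING iceberg" % (catalog, schema, table)
    if auto_location and location_root:
        # LOCATION is appended only when auto_location is enabled.
        ddl += " LOCATION '%s/%s'" % (location_root, table)
    return ddl

print(render_create_table("wxd_catalog", "analytics", "orders",
                          location_root="s3a://bucket/warehouse"))
print(render_create_table("wxd_catalog", "analytics", "orders",
                          location_root="s3a://bucket/warehouse",
                          auto_location=False))
```

With auto_location disabled, the table falls back to the schema's default location or whatever the external schema management specifies.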

Download files

Download the file for your platform.

Source Distribution

dbt_watsonx_spark-0.0.1b1.tar.gz (47.0 kB)

Uploaded Source

Built Distribution

dbt_watsonx_spark-0.0.1b1-py3-none-any.whl (57.7 kB)

Uploaded Python 3

File details

Details for the file dbt_watsonx_spark-0.0.1b1.tar.gz.

File metadata

  • Download URL: dbt_watsonx_spark-0.0.1b1.tar.gz
  • Size: 47.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.19

File hashes

Hashes for dbt_watsonx_spark-0.0.1b1.tar.gz
Algorithm Hash digest
SHA256 64160326f25c98f445af5a11f25d17a665daf27be4c941671eddb86def0edea6
MD5 5179b860997efbc566c3b939b79fdff7
BLAKE2b-256 d7b2dcf80c55eda0270fa69941e1bd19f8a4dbb3195fbd0fb7107580a4263a2f


File details

Details for the file dbt_watsonx_spark-0.0.1b1-py3-none-any.whl.

File hashes

Hashes for dbt_watsonx_spark-0.0.1b1-py3-none-any.whl
Algorithm Hash digest
SHA256 1b8bfe5a0a9a57b52213045b4176265ea11ecd5fa9ebb7d8d1ee4f9711c7f90e
MD5 270b24e8d716d03c00406c904af888c3
BLAKE2b-256 a53f67095194b80d5539339cf482a2ea48d595fb4cb1b8410887a7dc37fc47c1

