
A temporary release for LinkedIn's changes to dbt-spark.

Project description



dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.

dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.

dbt-spark

The dbt-spark package contains all of the code enabling dbt to work with Apache Spark and Databricks. For more information, consult the docs.

Getting started

Running locally

A docker-compose environment starts a Spark Thrift server and a Postgres database that serves as the Hive metastore backend. Note: dbt-spark now supports Spark 3.1.1 (it previously targeted Spark 2.x).

The following command starts the two Docker containers:

docker-compose up -d

It will take a bit of time for the instance to start; you can check the logs of the two containers to monitor progress. If the instance doesn't start correctly, run the complete reset commands listed below and then try starting again.
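To follow startup progress, the standard Compose commands work here (no service names need to be specified, so this applies regardless of how the services are named in docker-compose.yml):

```shell
# Tail the logs of all services; press Ctrl+C to stop following.
docker-compose logs -f

# List the containers and their current state.
docker-compose ps
```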

Create a profile like this one:

spark-testing:
  target: local
  outputs:
    local:
      type: spark
      method: thrift
      host: 127.0.0.1
      port: 10000
      user: dbt
      schema: analytics
      connect_retries: 5
      connect_timeout: 60
      retry_all: true
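With the containers running, you can check that dbt can reach the Thrift server using dbt's built-in connection test (this assumes the profile above is saved in ~/.dbt/profiles.yml and that this package is installed in your environment):

```shell
# Validate the profile and test the connection to the local Spark Thrift server.
dbt debug --target local
```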

Connecting to the local Spark instance:

  • The Spark UI should be available at http://localhost:4040/sqlserver/
  • The endpoint for SQL-based testing is http://localhost:10000; it can be reached with the Hive or Spark JDBC drivers using the connection string jdbc:hive2://localhost:10000 and the default credentials dbt:dbt
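As a quick smoke test of the SQL endpoint, assuming the Hive beeline CLI is on your PATH (it is not shipped with this package), you can connect with the default credentials and run a query:

```shell
# Connect as dbt:dbt and list the databases visible on the Thrift server.
beeline -u jdbc:hive2://localhost:10000 -n dbt -p dbt -e "SHOW DATABASES;"
```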

Note that the Hive metastore data is persisted under ./.hive-metastore/, and the Spark-produced data under ./.spark-warehouse/. To completely reset your environment, run the following commands:

docker-compose down
rm -rf ./.hive-metastore/
rm -rf ./.spark-warehouse/

Reporting bugs and contributing code

  • Want to report a bug or request a feature? Open an issue.

Download files


Source Distribution

in-dbt-spark-1.2.500.tar.gz (27.9 kB)

Built Distribution

in_dbt_spark-1.2.500-py3-none-any.whl (33.1 kB)

File details

Details for the file in-dbt-spark-1.2.500.tar.gz.

File metadata

  • Download URL: in-dbt-spark-1.2.500.tar.gz
  • Upload date:
  • Size: 27.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.13

File hashes

Hashes for in-dbt-spark-1.2.500.tar.gz:

  • SHA256: 9935a593af203552303531b03e1bf244ed4251f2dc0f7ebf71b44da4c10eeb87
  • MD5: f39f816bc47278583255afa574b1b305
  • BLAKE2b-256: 12b24e21fd405bd9cd67405031a6c6a5b2f67d74b53d7d503a63df8cfa8f7509

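To check a downloaded file against the digests above, you can compute its hash locally. A minimal sketch using only Python's standard library (the file path and the commented-out comparison at the bottom assume you downloaded the sdist to the current directory):

```python
import hashlib


def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Return the hex digest of a file, reading it in chunks to bound memory use."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Compare against the published SHA256 for in-dbt-spark-1.2.500.tar.gz:
# expected = "9935a593af203552303531b03e1bf244ed4251f2dc0f7ebf71b44da4c10eeb87"
# assert file_digest("in-dbt-spark-1.2.500.tar.gz") == expected
```

The same helper works for the MD5 and BLAKE2b digests by passing "md5" or "blake2b" as the algorithm name.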

File details

Details for the file in_dbt_spark-1.2.500-py3-none-any.whl.


File hashes

Hashes for in_dbt_spark-1.2.500-py3-none-any.whl:

  • SHA256: f99ce603dd735079ab56a9934e4bc563547740d9a1e11ca7724d32ce0dd6e27b
  • MD5: 4f41e05919055378941c2d2c0e27bb0c
  • BLAKE2b-256: 88e9c170f8cc8b0ba273f2da8d9e68d68330e7c1fbc133fe6e63cd9cc6bc4e18

