
Release for LinkedIn's changes to dbt-spark.

Project description


dbt

dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.

dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.

dbt-spark

dbt-spark enables dbt to work with Apache Spark. For more information on using dbt with Spark, consult the docs.

Getting started

Review the repository README.md, as most of that information also pertains to dbt-spark.

Running locally

A docker-compose environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend. Note: dbt-spark now supports Spark 3.3.2.

The following command starts two docker containers:

docker-compose up -d

It will take a bit of time for the instance to start; you can check the logs of the two containers. If the instance doesn't start correctly, run the complete reset commands listed below and then start it again.
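
To follow startup progress, you can tail the logs of both containers with a standard docker-compose command, for example:

docker-compose logs -f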

Create a profile like this one:

spark_testing:
  target: local
  outputs:
    local:
      type: spark
      method: thrift
      host: 127.0.0.1
      port: 10000
      user: dbt
      schema: analytics
      connect_retries: 5
      connect_timeout: 60
      retry_all: true
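
Once both containers are up, a quick way to confirm that dbt can reach the Thrift server is dbt debug. This assumes the profile above is saved in ~/.dbt/profiles.yml and that your dbt project is configured to use the spark_testing profile:

dbt debug --target local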

Connecting to the local Spark instance:

  • The Spark UI should be available at http://localhost:4040/sqlserver/
  • The endpoint for SQL-based testing is at http://localhost:10000 and can be reached with the Hive or Spark JDBC drivers using the connection string jdbc:hive2://localhost:10000 and the default credentials dbt:dbt (see the example below)
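
For an SQL-level smoke test outside of dbt, any Hive-compatible client can be pointed at that endpoint. For example, assuming Beeline is installed locally (it is not part of the docker-compose setup):

beeline -u jdbc:hive2://localhost:10000 -n dbt -p dbt -e "SHOW DATABASES;"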

Note that the Hive metastore data is persisted under ./.hive-metastore/, and the Spark-produced data under ./.spark-warehouse/. To completely reset your environment, run the following:

docker-compose down
rm -rf ./.hive-metastore/
rm -rf ./.spark-warehouse/

Additional Configuration for macOS

If installing on macOS, use Homebrew to install the required dependencies.

brew install unixodbc

Contribute



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
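
In most cases you do not need to download these files by hand; the package can be installed from PyPI with pip:

pip install in_dbt_spark==1.9.4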

Source Distribution

in_dbt_spark-1.9.4.tar.gz (102.9 kB)

Uploaded Source

Built Distribution

in_dbt_spark-1.9.4-py3-none-any.whl (92.9 kB)

Uploaded Python 3

File details

Details for the file in_dbt_spark-1.9.4.tar.gz.

File metadata

  • Download URL: in_dbt_spark-1.9.4.tar.gz
  • Upload date:
  • Size: 102.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.6

File hashes

Hashes for in_dbt_spark-1.9.4.tar.gz

  • SHA256: 63ae3e08d4ad4eaa74e08a11bb40f5636dbc50cfa3d6f646a9ebff3c47f3edcb
  • MD5: 264c41e087d76bd5879f1fca72ab75b5
  • BLAKE2b-256: 518283adea5f208f4ee54c48607909097523780ab400306c9ac9586d6a86215d

See more details on using hashes here.
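
To verify a downloaded source distribution against the hashes above, assuming a Unix-like environment with sha256sum available:

pip download --no-deps --no-binary :all: in_dbt_spark==1.9.4
sha256sum in_dbt_spark-1.9.4.tar.gz

The printed digest should match the SHA256 value listed above.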

File details

Details for the file in_dbt_spark-1.9.4-py3-none-any.whl.

File metadata

  • Download URL: in_dbt_spark-1.9.4-py3-none-any.whl
  • Upload date:
  • Size: 92.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.6

File hashes

Hashes for in_dbt_spark-1.9.4-py3-none-any.whl

  • SHA256: 2af9c0326b6a265ca880c8b144f54a5a75a81e42e7c389dfaee73f0231d20e07
  • MD5: 718c566520faa690d0cae1e993a9ec1c
  • BLAKE2b-256: 8b1c0f362f44464843a1d8468dc5515953fe444897814ef569cc2731b4e870a3

See more details on using hashes here.
