Release for LinkedIn's changes to dbt-spark.

Project description

dbt

dbt enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.

dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.

dbt-spark

dbt-spark enables dbt to work with Apache Spark. For more information on using dbt with Spark, consult the docs.
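
A minimal install sketch, assuming the adapter is pulled from PyPI under the package name shown in the distribution files below:

pip install in_dbt_spark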

Getting started

Review the repository README.md, as most of the information there also applies to dbt-spark.

Running locally

A docker-compose environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend. Note: dbt-spark now supports Spark 3.3.2.

The following command starts two docker containers:

docker-compose up -d

It will take a bit of time for the instance to start; you can check the logs of the two containers. If the instance doesn't start correctly, run the complete reset commands listed below and then try starting again.
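
For example, container status and logs can be checked with standard docker-compose commands (the service names depend on the repository's docker-compose.yml, so the log command below simply follows all services):

docker-compose ps
docker-compose logs -f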

Create a profile like this one:

spark_testing:
  target: local
  outputs:
    local:
      type: spark
      method: thrift
      host: 127.0.0.1
      port: 10000
      user: dbt
      schema: analytics
      connect_retries: 5
      connect_timeout: 60
      retry_all: true
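
With the profile above saved in ~/.dbt/profiles.yml, the connection can be sanity-checked with dbt's built-in debug command; this sketch assumes it is run from inside a dbt project directory:

dbt debug --target local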

Connecting to the local Spark instance:

  • The Spark UI should be available at http://localhost:4040/sqlserver/
  • The endpoint for SQL-based testing is at http://localhost:10000 and can be reached with the Hive or Spark JDBC drivers using the connection string jdbc:hive2://localhost:10000 and the default credentials dbt:dbt (see the beeline sketch below)
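
A minimal connection sketch using beeline, assuming the Hive beeline client is installed locally; the host, port, and dbt:dbt credentials come from the defaults listed above:

beeline -u jdbc:hive2://localhost:10000 -n dbt -p dbt -e "SHOW DATABASES;"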

Note that the Hive metastore data is persisted under ./.hive-metastore/, and the Spark-produced data under ./.spark-warehouse/. To completely reset your environment, run the following:

docker-compose down
rm -rf ./.hive-metastore/
rm -rf ./.spark-warehouse/

Additional Configuration for macOS

If installing on macOS, use Homebrew to install the required dependencies.

brew install unixodbc
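
Upstream dbt-spark also ships Python extras for the ODBC connection method; whether this fork keeps the same extra name is an assumption, so the line below is only a sketch:

pip install "in_dbt_spark[ODBC]"   # assumes the upstream dbt-spark "ODBC" extra is preserved in this fork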

Contribute

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

in_dbt_spark-1.9.2.tar.gz (94.9 kB)

Uploaded Source

Built Distribution

in_dbt_spark-1.9.2-py3-none-any.whl (91.1 kB)

Uploaded Python 3

File details

Details for the file in_dbt_spark-1.9.2.tar.gz.

File metadata

  • Download URL: in_dbt_spark-1.9.2.tar.gz
  • Upload date:
  • Size: 94.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.6

File hashes

Hashes for in_dbt_spark-1.9.2.tar.gz

  Algorithm     Hash digest
  SHA256        e2b5d71f78f2e5e7019ffc6bbe89aabb3ebcbaff1d707d756cd00df20cc775db
  MD5           95856afab061a51f9a2f36e39728316f
  BLAKE2b-256   7fe3b8c9a308e036bd475dc12e64584c24373e912503e069d34b68bc7ba290a8

See the pip documentation for more details on using hashes.
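
For instance, a downloaded source archive can be checked against the SHA256 digest above with standard tooling (the filename matches the source distribution listed on this page):

sha256sum in_dbt_spark-1.9.2.tar.gz
# the output should match the SHA256 value in the table above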

File details

Details for the file in_dbt_spark-1.9.2-py3-none-any.whl.

File metadata

  • Download URL: in_dbt_spark-1.9.2-py3-none-any.whl
  • Upload date:
  • Size: 91.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.6

File hashes

Hashes for in_dbt_spark-1.9.2-py3-none-any.whl

  Algorithm     Hash digest
  SHA256        b3ed32913a85ee0b754740c50515df6673921154767492d17f5bf1efa5245f8c
  MD5           d8145943389d4878bb89bd72471b1c50
  BLAKE2b-256   f72ed04a6ed42ab47e1aba7750e5f75a5c10203761e417cbbe2276b529852bfc

See the pip documentation for more details on using hashes.
