
Create and publish Tableau Hyper files from Apache Spark DataFrames and Spark SQL.

Project description

hyperleaup

Pronounced "hyper-loop". Create and publish Tableau Hyper files from Apache Spark DataFrames or Spark SQL.

Why are data extracts so slow?

Tableau Data Extracts can take hours to create and publish to a Tableau Server. Sometimes this means waiting around most of the day for the data extract to complete. What a waste of time! In addition, the Tableau Backgrounder (the Tableau Server job scheduler) becomes a single point of failure as more refresh jobs are scheduled and long-running jobs exhaust the server's resources.

[Figure: current data extract workflow]

How hyperleaup helps

Rather than pulling data from the source over an ODBC connection, hyperleaup can write data directly to a Hyper file and publish final Hyper files to a Tableau Server. Best of all, you can take advantage of all the benefits of Apache Spark + Tableau Hyper API:

  • perform efficient CDC upserts
  • distributed read/write/transformations from multiple sources
  • execute SQL directly

hyperleaup allows you to create repeatable data extracts that can be scheduled to refresh on a regular cadence, or to run as the final step of an ETL pipeline, e.g. refreshing a data extract with the latest CDC.
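Since Hyper files can be created from Spark DataFrames as well as Spark SQL, an extract can also be built from any DataFrame you have already transformed. A minimal sketch, assuming a running Spark session and that HyperFile accepts a df argument in place of sql (see the demo notebooks in this repo for the exact signature):

from pyspark.sql import SparkSession
from hyperleaup import HyperFile

spark = SparkSession.builder.getOrCreate()

# Build a DataFrame from any Spark source (Delta, Parquet, JDBC, ...)
df = (spark.read.table("transaction_history")
           .filter("action_date > '2015-01-01'"))

# Create the Hyper file from the DataFrame instead of a SQL string
hf = HyperFile(name="transaction_history", df=df, is_dbfs_enabled=True)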

Getting Started

A list of usage examples is available in the demo folder of this repo as a Databricks Notebook Archive (DBC).

Example usage

The following code snippet creates a Tableau Hyper file from a Spark SQL statement and publishes it as a datasource to a Tableau Server.

from hyperleaup import HyperFile

# Step 1: Create a Hyper File from Spark SQL
query = """
select *
  from transaction_history
 where action_date > '2015-01-01'
"""

hf = HyperFile(name="transaction_history", sql=query, is_dbfs_enabled=True)

# Step 2: Publish Hyper File to a Tableau Server
hf.publish(tableau_server_url,
           username,
           password,
           site_name,
           project_name,
           datasource_name)

# Step 3: Append new data
new_data = """
select *
  from transaction_history
 where action_date > last_publish_date
"""
hf.append(sql=new_data)

Hyper File Options

An optional HyperFileConfig can be passed to HyperFile to change its default behaviors.

  • timestamp_with_timezone:
    • If True, use the timestamptz datatype when writing the Hyper file. Recommended when using timestamp values with the Parquet create mode. (default=False)
  • allow_nulls:
    • If True, skip the default behavior of replacing null numeric and string values with non-null stand-ins. (default=False)
  • convert_decimal_precision:
    • If True, automatically reduce decimals with precision greater than 18 down to 18. This carries a risk of data truncation. (default=False)

Example using configs

from hyperleaup import HyperFile, HyperFileConfig

hf_config = HyperFileConfig(
              timestamp_with_timezone=True, 
              allow_nulls=False,
              convert_decimal_precision=False)

hf = HyperFile(name="transaction_history", sql=query, is_dbfs_enabled=True, config=hf_config)
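To see what the convert_decimal_precision option risks, here is a standalone sketch (a hypothetical helper, not hyperleaup's implementation) of reducing a decimal's total precision to 18 digits by rounding away the least-significant digits:

```python
from decimal import Decimal, ROUND_HALF_UP

def reduce_precision(value: Decimal, max_precision: int = 18) -> Decimal:
    """Round away least-significant digits so total precision <= max_precision."""
    digits = value.as_tuple().digits
    exponent = value.as_tuple().exponent
    excess = len(digits) - max_precision
    if excess <= 0:
        return value  # already fits
    # Quantize to an exponent `excess` places coarser than the original
    target = Decimal(1).scaleb(exponent + excess)
    return value.quantize(target, rounding=ROUND_HALF_UP)

# A 23-digit value is rounded to 18 significant digits -- the trailing
# digits are lost, which is the truncation risk noted above.
print(reduce_precision(Decimal("1234567890.1234567890123")))
# 1234567890.12345679
```

Values whose total precision already fits in 18 digits pass through unchanged; only wider values lose trailing digits.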

Legal Information

This software is provided as-is and is not officially supported by Databricks through customer technical support channels. Support, questions, and feature requests can be submitted through the Issues page of this repo. Please understand that issues with the use of this code will not be answered or investigated by Databricks Support.

Core Contribution team

  • Lead Developer: Will Girten, Lead SSA @Databricks
  • Puru Shrestha, Sr. BI Developer

Project Support

Please note that all projects in the /databrickslabs github account are provided for your exploration only, and are not formally supported by Databricks with Service Level Agreements (SLAs).
They are provided AS-IS and we do not make any guarantees of any kind.
Please do not submit a support ticket relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as GitHub Issues on the Repo.
They will be reviewed as time permits, but there are no formal SLAs for support.

Building the Project

To build the project:

python3 -m build

Running Pytests

To run tests on the project:

cd tests
python test_hyper_file.py
python test_creator.py

Download files

Download the file for your platform.

Source Distribution

hyperleaup-0.1.1.tar.gz (980.6 kB)

Built Distribution

hyperleaup-0.1.1-py3-none-any.whl (16.6 kB)

File details

Details for the file hyperleaup-0.1.1.tar.gz.

File metadata

  • Download URL: hyperleaup-0.1.1.tar.gz
  • Upload date:
  • Size: 980.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.9

File hashes

Hashes for hyperleaup-0.1.1.tar.gz

  • SHA256: 9619bae0f3acb1a983118279078ea769ec3ad3349ad681b7f9419e7fa239c552
  • MD5: 94b3a0c11442e265be303268e7319d09
  • BLAKE2b-256: a9c9b9ce4b1cc7f2ccdcd9421ccb42bf8033793a98804a0547fe1e91ae3a80e1


File details

Details for the file hyperleaup-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: hyperleaup-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 16.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.9

File hashes

Hashes for hyperleaup-0.1.1-py3-none-any.whl

  • SHA256: b613ec1ff7fb502d2b098a81b2bbcb9be6df6ce45d55914129e89683aee48315
  • MD5: 46a7fc1bc3b2fd02b5b0c4fe60588981
  • BLAKE2b-256: 32897ae4b96896545a892bb6bc935c45b5cec4514d81c7260eecdb9ad578e8b8

