
Extracts information from different systems and converts it into dbt exposures


dbt exposures crawler


Automates the creation of dbt exposures from different sources. Currently, only Tableau workbooks using Snowflake SQL are supported as a source.

This project is in an ALPHA stage. Internal and external APIs might change between minor versions.

Please reach out if you try this at your own organization. Feedback is very much appreciated, and we would love to hear whether you ran into any issues setting it up.

Version compatibility

This project requires Python 3.8+. The latest published version of this package has been tested internally (at Voi) with dbt 1.x, Tableau Server 2023.1 and the Snowflake SQL dialect.

Here is a summary of which dbt and Tableau versions have been tested internally with recent versions of this package; other dbt and Tableau versions may still work.

dbt-exposures-crawler   dbt version   Tableau version                  SQL dialect
0.1.4 - 0.1.5           1.1 - 1.4     Tableau Server 2022.1 - 2023.1   Snowflake

Installation

You can install the latest version of this package from PyPI by running the command below.

$ pip install dbt-exposures-crawler

Usage

Internally, we use this automation at Voi as part of our dbt docs release pipeline. We have a GitHub Action that does the following:

  1. Clone our dbt repository;
  2. Install dbt and our dependencies;
  3. Run dbt docs generate (it should be run against the production environment);
  4. Run this project (using the manifest.json generated from the previous command as input);
  5. Publish the generated documentation portal.

To run this project, we use:

$ python3 -m exposurescrawler.crawlers.tableau \
            --manifest-path=path/to/dbt/target/manifest.json \
            --dbt-package-name="your_dbt_package_name" \
            --tableau-ignore-projects Archive \
            --verbose

Make sure you check the .env.example file to see which environment variables must be defined.

Project motivation

dbt is an open-source tool to manage data transformations in SQL. It automatically generates a documentation portal from your project which includes a dependency lineage graph. It is possible to add external downstream dependencies to this graph (such as a dashboard on a Business Intelligence tool) through a dbt feature called exposures, which are normally defined through yaml files.

This project automates the creation of exposures by implementing crawlers to parse the metadata of downstream tools. Currently, only Tableau dashboards are supported.

A few use cases on how having exposures can help:

  • analysts working on a model can use the exposures to perform impact analysis and see which reports might be impacted by their changes;
  • report consumers can find their report in dbt, see which models it uses, and read their documentation;
  • report consumers can find which other reports are using the same models as their favorite reports.

How it works

This Python package will, in summary:

  1. Retrieve dbt models and sources from manifest.json;
  2. Extract metadata (custom SQL and table references) from Tableau workbooks using their GraphQL API;
  3. Try to find occurrences of the dbt models and sources in the Tableau SQL;
  4. Use the Tableau REST API to retrieve additional information about the workbooks (author, project, etc);
  5. Create the dbt exposures (in-memory) and write them back to the manifest.json.

More in-depth explanation:

First, you must provide the path to a dbt project manifest. The metadata and fully qualified names (database, schema and object name) are extracted from all dbt models and sources represented in the manifest. The combination of dbt models and dbt sources will from now on be referred to as dbt nodes.
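
As an illustration of this step (not this package's actual API), the sketch below builds fully qualified names from a manifest, assuming only the standard dbt manifest layout (top-level nodes and sources dictionaries):

import json

def extract_dbt_nodes(manifest_path):
    # Map each dbt unique_id to its fully qualified name (database.schema.object)
    # for every model and source found in the manifest.
    with open(manifest_path) as f:
        manifest = json.load(f)

    dbt_nodes = {}
    for unique_id, node in {**manifest["nodes"], **manifest["sources"]}.items():
        if node["resource_type"] not in ("model", "source"):
            continue  # skip tests, seeds, snapshots, etc.
        dbt_nodes[unique_id] = f'{node["database"]}.{node["schema"]}.{node["name"]}'
    return dbt_nodes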

Next, workbook metadata is extracted from Tableau using their Metadata API, including workbooks that use custom SQL queries and workbooks that don't (which are referred to in this project as "native SQL" workbooks). Note that this API is included in all Tableau licenses (i.e. it does not require the Data Management Add-on), but must be manually enabled if you host your own Tableau Server.
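
For illustration, the Metadata API can be reached with any GraphQL client. The endpoint path and authentication header below are Tableau's standard ones, but the query is simplified and the server URL and token are placeholders:

import requests

METADATA_ENDPOINT = "https://tableau.example.com/api/metadata/graphql"  # placeholder server URL

QUERY = """
{
  workbooks {
    name
    upstreamTables { name schema database { name } }
  }
}
"""

# The token is obtained from a previous sign-in call to the Tableau REST API.
response = requests.post(
    METADATA_ENDPOINT,
    json={"query": QUERY},
    headers={"X-Tableau-Auth": "<auth token>"},
)
workbooks = response.json()["data"]["workbooks"]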

The SQL from the custom SQL workbooks and the table names from the native SQL workbooks are normalized through simple heuristics, such as removing quotes and converting the custom SQL to lowercase. With the normalized SQL and normalized table names from Tableau, and the fully qualified names for the dbt nodes available, this package tries to find the occurrences of the latter in the former.
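
A minimal sketch of this matching heuristic (not the exact implementation) could look like:

import re

def normalize(sql):
    # Heuristic normalization: strip quoting characters and lowercase, so that
    # "ANALYTICS"."JAFFLE_SHOP"."CUSTOMERS" matches analytics.jaffle_shop.customers.
    return re.sub(r'["`]', "", sql).lower()

def find_referenced_nodes(raw_sql, dbt_nodes):
    # dbt_nodes maps unique_id -> fully qualified name, as extracted from the manifest
    normalized_sql = normalize(raw_sql)
    return {uid for uid, fqn in dbt_nodes.items() if fqn.lower() in normalized_sql}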

The result is a mapping of workbooks and which dbt nodes they depend on. For every workbook (with mapped dependencies available), extra metadata that was not available in the Metadata API is then retrieved from Tableau using their REST API, including when the workbook was created, when it was last updated, which folder it belongs to on Tableau, and information about its author.
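
As an aside, the same kind of metadata could be fetched with the official tableauserverclient library; this is only an illustration (the package implements its own REST client), and the server URL, token and workbook ID below are placeholders:

import tableauserverclient as TSC

auth = TSC.PersonalAccessTokenAuth("<token name>", "<token value>", site_id="<site>")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    workbook = server.workbooks.get_by_id("<workbook luid>")
    # Metadata not available through the Metadata API: author, project, timestamps
    print(workbook.owner_id, workbook.project_name, workbook.created_at, workbook.updated_at)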

As a final step, the information above is written back into the provided manifest.json in the form of exposures. Note that instead of generating .yaml files on disk for each exposure, the exposures are written directly into the manifest.json, which can then be used to publish the dbt documentation portal.
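
A simplified sketch of this final step, with the exposure attributes trimmed down (the real dbt exposure schema has more fields):

import json

def add_exposure(manifest_path, package_name, name, url, owner_name, depends_on_node_ids):
    # Append one exposure directly to the manifest instead of writing a .yml file.
    with open(manifest_path) as f:
        manifest = json.load(f)

    manifest.setdefault("exposures", {})[f"exposure.{package_name}.{name}"] = {
        "name": name,
        "resource_type": "exposure",
        "type": "dashboard",
        "url": url,
        "owner": {"name": owner_name},
        "depends_on": {"nodes": list(depends_on_node_ids)},
    }

    with open(manifest_path, "w") as f:
        json.dump(manifest, f)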

Example

To better understand how this package works, let's take as an example the jaffle_shop dbt sample project. It has, among other models, a customers and an orders model.

Now suppose that your company has 4 workbooks on Tableau:

  • Customers workbook: accesses the customers dbt model through custom SQL;
  • Company KPIs workbook: accesses both models through custom SQL;
  • Orders workbook: accesses the orders model without custom SQL;
  • Unrelated workbook: a workbook that does not use the dbt project but instead has a static data source.

When running this project, you would get the following console output:

[Screenshot: CLI example]

The manifest.json that you provided would have 3 new exposures added to it, such as:

[Screenshot: modified manifest]

Those exposures can then be visualized through your dbt documentation portal, either by finding which exposures are downstream dependencies of a certain model:

[Screenshot: models in the documentation portal]

Or by doing the inverse: starting from an exposure, finding which models it uses:

[Screenshot: exposures in the documentation portal]

This example has been taken from the integration tests available in this project. You can read more in the Testing section below.

Features, assumptions and limitations

  • A folder name can be provided to ignore workbooks that belong to it. For example, if you have a folder called Archive on Tableau, you can pass --tableau-ignore-projects Archive to ignore all workbooks that belong to it;
  • For now, only Tableau workbooks (and not published data sources) are supported. Also, only Snowflake SQL is currently supported. Custom SQL must use Snowflake fully qualified names (i.e. database.schema.object);
  • Workbooks that are created under Tableau's Personal spaces are ignored (since they are usually neither governed nor production-ready).

Related projects

If you use Metabase, the dbt-metabase open-source project offers similar functionality to this project, but extracting exposures from Metabase dashboards instead of Tableau.

Development

Clone the repository and install it in editable mode:

$ pip install -e .

Before opening a pull request, make sure you run:

  • make lint: runs mypy, black and flake8;
  • make test: runs all tests.

Architecture

The entry points for the crawlers live in the crawlers module. For now, only Tableau is supported.

The tableau module contains all API clients (REST and GraphQL) and models.

The dbt module contains a model for representing a dbt exposure and utilities for parsing, interacting with, and saving dbt manifests.

Finally, the utils module has functions for logging and string parsing.

Testing

For the integration tests, we use a sample manifest.json as a fixture. It was manually generated from jaffle_shop, an official dbt sample project. The following steps can be used to recreate it or to generate a new manifest on a more recent dbt version.

First, the dbt sample project should be cloned and dbt needs to be installed:

$ git clone https://github.com/fishtown-analytics/jaffle_shop
$ cd jaffle_shop
$ pipenv shell
$ pip install dbt-snowflake==1.4.4

Then, a new profiles.yml should be created in the same folder with the following configuration:

jaffle_shop:
  target: dev
  outputs:
    dev:
      type: snowflake
      database: sample_db
      account: ...
      user: ...
      private_key_path: ...
      private_key_passphrase: ...
      warehouse: ...
      role: ...
      schema: ...

Afterward, the project can be compiled by running:

$ dbt compile --target dev

The generated manifest.json should then be prettified and copied to the tests/_fixtures folder in this repository. You can also manually remove the macros entries from the file, since they are not needed and they make the file much larger and harder to inspect manually:

$ cat target/manifest.json | jq > $PROJECT_ROOT/tests/_fixtures/manifest.json

Future ideas

  • Allow filters to be passed, e.g. only include Tableau workbooks with certain tags;
  • Add support for Tableau published data sources.

Contributing

We are open to and would love to have contributions, both in the form of pull requests and in the form of ideas and feedback. Don't hesitate to create an issue on this repository if you are trying this project in your organization or have anything to share.

Release

There is a GitHub Action that will trigger a release of this package on PyPI based on releases created on GitHub. Steps:

  • Loosely follow semantic versioning
  • Remember to prefix the tag name with v
  • Use the tag name as the release title on GitHub
  • Use the auto-generated release notes from GitHub
  • Append a link at the end of the release notes to the released version on PyPI

License

This project is licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0.
