dbt-bouncer

Configure and enforce conventions for your dbt project.


How to use

  1. Generate dbt artifacts by running a dbt command (see the examples after this list).
  2. Create a dbt-bouncer.yml config file, details here.
  3. Run dbt-bouncer to validate that your conventions are being maintained. You can use GitHub Actions, Docker, a .pex file, or Python to run dbt-bouncer.
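
For example, any of the following commands will generate artifacts, depending on which artifacts your checks need (a sketch; exact artifact output can vary by dbt version):

dbt parse          # writes manifest.json
dbt docs generate  # writes manifest.json and catalog.json
dbt build          # writes manifest.json and run_results.json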

GitHub Actions

steps:
    ...

    - uses: godatadriven/dbt-bouncer@v0
      with:
        config-file: ./<PATH_TO_CONFIG_FILE>
        output-file: results.json # optional, default does not save a results file
        send-pr-comment: true # optional, defaults to true

    ...
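
In context, a complete workflow might look like this (a sketch rather than an official example; the workflow name, trigger, and config path are illustrative, and the steps that set up dbt and generate artifacts are elided):

name: dbt-bouncer
on: pull_request

jobs:
  dbt-bouncer:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # ... set up dbt and generate artifacts here, e.g. `dbt parse` ...

      - uses: godatadriven/dbt-bouncer@v0
        with:
          config-file: ./dbt-bouncer.yml # hypothetical path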

Docker

Don't use GitHub Actions? You can still use dbt-bouncer via Docker:

docker run --rm \
    --volume "$PWD":/app \
    ghcr.io/godatadriven/dbt-bouncer:vX.X.X \
    --config-file /app/<PATH_TO_CONFIG_FILE>
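
For instance, assuming the config file from step 2 sits at the root of your dbt project as dbt-bouncer.yml: the --volume flag mounts the project, including its target directory, at /app inside the container, so an optional results file written there persists on the host:

docker run --rm \
    --volume "$PWD":/app \
    ghcr.io/godatadriven/dbt-bouncer:vX.X.X \
    --config-file /app/dbt-bouncer.yml \
    --output-file /app/results.json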

Pex

You can also run the .pex (Python EXecutable) artifact directly once you have a Python interpreter (3.8 to 3.12) installed:

wget https://github.com/godatadriven/dbt-bouncer/releases/download/vX.X.X/dbt-bouncer.pex -O dbt-bouncer.pex

python dbt-bouncer.pex --config-file $PWD/<PATH_TO_CONFIG_FILE>
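
Because .pex files ship with a shebang line, on most Unix-like systems you can also mark the file executable and run it directly (a general property of the pex format rather than anything specific to dbt-bouncer):

chmod +x dbt-bouncer.pex
./dbt-bouncer.pex --config-file $PWD/<PATH_TO_CONFIG_FILE>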

Python

Install from pypi.org:

pip install dbt-bouncer

Run:

dbt-bouncer --config-file <PATH_TO_CONFIG_FILE>
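
For example, a minimal end-to-end session run from a dbt project root (the directory and config file names are illustrative; the artifacts from step 1 are assumed to be in ./target):

pip install dbt-bouncer
cd my_dbt_project # hypothetical dbt project directory
dbt-bouncer --config-file dbt-bouncer.yml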

Config file

dbt-bouncer requires a config file to be provided. This file determines which checks are run. Here is an example config file:

dbt_artifacts_dir: target # [Optional] Directory where the dbt artifacts exist, generally the `target` directory inside a dbt project. Defaults to `./target`.

manifest_checks:
  - name: check_macro_name_matches_file_name
  - name: check_model_names
    include: ^staging
    model_name_pattern: ^stg_
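
To illustrate what this configuration enforces, here is a hypothetical set of model files, assuming model paths are evaluated relative to the models directory:

staging/stg_customers.sql # matched by include ^staging; passes model_name_pattern ^stg_
staging/customers.sql     # matched by include ^staging; fails model_name_pattern ^stg_
marts/dim_orders.sql      # not matched by include; check_model_names skips it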

For more example config files, see here.

Note that the config can also be passed via a pyproject.toml file:

[tool.dbt-bouncer]
dbt_artifacts_dir = "target"

[[tool.dbt-bouncer.manifest_checks]]
name = "check_macro_name_matches_file_name"

[[tool.dbt-bouncer.manifest_checks]]
name = "check_model_names"
include = "^staging"
model_name_pattern = "^stg_"

Checks


Catalog checks

These checks require the following artifacts to be present:

  • catalog.json
  • manifest.json

Check categories:

  • Columns
  • Sources

Manifest checks

These checks require the following artifact to be present:

  • manifest.json

Check categories:

  • Exposures
  • Lineage
  • Macros
  • Metadata
  • Models
  • Sources
  • Tests

Run results checks

These checks require the following artifacts to be present:

  • manifest.json
  • run_results.json

Check categories:

  • Results

Saving results to a file

It is possible to save the outcome of a run, and associated metadata, to a .json file. This file will contain all the checks that were run, both failed and successful. This can be achieved by using the --output-file flag:

dbt-bouncer --config-file <PATH_TO_CONFIG_FILE> --output-file <PATH_TO_OUTPUT_FILE>
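
For example, to pretty-print a saved results file (a generic sketch assuming only that the file is valid JSON; the exact schema is not reproduced here):

python -m json.tool results.json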

Reporting bugs and contributing code

  • Want to report a bug or request a feature? Let us know by opening an issue.
  • Want to help us build `dbt-bouncer`? Check out the Contributing Guide.

Code of Conduct

Everyone interacting in dbt-bouncer's codebase, issue trackers, chat rooms, and mailing lists is expected to follow the Code of Conduct.
