Synchronization tool for dbt models to Cube.js schemas and BI tools

dbt-cube-sync

A powerful synchronization tool that creates a seamless pipeline from dbt models to Cube.js schemas and BI tools (Superset, Tableau, PowerBI).

Features

  • 🔄 dbt → Cube.js: Auto-generate Cube.js schemas from dbt models with metrics
  • 🗃️ Flexible Data Type Source: Get column types from the catalog OR directly from the database via SQLAlchemy
  • 🎯 Model Filtering: Process specific models instead of all models
  • 📊 Cube.js → BI Tools: Sync schemas to multiple BI platforms
  • 🏗️ Extensible Architecture: Plugin-based connector system for easy BI tool integration
  • 🐳 Docker Support: Containerized execution with orchestration support
  • 🎯 CLI Interface: Simple command-line tools for automation

Supported BI Tools

  • ✅ Apache Superset - Full implementation
  • 🚧 Tableau - Placeholder (coming soon)
  • 🚧 PowerBI - Placeholder (coming soon)

Installation

Using Poetry (Development)

cd dbt-cube-sync
poetry install
poetry run dbt-cube-sync --help

Database Drivers (for SQLAlchemy URI feature)

If you want to use the --sqlalchemy-uri option to fetch column types directly from your database, you'll need to install the appropriate database driver:

# PostgreSQL
poetry add psycopg2-binary

# MySQL
poetry add pymysql

# Snowflake
poetry add snowflake-sqlalchemy

# BigQuery
poetry add sqlalchemy-bigquery

# Redshift
poetry add sqlalchemy-redshift
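Under the hood, the `--sqlalchemy-uri` path relies on SQLAlchemy's inspection API to read column metadata directly from the database, which is what makes `catalog.json` optional. A minimal sketch of the idea, using an in-memory SQLite database for illustration (the actual `db_inspector.py` may differ):

```python
from sqlalchemy import create_engine, inspect, text

# In-memory SQLite stands in for your warehouse; any SQLAlchemy URI works here.
engine = create_engine("sqlite:///:memory:")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE orders (id INTEGER, amount NUMERIC, placed_at TEXT)"))

# inspect() reads column names and types straight from the database.
inspector = inspect(engine)
columns = {col["name"]: str(col["type"]) for col in inspector.get_columns("orders")}
print(columns)  # {'id': 'INTEGER', 'amount': 'NUMERIC', 'placed_at': 'TEXT'}
```

This is why the matching driver must be installed: SQLAlchemy needs a dialect package to open the connection before it can inspect anything.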

Using Docker

docker build -t dbt-cube-sync .
docker run --rm dbt-cube-sync --help

Quick Start

1. Generate Cube.js Schemas from dbt

Option A: Using catalog file (traditional method)

dbt-cube-sync dbt-to-cube \
  --manifest ./target/manifest.json \
  --catalog ./target/catalog.json \
  --output ./cube_output

Option B: Using database connection (no catalog needed)

dbt-cube-sync dbt-to-cube \
  --manifest ./target/manifest.json \
  --sqlalchemy-uri postgresql://user:password@localhost:5432/mydb \
  --output ./cube_output

Option C: Filter specific models

dbt-cube-sync dbt-to-cube \
  --manifest ./target/manifest.json \
  --sqlalchemy-uri postgresql://user:password@localhost:5432/mydb \
  --models orders,customers,products \
  --output ./cube_output

2. Sync to BI Tool (Optional)

# Sync to Superset
dbt-cube-sync cube-to-bi superset \
  --cube-files ./cube_output \
  --url http://localhost:8088 \
  --username admin \
  --password admin \
  --cube-connection-name Cube

Configuration

Sample Configuration (sync-config.yaml)

connectors:
  superset:
    type: superset
    url: http://localhost:8088
    username: admin
    password: admin
    database_name: Cube
    
  tableau:
    type: tableau
    url: https://your-tableau-server.com
    username: your-username
    password: your-password
    
  powerbi:
    type: powerbi
    # PowerBI specific configuration

CLI Commands

Quick Reference

Command       Description
sync-all      Ultimate command - incremental sync: dbt → Cube.js → Superset → RAG
dbt-to-cube   Generate Cube.js schemas from dbt models (with incremental support)
cube-to-bi    Sync Cube.js schemas to BI tools (Superset, Tableau, PowerBI)

sync-all (Recommended)

Ultimate incremental sync command - handles the complete pipeline with state tracking.

# Basic incremental sync (Cube.js only)
dbt-cube-sync sync-all -m manifest.json -c catalog.json -o ./cube_output

# Full pipeline: dbt → Cube.js → Superset
dbt-cube-sync sync-all -m manifest.json -c catalog.json -o ./cube_output \
  --superset-url http://localhost:8088 \
  --superset-username admin \
  --superset-password admin

# Full pipeline: dbt → Cube.js → Superset → RAG embeddings
dbt-cube-sync sync-all -m manifest.json -c catalog.json -o ./cube_output \
  --superset-url http://localhost:8088 \
  --superset-username admin \
  --superset-password admin \
  --rag-api-url http://localhost:8000

# Force full rebuild (ignore state)
dbt-cube-sync sync-all -m manifest.json -c catalog.json -o ./cube_output --force-full-sync

Options:

Option                   Required  Description
--manifest, -m           Yes       Path to dbt manifest.json
--catalog, -c            No*       Path to dbt catalog.json
--sqlalchemy-uri, -s     No*       Database URI for column types
--output, -o             Yes       Output directory for Cube.js files
--state-path             No        State file path (default: .dbt-cube-sync-state.json)
--force-full-sync        No        Force full rebuild, ignore state
--superset-url           No        Superset URL
--superset-username      No        Superset username
--superset-password      No        Superset password
--cube-connection-name   No        Cube database name in Superset (default: Cube)
--rag-api-url            No        RAG API URL for embedding updates

*Either --catalog or --sqlalchemy-uri is required.

How Incremental Sync Works:

  1. Reads state file (.dbt-cube-sync-state.json) with model checksums
  2. Compares against current manifest to detect changes
  3. Only processes added or modified models
  4. Deletes Cube.js files for removed models
  5. Updates state file with new checksums
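The change-detection step can be sketched as a checksum diff between the saved state and the current manifest. This is a simplified illustration of the logic, not the tool's actual code:

```python
def diff_models(previous_state, current_manifest):
    """Classify models as added, changed, or removed by comparing checksums.

    previous_state:   {model_id: checksum} loaded from .dbt-cube-sync-state.json
    current_manifest: {model_id: checksum} computed from the current manifest.json
    """
    added = [m for m in current_manifest if m not in previous_state]
    changed = [
        m for m in current_manifest
        if m in previous_state and previous_state[m] != current_manifest[m]
    ]
    removed = [m for m in previous_state if m not in current_manifest]
    return added, changed, removed

prev = {"model.project.users": "abc123", "model.project.orders": "def456"}
curr = {"model.project.users": "abc999", "model.project.products": "fff000"}

added, changed, removed = diff_models(prev, curr)
print(added)    # ['model.project.products']  -> generate new Cube.js file
print(changed)  # ['model.project.users']     -> regenerate Cube.js file
print(removed)  # ['model.project.orders']    -> delete stale Cube.js file
```

Only the `added` and `changed` sets are regenerated; `removed` models have their output files deleted, and the state file is then rewritten with the new checksums.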

dbt-to-cube

Generate Cube.js schema files from dbt models with incremental support.

Options:

  • --manifest / -m: Path to dbt manifest.json file (required)
  • --catalog / -c: Path to dbt catalog.json file
  • --sqlalchemy-uri / -s: SQLAlchemy database URI for fetching column types
  • --models: Comma-separated list of model names to process
  • --output / -o: Output directory for Cube.js files (required)
  • --template-dir / -t: Directory containing Cube.js templates (default: ./cube/templates)
  • --state-path: State file for incremental sync (default: .dbt-cube-sync-state.json)
  • --force-full-sync: Force full regeneration, ignore cached state
  • --no-state: Disable state tracking (legacy behavior)

Examples:

# Incremental sync (default)
dbt-cube-sync dbt-to-cube -m manifest.json -c catalog.json -o output/

# Force full rebuild
dbt-cube-sync dbt-to-cube -m manifest.json -c catalog.json -o output/ --force-full-sync

# Using database connection (no catalog needed)
dbt-cube-sync dbt-to-cube -m manifest.json -s postgresql://user:pass@localhost/db -o output/

# Filter specific models
dbt-cube-sync dbt-to-cube -m manifest.json -c catalog.json -o output/ --models users,orders

cube-to-bi

Sync Cube.js schemas to BI tool datasets.

Arguments:

  • bi_tool: BI tool type (superset, tableau, powerbi)

Options:

  • --cube-files / -c: Directory containing Cube.js files (required)
  • --url / -u: BI tool URL (required)
  • --username / -n: BI tool username (required)
  • --password / -p: BI tool password (required)
  • --cube-connection-name / -d: Name of Cube database connection in BI tool (default: Cube)

Example:

dbt-cube-sync cube-to-bi superset -c cube_output/ -u http://localhost:8088 -n admin -p admin -d Cube

State File

The state file (.dbt-cube-sync-state.json) tracks:

{
  "version": "1.0",
  "last_sync_timestamp": "2024-01-15T10:30:00Z",
  "manifest_path": "/path/to/manifest.json",
  "models": {
    "model.project.users": {
      "checksum": "abc123...",
      "has_metrics": true,
      "last_generated": "2024-01-15T10:30:00Z",
      "output_file": "./cube_output/Users.js"
    }
  }
}

Delete this file to force a full rebuild, or use --force-full-sync.

Architecture

dbt models (with metrics)
    ↓
dbt-cube-sync dbt-to-cube
    ↓
Cube.js schemas
    ↓
dbt-cube-sync cube-to-bi [connector]
    ↓
BI Tool Datasets (Superset/Tableau/PowerBI)

Project Structure

dbt-cube-sync/
├── dbt_cube_sync/
│   ├── cli.py                 # CLI interface
│   ├── config.py              # Configuration management
│   ├── core/
│   │   ├── dbt_parser.py      # dbt manifest parser
│   │   ├── db_inspector.py    # Database column type inspector (SQLAlchemy)
│   │   ├── cube_generator.py  # Cube.js generator
│   │   └── models.py          # Pydantic data models
│   └── connectors/
│       ├── base.py            # Abstract base connector
│       ├── superset.py        # Superset implementation
│       ├── tableau.py         # Tableau placeholder
│       └── powerbi.py         # PowerBI placeholder
├── Dockerfile                 # Container definition
├── pyproject.toml             # Poetry configuration
└── README.md

Adding New BI Connectors

  1. Create a new connector class inheriting from BaseConnector
  2. Implement the required abstract methods
  3. Register the connector using ConnectorRegistry.register()

Example:

from .base import BaseConnector, ConnectorRegistry

class MyBIConnector(BaseConnector):
    def _validate_config(self):
        # Validation logic
        pass
    
    def connect(self):
        # Connection logic
        pass
    
    def sync_cube_schemas(self, cube_dir):
        # Sync implementation
        pass

# Register the connector
ConnectorRegistry.register('mybi', MyBIConnector)
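For reference, the registry pattern behind `ConnectorRegistry.register()` can be sketched in a few lines. This is a simplified stand-in to show the mechanism, and the `create()` lookup method here is hypothetical, not the package's real API:

```python
class ConnectorRegistry:
    """Simplified stand-in for the plugin registry (illustration only)."""
    _connectors = {}

    @classmethod
    def register(cls, name, connector_cls):
        # Map a BI tool name (the cube-to-bi argument) to its connector class.
        cls._connectors[name] = connector_cls

    @classmethod
    def create(cls, name, **config):
        # Hypothetical factory: look up the class and instantiate it.
        return cls._connectors[name](**config)

class MyBIConnector:
    def __init__(self, url):
        self.url = url

ConnectorRegistry.register("mybi", MyBIConnector)
conn = ConnectorRegistry.create("mybi", url="http://localhost:9000")
print(type(conn).__name__)  # MyBIConnector
```

Because the CLI resolves the `bi_tool` argument through the registry, a newly registered connector becomes usable without touching the core pipeline.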

Docker Integration

The tool is designed to work in containerized environments with proper dependency orchestration:

  1. dbt docs: Runs dbt build then serves documentation
  2. dbt-cube-sync: Runs sync pipeline after dbt and Cube.js are ready
  3. BI Tools: Receive synced datasets after sync completes

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Implement your changes
  4. Add tests if applicable
  5. Submit a pull request

License

MIT License - see LICENSE file for details.
