Synchronization tool for dbt models to Cube.js schemas and BI tools

Project description

dbt-cube-sync

A synchronization tool that builds a pipeline from dbt models to Cube.js schemas and on to BI tools (Superset, Tableau, PowerBI).

Features

  • 🔄 dbt → Cube.js: Auto-generate Cube.js schemas from dbt models with metrics
  • 🗃️ Flexible Data Type Source: Get column types from the catalog OR directly from the database via SQLAlchemy
  • 🎯 Model Filtering: Process specific models instead of all models
  • 📊 Cube.js → BI Tools: Sync schemas to multiple BI platforms
  • 🏗️ Extensible Architecture: Plugin-based connector system for easy BI tool integration
  • 🐳 Docker Support: Containerized execution with orchestration support
  • 🎯 CLI Interface: Simple command-line tools for automation

Supported BI Tools

  • ✅ Apache Superset - Full implementation
  • 🚧 Tableau - Placeholder (coming soon)
  • 🚧 PowerBI - Placeholder (coming soon)

Installation

Using Poetry (Development)

cd dbt-cube-sync
poetry install
poetry run dbt-cube-sync --help

Database Drivers (for SQLAlchemy URI feature)

If you want to use the --sqlalchemy-uri option to fetch column types directly from your database, you'll need to install the appropriate database driver:

# PostgreSQL
poetry add psycopg2-binary

# MySQL
poetry add pymysql

# Snowflake
poetry add snowflake-sqlalchemy

# BigQuery
poetry add sqlalchemy-bigquery

# Redshift
poetry add sqlalchemy-redshift

Using Docker

docker build -t dbt-cube-sync .
docker run --rm dbt-cube-sync --help

Quick Start

1. Generate Cube.js Schemas from dbt

Option A: Using catalog file (traditional method)

dbt-cube-sync dbt-to-cube \
  --manifest ./target/manifest.json \
  --catalog ./target/catalog.json \
  --output ./cube_output

Option B: Using database connection (no catalog needed)

dbt-cube-sync dbt-to-cube \
  --manifest ./target/manifest.json \
  --sqlalchemy-uri postgresql://user:password@localhost:5432/mydb \
  --output ./cube_output

Option C: Filter specific models

dbt-cube-sync dbt-to-cube \
  --manifest ./target/manifest.json \
  --sqlalchemy-uri postgresql://user:password@localhost:5432/mydb \
  --models orders,customers,products \
  --output ./cube_output

2. Sync to BI Tool (Optional)

# Sync to Superset
dbt-cube-sync cube-to-bi superset \
  --cube-files ./cube_output \
  --url http://localhost:8088 \
  --username admin \
  --password admin \
  --cube-connection-name Cube

Configuration

Sample Configuration (sync-config.yaml)

connectors:
  superset:
    type: superset
    url: http://localhost:8088
    username: admin
    password: admin
    database_name: Cube
    
  tableau:
    type: tableau
    url: https://your-tableau-server.com
    username: your-username
    password: your-password
    
  powerbi:
    type: powerbi
    # PowerBI specific configuration
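
A config like this could be consumed as sketched below, assuming it has already been parsed into a dict (e.g. with a YAML loader). The required-field sets are an assumption inferred from the sample above, not the tool's actual validation schema:

```python
# Illustrative sketch: sanity-check a parsed sync-config.yaml structure.
# The required fields per connector type are assumptions based on the sample.
REQUIRED_FIELDS = {
    "superset": {"type", "url", "username", "password", "database_name"},
    "tableau": {"type", "url", "username", "password"},
    "powerbi": {"type"},
}

def validate_connectors(config: dict) -> list[str]:
    """Return a list of human-readable problems found in the config."""
    problems = []
    connectors = config.get("connectors", {})
    if not connectors:
        problems.append("no connectors defined")
    for name, settings in connectors.items():
        expected = REQUIRED_FIELDS.get(settings.get("type", ""), {"type"})
        missing = expected - set(settings)
        if missing:
            problems.append(f"{name}: missing {sorted(missing)}")
    return problems

config = {
    "connectors": {
        "superset": {
            "type": "superset",
            "url": "http://localhost:8088",
            "username": "admin",
            "password": "admin",
            "database_name": "Cube",
        }
    }
}
print(validate_connectors(config))  # []
```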

CLI Commands

Quick Reference

| Command | Description |
| --- | --- |
| sync-all | Ultimate command - incremental sync: dbt → Cube.js → Superset → RAG |
| dbt-to-cube | Generate Cube.js schemas from dbt models (with incremental support) |
| cube-to-bi | Sync Cube.js schemas to BI tools (Superset, Tableau, PowerBI) |

sync-all (Recommended)

Ultimate incremental sync command - handles the complete pipeline with state tracking.

# Basic incremental sync (Cube.js only)
dbt-cube-sync sync-all -m manifest.json -c catalog.json -o ./cube_output

# Full pipeline: dbt → Cube.js → Superset
dbt-cube-sync sync-all -m manifest.json -c catalog.json -o ./cube_output \
  --superset-url http://localhost:8088 \
  --superset-username admin \
  --superset-password admin

# Full pipeline: dbt → Cube.js → Superset → RAG embeddings
dbt-cube-sync sync-all -m manifest.json -c catalog.json -o ./cube_output \
  --superset-url http://localhost:8088 \
  --superset-username admin \
  --superset-password admin \
  --rag-api-url http://localhost:8000

# Force full rebuild (ignore state)
dbt-cube-sync sync-all -m manifest.json -c catalog.json -o ./cube_output --force-full-sync

Options:

| Option | Required | Description |
| --- | --- | --- |
| --manifest, -m | Yes | Path to dbt manifest.json |
| --catalog, -c | No* | Path to dbt catalog.json |
| --sqlalchemy-uri, -s | No* | Database URI for column types |
| --output, -o | Yes | Output directory for Cube.js files |
| --state-path | No | State file path (default: .dbt-cube-sync-state.json) |
| --force-full-sync | No | Force full rebuild, ignore state |
| --superset-url | No | Superset URL |
| --superset-username | No | Superset username |
| --superset-password | No | Superset password |
| --cube-connection-name | No | Cube database name in Superset (default: Cube) |
| --rag-api-url | No | RAG API URL for embedding updates |

*Either --catalog or --sqlalchemy-uri is required.

How Incremental Sync Works:

  1. Reads state file (.dbt-cube-sync-state.json) with model checksums
  2. Compares against current manifest to detect changes
  3. Only processes added or modified models
  4. Deletes Cube.js files for removed models
  5. Updates state file with new checksums
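
The change-detection logic in steps 1-3 can be sketched as follows. This is an illustration of checksum-based diffing with simplified data structures, not the tool's actual state format:

```python
# Illustrative sketch of incremental change detection.
# state and manifest both map model name -> checksum.
def diff_models(state: dict, manifest: dict):
    added = [m for m in manifest if m not in state]
    modified = [m for m in manifest if m in state and state[m] != manifest[m]]
    removed = [m for m in state if m not in manifest]
    return added, modified, removed

state = {"model.project.users": "abc123", "model.project.orders": "def456"}
manifest = {"model.project.users": "abc999", "model.project.products": "aaa111"}

print(diff_models(state, manifest))
# (['model.project.products'], ['model.project.users'], ['model.project.orders'])
```

Only the added and modified models are regenerated; files for removed models are deleted.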

dbt-to-cube

Generate Cube.js schema files from dbt models with incremental support.

Options:

  • --manifest / -m: Path to dbt manifest.json file (required)
  • --catalog / -c: Path to dbt catalog.json file
  • --sqlalchemy-uri / -s: SQLAlchemy database URI for fetching column types
  • --models: Comma-separated list of model names to process
  • --output / -o: Output directory for Cube.js files (required)
  • --template-dir / -t: Directory containing Cube.js templates (default: ./cube/templates)
  • --state-path: State file for incremental sync (default: .dbt-cube-sync-state.json)
  • --force-full-sync: Force full regeneration, ignore cached state
  • --no-state: Disable state tracking (legacy behavior)

Examples:

# Incremental sync (default)
dbt-cube-sync dbt-to-cube -m manifest.json -c catalog.json -o output/

# Force full rebuild
dbt-cube-sync dbt-to-cube -m manifest.json -c catalog.json -o output/ --force-full-sync

# Using database connection (no catalog needed)
dbt-cube-sync dbt-to-cube -m manifest.json -s postgresql://user:pass@localhost/db -o output/

# Filter specific models
dbt-cube-sync dbt-to-cube -m manifest.json -c catalog.json -o output/ --models users,orders

cube-to-bi

Sync Cube.js schemas to BI tool datasets.

Arguments:

  • bi_tool: BI tool type (superset, tableau, powerbi)

Options:

  • --cube-files / -c: Directory containing Cube.js files (required)
  • --url / -u: BI tool URL (required)
  • --username / -n: BI tool username (required)
  • --password / -p: BI tool password (required)
  • --cube-connection-name / -d: Name of Cube database connection in BI tool (default: Cube)

Example:

dbt-cube-sync cube-to-bi superset -c cube_output/ -u http://localhost:8088 -n admin -p admin -d Cube

State File

The state file (.dbt-cube-sync-state.json) tracks:

{
  "version": "1.0",
  "last_sync_timestamp": "2024-01-15T10:30:00Z",
  "manifest_path": "/path/to/manifest.json",
  "models": {
    "model.project.users": {
      "checksum": "abc123...",
      "has_metrics": true,
      "last_generated": "2024-01-15T10:30:00Z",
      "output_file": "./cube_output/Users.js"
    }
  }
}

Delete this file to force a full rebuild, or use --force-full-sync.
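
For example, the tracked models can be inspected with the stdlib alone; the keys below mirror the sample state above (checksum shortened to a placeholder):

```python
import json

# Parse a state file like the sample above and list tracked models.
state = json.loads("""
{
  "version": "1.0",
  "last_sync_timestamp": "2024-01-15T10:30:00Z",
  "manifest_path": "/path/to/manifest.json",
  "models": {
    "model.project.users": {
      "checksum": "abc123",
      "has_metrics": true,
      "last_generated": "2024-01-15T10:30:00Z",
      "output_file": "./cube_output/Users.js"
    }
  }
}
""")

for name, info in state["models"].items():
    print(f"{name}: {info['output_file']} (metrics: {info['has_metrics']})")
# model.project.users: ./cube_output/Users.js (metrics: True)
```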

Architecture

dbt models (with metrics)
    ↓
dbt-cube-sync dbt-to-cube
    ↓
Cube.js schemas
    ↓
dbt-cube-sync cube-to-bi [connector]
    ↓
BI Tool Datasets (Superset/Tableau/PowerBI)

Project Structure

dbt-cube-sync/
├── dbt_cube_sync/
│   ├── cli.py                # CLI interface
│   ├── config.py             # Configuration management
│   ├── core/
│   │   ├── dbt_parser.py     # dbt manifest parser
│   │   ├── db_inspector.py   # Database column type inspector (SQLAlchemy)
│   │   ├── cube_generator.py # Cube.js generator
│   │   └── models.py         # Pydantic data models
│   └── connectors/
│       ├── base.py           # Abstract base connector
│       ├── superset.py       # Superset implementation
│       ├── tableau.py        # Tableau placeholder
│       └── powerbi.py        # PowerBI placeholder
├── Dockerfile                # Container definition
├── pyproject.toml            # Poetry configuration
└── README.md

Adding New BI Connectors

  1. Create a new connector class inheriting from BaseConnector
  2. Implement the required abstract methods
  3. Register the connector using ConnectorRegistry.register()

Example:

from .base import BaseConnector, ConnectorRegistry

class MyBIConnector(BaseConnector):
    def _validate_config(self):
        # Validation logic
        pass
    
    def connect(self):
        # Connection logic
        pass
    
    def sync_cube_schemas(self, cube_dir):
        # Sync implementation
        pass

# Register the connector
ConnectorRegistry.register('mybi', MyBIConnector)
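
The registry pattern itself can be illustrated with a self-contained sketch. This is a minimal stand-in, not the tool's actual base.py; class and method names mirror the example above for illustration only:

```python
from abc import ABC, abstractmethod

# Minimal, illustrative stand-in for a plugin registry - NOT the actual base.py.
class BaseConnector(ABC):
    def __init__(self, **config):
        self.config = config
        self._validate_config()

    @abstractmethod
    def _validate_config(self): ...

    @abstractmethod
    def sync_cube_schemas(self, cube_dir): ...

class ConnectorRegistry:
    _connectors: dict[str, type] = {}

    @classmethod
    def register(cls, name, connector_cls):
        cls._connectors[name] = connector_cls

    @classmethod
    def create(cls, name, **config):
        # Instantiate a registered connector by name.
        return cls._connectors[name](**config)

class MyBIConnector(BaseConnector):
    def _validate_config(self):
        if "url" not in self.config:
            raise ValueError("url is required")

    def sync_cube_schemas(self, cube_dir):
        return f"synced {cube_dir} to {self.config['url']}"

ConnectorRegistry.register("mybi", MyBIConnector)
connector = ConnectorRegistry.create("mybi", url="http://example.com")
print(connector.sync_cube_schemas("./cube_output"))
# synced ./cube_output to http://example.com
```

The CLI can then dispatch on the `bi_tool` argument by looking up the registry, which is what makes new connectors drop-in additions.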

Docker Integration

The tool is designed to work in containerized environments with proper dependency orchestration:

  1. dbt docs: Runs dbt build then serves documentation
  2. dbt-cube-sync: Runs sync pipeline after dbt and Cube.js are ready
  3. BI Tools: Receive synced datasets after sync completes
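
As a sketch, the ordering above could be expressed in a docker-compose file roughly like this (service names, images, and paths are assumptions for illustration, not shipped configuration):

```yaml
# Illustrative ordering only - service and image names are assumptions.
services:
  dbt:
    image: my-dbt-image
    command: dbt build

  dbt-cube-sync:
    image: dbt-cube-sync
    command: sync-all -m target/manifest.json -c target/catalog.json -o /cube_output
    depends_on:
      dbt:
        condition: service_completed_successfully
```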

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Implement your changes
  4. Add tests if applicable
  5. Submit a pull request

License

MIT License - see LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

dbt_cube_sync-0.1.0a12.tar.gz (27.2 kB)

Uploaded Source

Built Distribution


dbt_cube_sync-0.1.0a12-py3-none-any.whl (31.4 kB)

Uploaded Python 3

File details

Details for the file dbt_cube_sync-0.1.0a12.tar.gz.

File metadata

  • Download URL: dbt_cube_sync-0.1.0a12.tar.gz
  • Size: 27.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.3.1 CPython/3.10.19 Linux/6.11.0-1018-azure

File hashes

Hashes for dbt_cube_sync-0.1.0a12.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 33dd3698988e6cef1597d270c7a1decdcf2c6fbe676269462befecb1eacddc78 |
| MD5 | fa1e394ab0720acde7b9dc43be887169 |
| BLAKE2b-256 | af32756c88898f8accc73f781b531400d9026672b5ee42fd01c7c936582b634a |


File details

Details for the file dbt_cube_sync-0.1.0a12-py3-none-any.whl.

File metadata

  • Download URL: dbt_cube_sync-0.1.0a12-py3-none-any.whl
  • Size: 31.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.3.1 CPython/3.10.19 Linux/6.11.0-1018-azure

File hashes

Hashes for dbt_cube_sync-0.1.0a12-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | fd1389f2e51195cff914397dca94551210c392afdf7ac2d8100275bf66db1e09 |
| MD5 | 73a11ce993f3bc520e976cf3a4f5060f |
| BLAKE2b-256 | 7b8ad85d6ec3f6970b22405700c570e2b0dba8fea67b84a605aaac8bdfe0a00c |

