Database Tycoon — local-first analytics CLI that adapts to your existing data stack

tycoon

A pip-installable CLI that wires dlt → DuckDB → dbt → Rill into a working local analytics stack. No Docker, no cloud account required.

Adaptable: bring your own ingestion (Airbyte, Fivetran), warehouse (Snowflake, BigQuery, MotherDuck), dbt project, BI tool, or orchestrator — tycoon init asks before it assumes.


Install

Requires Python >= 3.12.

pip install database-tycoon
# or
uv add database-tycoon

Optional extras:

pip install "database-tycoon[dagster]"   # Dagster orchestration
pip install "database-tycoon[ask]"       # AI natural language queries (Ollama supported)

Quickstart

The fastest path to a working dashboard uses the PokéAPI — no credentials, no signup.

# 1. Create a project
mkdir my-project && cd my-project
tycoon init --template csv-import

# 2. Add the PokéAPI as a source (press Enter twice for defaults)
tycoon data sources add rest_api

# 3. Ingest into DuckDB
tycoon data sources run pokeapi

# 4. Scaffold dbt models and generate Rill dashboards
tycoon data analyze pokeapi --rill

# 5. Open Rill
tycoon start --only rill

Rill opens at http://localhost:9009 with pokemon, berry, and type tables ready to explore.

Already have a pipeline? tycoon init will ask about your ingestion tool, warehouse, dbt project, BI tool, and orchestrator — and configure itself around what you already have.


CLI Reference

| Command | Description |
| --- | --- |
| tycoon init | Scaffold a new project |
| tycoon data sources catalog | Browse available source integrations |
| tycoon data sources add &lt;type&gt; | Register a new data source |
| tycoon data sources list | List sources configured in this project |
| tycoon data sources show &lt;name&gt; | Show detailed config for a source |
| tycoon data sources run &lt;name&gt; | Run ingestion for a named source |
| tycoon data sources run-all | Run ingestion for all sources |
| tycoon data transform run | Run dbt transformations |
| tycoon data analyze &lt;source&gt; | Scaffold dbt staging models; add --rill to also generate dashboards |
| tycoon data db query &lt;sql&gt; | Run a SQL query against the warehouse |
| tycoon data run-all | Ingest all sources, then run dbt build |
| tycoon data status | Show freshness, row counts, and capture counts for each source |
| tycoon data history | List recent dlt + dbt runs from the observability metadata DB |
| tycoon data history show &lt;id&gt; | Per-run detail (per-table rows for dlt, per-node status for dbt) |
| tycoon start | Start Rill, Dagster, Nao, and the web UI |
| tycoon stop | Stop all services |
| tycoon ask chat | Query your data in natural language (Nao) |
| tycoon run &lt;tool&gt; | Passthrough to dbt, dlt, rill, dagster |

tycoon.yml Reference

name: my-project
version: 0.1.0

database:
  raw: data/raw.duckdb              # dlt output (or md: URI for MotherDuck)
  warehouse: data/warehouse.duckdb  # dbt output — read by Rill and Nao

dbt_project_dir: dbt_project   # path to dbt project (yours or tycoon-scaffolded)
rill_dir: rill                 # path to Rill dashboard definitions

stack:                         # generated by tycoon init — edit as needed
  ingestion: dlt               # dlt | airbyte | fivetran | meltano | none
  ingestion_managed: true      # false = tycoon won't run `data sources run`
  warehouse: duckdb            # duckdb | motherduck | snowflake | bigquery | other
  transformation_managed: true # false = tycoon won't scaffold or overwrite dbt
  bi: rill                     # rill | metabase | looker | tableau | other | none
  bi_managed: true             # false = tycoon won't start Rill
  orchestrator: dagster        # dagster | airflow | prefect | other | none
  orchestrator_managed: true   # false = tycoon won't start Dagster

sources:
  my-github:
    type: github               # matches a catalog source name
    schema: raw_github         # schema name in the raw DuckDB file
    config:
      access_token: ${GITHUB_TOKEN}   # env vars are interpolated
      owner: my-org
      repo: my-repo

ask:                           # optional — requires tycoon[ask]
  llm:
    provider: ollama           # fully local, no API key required
  port: 5005

Each source produces its own raw DuckDB file: data/raw_<source>.duckdb. All sources write into data/warehouse.duckdb after transformation.
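The ${GITHUB_TOKEN} interpolation above can be sketched in a few lines. This is a hypothetical helper, not tycoon's actual code; it deliberately leaves unset variables in place so a later validation step can flag them:

```python
import os
import re

# Matches ${VAR_NAME} placeholders with POSIX-style identifier names.
_ENV_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def interpolate_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables are returned unchanged so validation can report them.
    """
    def _sub(match: re.Match) -> str:
        return os.environ.get(match.group(1), match.group(0))
    return _ENV_VAR.sub(_sub, value)
```

Applied to the example config, `interpolate_env("${GITHUB_TOKEN}")` returns the token when the variable is set, and the literal `${GITHUB_TOKEN}` placeholder when it is not.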

MotherDuck authentication

tycoon doctor recognizes two MotherDuck auth modes for a stack.warehouse: motherduck project:

  • MOTHERDUCK_TOKEN env var — use this for CI, Tower, Dagster, or any non-interactive path. Get one at app.motherduck.com/token.
  • Cached OAuth session — run duckdb -c "ATTACH 'md:'" once locally to authenticate via browser; DuckDB caches the token under ~/.duckdb/ and tycoon picks it up from there.

Catalog Sources

These sources are available via tycoon data sources add <name>. They are downloaded on demand via dlt init and not bundled in the package.

| Source | Category | Key Tables |
| --- | --- | --- |
| github | Developer | commits, issues, pull_requests, repositories |
| slack | Communication | channels, messages, users |
| stripe | Finance | customers, invoices, products, subscriptions |
| hubspot | CRM | companies, contacts, deals, tickets |
| notion | Knowledge | databases, pages, users |

Data Directory

Raw DuckDB files follow the naming convention raw_<source>.duckdb (written by ingestion) while warehouse.duckdb is the single transformed database read by Rill and Nao. See data/README.md for details.
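The convention reduces to two tiny path helpers (names are illustrative, not tycoon's API):

```python
from pathlib import Path

def raw_db_path(data_dir: Path, source: str) -> Path:
    """Per-source raw file written by ingestion, e.g. data/raw_pokeapi.duckdb."""
    return data_dir / f"raw_{source}.duckdb"

def warehouse_path(data_dir: Path) -> Path:
    """The single transformed database read by Rill and Nao."""
    return data_dir / "warehouse.duckdb"
```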


Rill Dashboards

Rill is a local-first BI tool. Dashboard definitions are YAML files in the rill/ directory. Launch Rill via tycoon start or tycoon start --only rill.

Auto-generate dashboards for a source after ingestion:

tycoon data analyze my-source --rill

This exports each raw table to Parquet, then generates Rill source, metrics view, and explore files — one dashboard per table. The --rill flag is opt-in; dashboard generation is skipped by default since it requires a Rill project directory (rill/) to already exist.

Architecture: sources read from Parquet via Rill's local_file connector into its built-in in-memory OLAP. Dashboards are immediately usable without a dbt run.
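The per-table source files that step emits might be generated roughly like this. The YAML keys follow Rill's local_file connector, but treat the exact template, file layout, and function as an illustrative sketch:

```python
from pathlib import Path

# Minimal Rill source definition pointing at one exported Parquet file.
SOURCE_TEMPLATE = """\
connector: "local_file"
path: "{parquet_path}"
"""

def write_rill_source(rill_dir: Path, table: str, parquet_path: str) -> Path:
    """Write rill/sources/<table>.yaml for one raw table's Parquet export."""
    out = rill_dir / "sources" / f"{table}.yaml"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(SOURCE_TEMPLATE.format(parquet_path=parquet_path))
    return out
```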


Observability (dlt + dbt run history)

Every tycoon data sources run mirrors dlt's load history into .tycoon/metadata.duckdb. Every tycoon data transform run/test/build parses target/run_results.json and records one row per invocation plus one per model/test. Both captures are best-effort — they never break the underlying command.
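The dbt half of that capture amounts to flattening target/run_results.json. A sketch of the shape, assuming dbt's standard artifact fields (metadata.invocation_id, elapsed_time, results[].unique_id/status) — not tycoon's actual code:

```python
import json
from pathlib import Path

def parse_run_results(path: Path) -> tuple[dict, list[dict]]:
    """Flatten run_results.json into one invocation row plus one row per node."""
    data = json.loads(path.read_text())
    results = data.get("results", [])
    invocation = {
        "invocation_id": data.get("metadata", {}).get("invocation_id"),
        "elapsed_s": data.get("elapsed_time"),
        # Models report "success", tests report "pass"; anything else counts as failure.
        "success": all(r.get("status") in ("success", "pass") for r in results),
    }
    nodes = [
        {"unique_id": r.get("unique_id"), "status": r.get("status")}
        for r in results
    ]
    return invocation, nodes
```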

Peek at history from the terminal:

tycoon data history                  # last 20 runs across dlt + dbt
tycoon data history --tool dbt -n 50 # dbt-only, last 50
tycoon data history show deadbeef    # drill into a specific run (short prefix OK)

Or open the two Rill dashboards (_tycoon_dlt_usage, _tycoon_dbt_usage) that auto-appear alongside your per-source explores — success rate, duration, rows loaded, models built, tests passed/failed, all filterable by schema, table, command, and dbt version.

Query the metadata DB directly for anything the dashboards don't cover:

tycoon data db query --db .tycoon/metadata.duckdb \
  "SELECT invocation_id, command, elapsed_s, success
   FROM dbt_runs ORDER BY started_at DESC LIMIT 10"

The metadata DB is disposable — delete .tycoon/metadata.duckdb to reset history.


Optional Extras

Dagster orchestration (tycoon[dagster])

Installs Dagster, dagster-dbt, and dagster-dlt. Provides a full asset graph covering ingestion and transformation. Run the Dagster UI with:

dagster dev

The workspace is defined in workspace.yaml at the project root.

AI queries (tycoon[ask])

Installs Nao and Ibis for natural language querying of the warehouse. Requires a running LLM — Ollama (local) is supported out of the box with no API key.

tycoon ask init
tycoon ask sync
tycoon ask chat
