Metadata-driven ETL framework that unifies engines, stays cloud-agnostic, and is currently batch-first with a roadmap to micro-batch and streaming.

Project description

DataCoolie — Metadata-driven ETL Framework

Metadata-driven ETL framework that unifies execution engines (Spark, Polars, and more in the future), remains cloud-agnostic (Fabric, AWS, Databricks, and more in the future), and currently focuses on batch workloads with a roadmap to micro-batch and streaming.

What problem does it solve?

Data teams often prototype pipelines locally, then rewrite the same pipeline for Spark and again for each cloud runtime. That duplicates ETL code, and operational behavior (watermarks, schema hints, partitions, load strategies, maintenance) drifts across environments.

DataCoolie solves this by separating pipeline intent from execution details. You define connections, dataflows, transforms, and operational controls as metadata, then run the same intent on Polars or Spark and on local, Fabric, Databricks, or AWS platforms.

Why it helps

  • Metadata-driven — pipeline behavior lives in metadata instead of being re-implemented in each job.
  • Right-sized compute — small and medium jobs can stay on lighter runtimes like Polars or local execution instead of paying Spark or cluster overhead too early.
  • Portable — the same metadata can move to Spark and cloud platforms when workloads grow.
  • Engine-unified — the same metadata runs on Spark and Polars; swap at runtime.
  • Cloud-agnostic — local, aws, fabric, and databricks platforms abstract file I/O and secrets.
  • Lakehouse-native — first-class Delta Lake and Apache Iceberg via fmt="delta" / fmt="iceberg".
  • Operationally complete — watermarks, schema hints, partitions, load strategies, logging, and maintenance are built in.
  • Plugin everything — engines, platforms, sources, destinations, transformers, and secret resolvers are all entry-point plugins (see the sketch after this list).
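To make the plugin model concrete, here is a minimal discovery sketch using Python's standard entry-point machinery. The group name "datacoolie.engines" is an assumption for illustration; the README states only that plugins are entry-point based.

# A plugin package would declare its engine in its own pyproject.toml, e.g.:
#
#   [project.entry-points."datacoolie.engines"]   # group name is assumed
#   my_engine = "my_pkg.engine:MyEngine"
#
# and a host can then discover and load it at runtime
# (entry_points(group=...) needs Python 3.10+):
from importlib.metadata import entry_points

for ep in entry_points(group="datacoolie.engines"):  # assumed group name
    engine_cls = ep.load()  # imports my_pkg.engine and returns MyEngine
    print(ep.name, engine_cls)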

Installation

# Core only
pip install datacoolie

# With Spark support (primary)
pip install "datacoolie[spark]"

# With Polars support
pip install "datacoolie[polars]"

# All engines
pip install "datacoolie[all]"

Quick Start

Install the Polars extra, save the script below as quickstart.py, and run it. Part A generates a sample CSV plus metadata.json; Part B runs the pipeline.

pip install "datacoolie[polars]"

# quickstart.py
# --- Part A: prepare sample data & metadata (stdlib only) --------------------
import json
from pathlib import Path

root = Path("dc_quickstart")
(root / "input" / "orders").mkdir(parents=True, exist_ok=True)
(root / "output").mkdir(parents=True, exist_ok=True)

(root / "input/orders/orders.csv").write_text(
    "order_id,customer_id,amount\n1,100,19.99\n2,100,42.50\n3,101,7.25\n"
)

metadata = {
    "connections": [
        {"name": "csv_in", "connection_type": "file", "format": "csv",
         "configure": {"base_path": str(root / "input"),
                       "read_options": {"header": "true", "inferSchema": "true"}}},
        {"name": "parquet_out", "connection_type": "file", "format": "parquet",
         "configure": {"base_path": str(root / "output")}},
    ],
    "dataflows": [
        {"name": "orders_csv_to_parquet", "stage": "bronze2silver",
         "processing_mode": "batch",
         "source": {"connection_name": "csv_in", "table": "orders"},
         "destination": {"connection_name": "parquet_out", "table": "orders",
                         "load_type": "full_load"},
         "transform": {}},
    ],
}
metadata_path = root / "metadata.json"
metadata_path.write_text(json.dumps(metadata, indent=2))

# --- Part B: run DataCoolie --------------------------------------------------
from datacoolie.engines.polars_engine import PolarsEngine
from datacoolie.platforms.local_platform import LocalPlatform
from datacoolie.metadata.file_provider import FileProvider
from datacoolie.orchestration.driver import DataCoolieDriver

platform = LocalPlatform()
engine = PolarsEngine(platform=platform)
provider = FileProvider(config_path=str(metadata_path), platform=platform)

with DataCoolieDriver(engine=engine, metadata_provider=provider) as driver:
    result = driver.run(stage="bronze2silver")
    print(f"Completed: {result.succeeded}/{result.total}")

python quickstart.py
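
Part B writes Parquet. The "Lakehouse-native" bullet above writes fmt="delta", while the Quick Start connection uses a "format" field; the sketch below assumes the "format" field accepts "delta" the same way it accepts "parquet", which the Quick Start itself does not confirm.

# Hypothetical Delta destination: swap this in for the "parquet_out"
# connection in the metadata above. "format": "delta" is an assumption
# based on the Lakehouse-native bullet, not a confirmed config value.
{"name": "delta_out", "connection_type": "file", "format": "delta",
 "configure": {"base_path": str(root / "output")}}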

Swap PolarsEngine for SparkEngine(spark, ...) or LocalPlatform() for AwsPlatform / FabricPlatform / DatabricksPlatform — the metadata stays the same.
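
For example, a minimal sketch of the Spark swap. The SparkEngine(spark, ...) call follows the note above; the spark_engine module path is assumed by analogy with the Polars import in Part B:

# Hypothetical Spark variant of Part B; reuses the metadata.json from Part A.
from pyspark.sql import SparkSession

from datacoolie.engines.spark_engine import SparkEngine  # assumed module path
from datacoolie.platforms.local_platform import LocalPlatform
from datacoolie.metadata.file_provider import FileProvider
from datacoolie.orchestration.driver import DataCoolieDriver

spark = SparkSession.builder.appName("dc-quickstart").getOrCreate()
platform = LocalPlatform()  # or AwsPlatform / FabricPlatform / DatabricksPlatform
engine = SparkEngine(spark, platform=platform)
provider = FileProvider(config_path="dc_quickstart/metadata.json", platform=platform)

with DataCoolieDriver(engine=engine, metadata_provider=provider) as driver:
    result = driver.run(stage="bronze2silver")
    print(f"Completed: {result.succeeded}/{result.total}")

The metadata file is untouched; only the engine and platform objects change.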

Testbed & scenarios

See usecase-sim/README.md for a ready-made integration testbed that exercises every {polars,spark} × {file,database,api} × {local,aws} combination, plus lakehouse maintenance and a Docker Compose backend stack.

License

AGPL-3.0-or-later — free and open source.

See CONTRIBUTING.md for contribution terms.

Download files

Download the file for your platform.

Source Distribution

datacoolie-0.1.0.tar.gz (192.3 kB)


Built Distribution


datacoolie-0.1.0-py3-none-any.whl (230.2 kB)


File details

Details for the file datacoolie-0.1.0.tar.gz.

File metadata

  • Download URL: datacoolie-0.1.0.tar.gz
  • Size: 192.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for datacoolie-0.1.0.tar.gz

  • SHA256: 30b461d2239bc6acf47f65c9eb0265490c1464003ded1cd8149cf6935b13886d
  • MD5: f8645af496124b2e64a9608bfeb89d2f
  • BLAKE2b-256: d0bc6dafb8c44c6b5629d3aae825c9224b0fe8b6c69803ca2f2479e01a4dbd3c


Provenance

The following attestation bundles were made for datacoolie-0.1.0.tar.gz:

Publisher: publish-pypi.yml on datacoolie/datacoolie


File details

Details for the file datacoolie-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: datacoolie-0.1.0-py3-none-any.whl
  • Size: 230.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for datacoolie-0.1.0-py3-none-any.whl

  • SHA256: 316df2e4476ee851a776cfca72bc2670e3cb84b5ad87b574e26db8a3e027962a
  • MD5: 686deab2bcaa527254f1e76ae3d07ae9
  • BLAKE2b-256: dd910faf9fe5a8bb717a86526830923b0b609371de9142217bd61c5cc63f26ec


Provenance

The following attestation bundles were made for datacoolie-0.1.0-py3-none-any.whl:

Publisher: publish-pypi.yml on datacoolie/datacoolie

