Project description
DataCoolie — Metadata-driven ETL Framework
Metadata-driven ETL framework that unifies execution engines (Spark, Polars, and more in the future), remains cloud-agnostic (Fabric, AWS, Databricks, and more in the future), and currently focuses on batch workloads with a roadmap to micro-batch and streaming.
What problem does it solve?
Data teams often prototype pipelines locally, then rewrite the same pipeline for Spark and again for each cloud runtime. That duplicates ETL code and lets operational behavior (watermarks, schema hints, partitions, load strategies, maintenance) drift across environments.
DataCoolie solves this by separating pipeline intent from execution details. You define connections, dataflows, transforms, and operational controls as metadata, then run the same intent on Polars or Spark and on local, Fabric, Databricks, or AWS platforms.
Why it helps
- Metadata-driven — pipeline behavior lives in metadata instead of being re-implemented in each job.
- Right-sized compute — small and medium jobs can stay on lighter runtimes like Polars or local execution instead of paying Spark or cluster overhead too early.
- Portable — the same metadata can move to Spark and cloud platforms when workloads grow.
- Engine-unified — the same metadata runs on Spark and Polars; swap at runtime.
- Cloud-agnostic — local, aws, fabric, databricks platforms abstract file I/O and secrets.
- Lakehouse-native — first-class Delta Lake and Apache Iceberg via fmt="delta" / fmt="iceberg".
- Operationally complete — watermarks, schema hints, partitions, load strategies, logging, and maintenance are built in.
- Plugin everything — engines, platforms, sources, destinations, transformers, and secret resolvers are all entry-point plugins (see the sketch after this list).
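Since entry points are standard Python packaging, registering and discovering a third-party engine could look like the sketch below. The group name datacoolie.engines and the DuckDBEngine plugin are illustrative assumptions, not documented API:

```python
# A plugin package would advertise itself in its own pyproject.toml, e.g.:
#
#   [project.entry-points."datacoolie.engines"]   # group name is an assumption
#   duckdb = "my_plugin.engine:DuckDBEngine"      # hypothetical plugin class
#
# The host side can then discover installed plugins with the stdlib alone:
from importlib.metadata import entry_points

for ep in entry_points(group="datacoolie.engines"):  # assumed group name
    engine_cls = ep.load()  # imports my_plugin.engine, returns DuckDBEngine
    print(f"discovered engine plugin {ep.name!r}: {engine_cls}")
```

The same pattern would presumably apply to the other plugin kinds (platforms, sources, destinations, transformers, secret resolvers), each under its own entry-point group.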
Installation
# Core only
pip install datacoolie
# With Spark support (primary)
pip install "datacoolie[spark]"
# With Polars support
pip install "datacoolie[polars]"
# All engines
pip install "datacoolie[all]"
Quick Start
Install, save the script below as quickstart.py, and run it. Part A generates
a sample CSV + metadata.json; Part B runs the pipeline.
pip install "datacoolie[polars]"
# quickstart.py
# --- Part A: prepare sample data & metadata (stdlib only) --------------------
import json
from pathlib import Path

root = Path("dc_quickstart")
(root / "input" / "orders").mkdir(parents=True, exist_ok=True)
(root / "output").mkdir(parents=True, exist_ok=True)
(root / "input" / "orders" / "orders.csv").write_text(
    "order_id,customer_id,amount\n1,100,19.99\n2,100,42.50\n3,101,7.25\n"
)

metadata = {
    "connections": [
        {"name": "csv_in", "connection_type": "file", "format": "csv",
         "configure": {"base_path": str(root / "input"),
                       "read_options": {"header": "true", "inferSchema": "true"}}},
        {"name": "parquet_out", "connection_type": "file", "format": "parquet",
         "configure": {"base_path": str(root / "output")}},
    ],
    "dataflows": [
        {"name": "orders_csv_to_parquet", "stage": "bronze2silver",
         "processing_mode": "batch",
         "source": {"connection_name": "csv_in", "table": "orders"},
         "destination": {"connection_name": "parquet_out", "table": "orders",
                         "load_type": "full_load"},
         "transform": {}},
    ],
}

metadata_path = root / "metadata.json"
metadata_path.write_text(json.dumps(metadata, indent=2))

# --- Part B: run DataCoolie --------------------------------------------------
from datacoolie.engines.polars_engine import PolarsEngine
from datacoolie.platforms.local_platform import LocalPlatform
from datacoolie.metadata.file_provider import FileProvider
from datacoolie.orchestration.driver import DataCoolieDriver

platform = LocalPlatform()
engine = PolarsEngine(platform=platform)
provider = FileProvider(config_path=str(metadata_path), platform=platform)

with DataCoolieDriver(engine=engine, metadata_provider=provider) as driver:
    result = driver.run(stage="bronze2silver")
    print(f"Completed: {result.succeeded}/{result.total}")
python quickstart.py
Swap PolarsEngine for SparkEngine(spark, ...) or LocalPlatform() for AwsPlatform / FabricPlatform / DatabricksPlatform — the metadata stays the same. A sketch of the Spark variant follows.
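As a concrete sketch, here is Part B of the quickstart rewritten for Spark. It assumes a datacoolie.engines.spark_engine module mirroring the Polars import path, and that SparkEngine takes the active SparkSession as its first argument, as the SparkEngine(spark, ...) signature above suggests:

```python
# Hypothetical Spark variant of Part B (requires pip install "datacoolie[spark]").
from pyspark.sql import SparkSession

from datacoolie.engines.spark_engine import SparkEngine  # module path assumed
from datacoolie.platforms.local_platform import LocalPlatform
from datacoolie.metadata.file_provider import FileProvider
from datacoolie.orchestration.driver import DataCoolieDriver

spark = SparkSession.builder.appName("dc_quickstart").getOrCreate()

platform = LocalPlatform()
engine = SparkEngine(spark, platform=platform)  # the engine is the only swap
provider = FileProvider(config_path="dc_quickstart/metadata.json",
                        platform=platform)

with DataCoolieDriver(engine=engine, metadata_provider=provider) as driver:
    result = driver.run(stage="bronze2silver")
    print(f"Completed: {result.succeeded}/{result.total}")
```

Note that metadata.json is byte-for-byte the file Part A generated; only the Python wiring changes.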
Testbed & scenarios
See usecase-sim/README.md for a ready-made integration
testbed that exercises every {polars,spark} × {file,database,api} × {local,aws}
combination, plus lakehouse maintenance and a Docker Compose backend stack.
License
AGPL-3.0-or-later — free and open source.
See CONTRIBUTING.md for contribution terms.
Download files
File details
Details for the file datacoolie-0.1.0.tar.gz.
File metadata
- Download URL: datacoolie-0.1.0.tar.gz
- Upload date:
- Size: 192.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 30b461d2239bc6acf47f65c9eb0265490c1464003ded1cd8149cf6935b13886d |
| MD5 | f8645af496124b2e64a9608bfeb89d2f |
| BLAKE2b-256 | d0bc6dafb8c44c6b5629d3aae825c9224b0fe8b6c69803ca2f2479e01a4dbd3c |
Provenance
The following attestation bundles were made for datacoolie-0.1.0.tar.gz:
Publisher: publish-pypi.yml on datacoolie/datacoolie
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: datacoolie-0.1.0.tar.gz
- Subject digest: 30b461d2239bc6acf47f65c9eb0265490c1464003ded1cd8149cf6935b13886d
- Sigstore transparency entry: 1396090485
- Sigstore integration time:
- Permalink: datacoolie/datacoolie@7cf6de1ba57ab799c7cd9e380a0ab870cb8119f9
- Branch / Tag: refs/tags/v0.1.0
- Owner: https://github.com/datacoolie
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@7cf6de1ba57ab799c7cd9e380a0ab870cb8119f9
- Trigger Event: push
File details
Details for the file datacoolie-0.1.0-py3-none-any.whl.
File metadata
- Download URL: datacoolie-0.1.0-py3-none-any.whl
- Upload date:
- Size: 230.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 316df2e4476ee851a776cfca72bc2670e3cb84b5ad87b574e26db8a3e027962a |
| MD5 | 686deab2bcaa527254f1e76ae3d07ae9 |
| BLAKE2b-256 | dd910faf9fe5a8bb717a86526830923b0b609371de9142217bd61c5cc63f26ec |
Provenance
The following attestation bundles were made for datacoolie-0.1.0-py3-none-any.whl:
Publisher: publish-pypi.yml on datacoolie/datacoolie
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: datacoolie-0.1.0-py3-none-any.whl
- Subject digest: 316df2e4476ee851a776cfca72bc2670e3cb84b5ad87b574e26db8a3e027962a
- Sigstore transparency entry: 1396090486
- Sigstore integration time:
- Permalink: datacoolie/datacoolie@7cf6de1ba57ab799c7cd9e380a0ab870cb8119f9
- Branch / Tag: refs/tags/v0.1.0
- Owner: https://github.com/datacoolie
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-pypi.yml@7cf6de1ba57ab799c7cd9e380a0ab870cb8119f9
- Trigger Event: push