# ETLPlus

A Swiss Army knife for simple ETL operations.

ETLPlus is a veritable Swiss Army knife for simple ETL operations, offering both a Python package and a command-line interface for data extraction, validation, transformation, and loading.
## Features

- **Extract** data from multiple sources:
  - Files (CSV, JSON, XML, YAML)
  - Databases (connection string support)
  - REST APIs (GET)
- **Validate** data with flexible rules:
  - Type checking
  - Required fields
  - Value ranges (min/max)
  - String length constraints
  - Pattern matching
  - Enum validation
- **Transform** data with powerful operations:
  - Filter records
  - Map/rename fields
  - Select specific fields
  - Sort data
  - Aggregate functions (avg, count, max, min, sum)
- **Load** data to multiple targets:
  - Files (CSV, JSON, XML, YAML)
  - Databases (connection string support)
  - REST APIs (PATCH, POST, PUT)
## Installation

```sh
pip install etlplus
```

For development:

```sh
pip install -e ".[dev]"
```
## Quickstart

Get up and running in under a minute.

```sh
# Inspect help and version
etlplus --help
etlplus --version

# One-liner: extract CSV, filter, select, and write JSON
etlplus extract file examples/data/sample.csv \
  | etlplus transform - --operations '{"filter": {"field": "age", "op": "gt", "value": 25}, "select": ["name", "email"]}' \
    -o temp/sample_output.json
```
The same flow in Python:

```python
from etlplus import extract, transform, validate, load

data = extract("file", "input.csv")
ops = {"filter": {"field": "age", "op": "gt", "value": 25}, "select": ["name", "email"]}
filtered = transform(data, ops)
rules = {"name": {"type": "string", "required": True}, "email": {"type": "string", "required": True}}
assert validate(filtered, rules)["valid"]
load(filtered, "file", "temp/sample_output.json", file_format="json")
```
## Usage

### Command-Line Interface

ETLPlus provides a powerful CLI for ETL operations:

```sh
# Show help
etlplus --help

# Show version
etlplus --version
```
#### Extract Data

Note: For file sources, the format is inferred from the filename extension, so `--format` is ignored (by default, a warning is printed; see Environment Variables). To treat passing `--format` as an error for file sources, either set `ETLPLUS_FORMAT_BEHAVIOR=error` or pass the `--strict-format` CLI flag.

Extract from a JSON file:

```sh
etlplus extract file examples/data/sample.json
```

Extract from a CSV file:

```sh
etlplus extract file examples/data/sample.csv
```

Extract from an XML file:

```sh
etlplus extract file examples/data/sample.xml
```

Extract from a REST API:

```sh
etlplus extract api https://api.example.com/data
```

Save extracted data to a file:

```sh
etlplus extract file examples/data/sample.csv -o temp/sample_output.json
```
#### Validate Data

Validate an inline JSON string:

```sh
etlplus validate '{"name": "John", "age": 30}' --rules '{"name": {"type": "string", "required": true}, "age": {"type": "number", "min": 0, "max": 150}}'
```

Validate from a file:

```sh
etlplus validate examples/data/sample.json --rules '{"email": {"type": "string", "pattern": "^[\\w.-]+@[\\w.-]+\\.\\w+$"}}'
```
#### Transform Data

Filter and select fields:

```sh
etlplus transform '[{"name": "John", "age": 30}, {"name": "Jane", "age": 25}]' \
  --operations '{"filter": {"field": "age", "op": "gt", "value": 26}, "select": ["name"]}'
```

Sort data:

```sh
etlplus transform examples/data/sample.json --operations '{"sort": {"field": "age", "reverse": true}}'
```

Aggregate data:

```sh
etlplus transform examples/data/sample.json --operations '{"aggregate": {"field": "age", "func": "sum"}}'
```

Map/rename fields:

```sh
etlplus transform examples/data/sample.json --operations '{"map": {"name": "new_name"}}'
```
#### Load Data

Load to a JSON file:

```sh
etlplus load '{"name": "John", "age": 30}' file temp/sample_output.json
```

Load to a CSV file:

```sh
etlplus load '[{"name": "John", "age": 30}]' file temp/sample_output.csv
```

Load to a REST API:

```sh
etlplus load examples/data/sample.json api https://api.example.com/endpoint
```
### Python API

Use ETLPlus as a Python library:

```python
from etlplus import extract, validate, transform, load

# Extract data
data = extract("file", "data.json")

# Validate data
validation_rules = {
    "name": {"type": "string", "required": True},
    "age": {"type": "number", "min": 0, "max": 150}
}
result = validate(data, validation_rules)
if result["valid"]:
    print("Data is valid!")

# Transform data
operations = {
    "filter": {"field": "age", "op": "gt", "value": 18},
    "select": ["name", "email"]
}
transformed = transform(data, operations)

# Load data
load(transformed, "file", "temp/sample_output.json", format="json")
```
For YAML-driven pipelines executed end to end (extract → validate → transform → load), see:

- Authoring: docs/pipeline-guide.md
- Runner API and internals: docs/run-module.md
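As a rough illustration of the idea, a YAML pipeline pairs one configuration block with each stage. The field names below are hypothetical; the authoritative schema is in docs/pipeline-guide.md.

```yaml
# Hypothetical pipeline sketch -- consult docs/pipeline-guide.md for the real schema.
extract:
  type: file
  path: examples/data/sample.csv
validate:
  rules:
    name: {type: string, required: true}
    email: {type: string, required: true}
transform:
  operations:
    filter: {field: age, op: gt, value: 25}
    select: [name, email]
load:
  type: file
  path: temp/sample_output.json
  format: json
```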
### Complete ETL Pipeline Example

```sh
# 1. Extract from CSV
etlplus extract file examples/data/sample.csv -o temp/sample_extracted.json

# 2. Transform (filter and select fields)
etlplus transform temp/sample_extracted.json \
  --operations '{"filter": {"field": "age", "op": "gt", "value": 25}, "select": ["name", "email"]}' \
  -o temp/sample_transformed.json

# 3. Validate transformed data
etlplus validate temp/sample_transformed.json \
  --rules '{"name": {"type": "string", "required": true}, "email": {"type": "string", "required": true}}'

# 4. Load to CSV
etlplus load temp/sample_transformed.json file temp/sample_output.csv
```
## Environment Variables

ETLPlus honors a small number of environment toggles that refine CLI behavior:

- `ETLPLUS_FORMAT_BEHAVIOR`: controls what happens when `--format` is provided for file sources or targets (extract/load), where the format is inferred from the filename extension:
  - `error` | `fail` | `strict`: treat as an error (non-zero exit)
  - `warn` (default): print a warning to stderr
  - `ignore` | `silent`: no message
- Precedence: the CLI flag `--strict-format` overrides the environment variable.

Examples (zsh):

```sh
# Warn (default)
etlplus extract file data.csv --format csv
etlplus load data.json file out.csv --format csv

# Enforce error via the environment
ETLPLUS_FORMAT_BEHAVIOR=error \
  etlplus extract file data.csv --format csv
ETLPLUS_FORMAT_BEHAVIOR=error \
  etlplus load data.json file out.csv --format csv

# Equivalent strict behavior via the flag (overrides the environment)
etlplus extract file data.csv --format csv --strict-format
etlplus load data.json file out.csv --format csv --strict-format

# Recommended: rely on the extension; no --format needed for files
etlplus extract file data.csv
etlplus load data.json file out.csv
```
## Transformation Operations

### Filter Operations

Supported operators:

- `eq`: Equal
- `ne`: Not equal
- `gt`: Greater than
- `gte`: Greater than or equal
- `lt`: Less than
- `lte`: Less than or equal
- `in`: Value in list
- `contains`: List/string contains value

Example:

```json
{
  "filter": {
    "field": "status",
    "op": "in",
    "value": ["active", "pending"]
  }
}
```
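To make the operator semantics concrete, here is a small, self-contained Python sketch of how such a filter could be evaluated. It mirrors the documented operators but is not ETLPlus's internal implementation.

```python
# Illustrative evaluator for the documented filter operators.
# A sketch of the semantics, not ETLPlus's internal code.
OPS = {
    "eq": lambda a, b: a == b,
    "ne": lambda a, b: a != b,
    "gt": lambda a, b: a > b,
    "gte": lambda a, b: a >= b,
    "lt": lambda a, b: a < b,
    "lte": lambda a, b: a <= b,
    "in": lambda a, b: a in b,        # b is the list of allowed values
    "contains": lambda a, b: b in a,  # a is the list/string being searched
}

def apply_filter(records, spec):
    """Keep the records whose field value satisfies the operator."""
    op = OPS[spec["op"]]
    return [r for r in records if op(r.get(spec["field"]), spec["value"])]

rows = [
    {"id": 1, "status": "active"},
    {"id": 2, "status": "retired"},
    {"id": 3, "status": "pending"},
]
spec = {"field": "status", "op": "in", "value": ["active", "pending"]}
print(apply_filter(rows, spec))  # rows 1 and 3 survive
```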
### Aggregation Functions

Supported functions:

- `sum`: Sum of values
- `avg`: Average of values
- `min`: Minimum value
- `max`: Maximum value
- `count`: Count of values

Example:

```json
{
  "aggregate": {
    "field": "revenue",
    "func": "sum"
  }
}
```
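The functions reduce one field across all records to a single value. A minimal Python sketch of the semantics (again, not ETLPlus's internal code):

```python
# Illustrative versions of the documented aggregation functions.
FUNCS = {
    "sum": sum,
    "avg": lambda values: sum(values) / len(values),
    "min": min,
    "max": max,
    "count": len,
}

def aggregate(records, spec):
    """Apply one aggregate function over a single field of a record list."""
    values = [r[spec["field"]] for r in records if spec["field"] in r]
    return FUNCS[spec["func"]](values)

rows = [{"revenue": 100}, {"revenue": 250}, {"revenue": 50}]
print(aggregate(rows, {"field": "revenue", "func": "sum"}))  # 400
print(aggregate(rows, {"field": "revenue", "func": "avg"}))  # 400 / 3
```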
## Validation Rules

Supported validation rules:

- `type`: Data type (string, number, integer, boolean, array, object)
- `required`: Field is required (true/false)
- `min`: Minimum value for numbers
- `max`: Maximum value for numbers
- `minLength`: Minimum length for strings
- `maxLength`: Maximum length for strings
- `pattern`: Regex pattern for strings
- `enum`: List of allowed values

Example:

```json
{
  "email": {
    "type": "string",
    "required": true,
    "pattern": "^[\\w.-]+@[\\w.-]+\\.\\w+$"
  },
  "age": {
    "type": "number",
    "min": 0,
    "max": 150
  },
  "status": {
    "type": "string",
    "enum": ["active", "inactive", "pending"]
  }
}
```
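The rule types compose per field: a value must pass every rule in its entry. The following self-contained Python sketch shows one way such checks could be evaluated; it illustrates the documented semantics and is not ETLPlus's actual `validate()` implementation.

```python
import re

# Illustrative validator for the documented rule types -- a sketch,
# not ETLPlus's actual validate() code.
TYPES = {
    "string": str,
    "number": (int, float),
    "integer": int,
    "boolean": bool,
    "array": list,
    "object": dict,
}

def check_field(value, rule):
    """Return the names of the rules that `value` violates."""
    if value is None:
        return ["required"] if rule.get("required") else []
    errors = []
    expected = TYPES.get(rule.get("type"), object)
    if not isinstance(value, expected):
        errors.append("type")
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        if "min" in rule and value < rule["min"]:
            errors.append("min")
        if "max" in rule and value > rule["max"]:
            errors.append("max")
    if isinstance(value, str):
        if "minLength" in rule and len(value) < rule["minLength"]:
            errors.append("minLength")
        if "maxLength" in rule and len(value) > rule["maxLength"]:
            errors.append("maxLength")
        if "pattern" in rule and not re.search(rule["pattern"], value):
            errors.append("pattern")
    if "enum" in rule and value not in rule["enum"]:
        errors.append("enum")
    return errors

email_rule = {"type": "string", "required": True, "pattern": r"^[\w.-]+@[\w.-]+\.\w+$"}
print(check_field("john@example.com", email_rule))  # []
print(check_field("not-an-email", email_rule))      # ['pattern']
```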
## Development

### API Client Docs

Looking for the HTTP client and pagination helpers? See the dedicated docs in etlplus/api/README.md for:

- Quickstart with `EndpointClient`
- Authentication via `EndpointCredentialsBearer`
- Pagination with `PaginationConfig` (page and cursor styles)
- Tips on `records_path` and `cursor_path`
### Runner Internals and Connectors

Curious how the pipeline runner composes API requests, pagination, and load calls?

- Runner overview and helpers: docs/run-module.md
- Unified "connector" vocabulary (API/File/DB): etlplus/config/connector.py
- API/file targets reuse the same shapes as sources; API targets typically set a `method`.
### Running Tests

```sh
pytest tests/ -v
```

### Test Layers

We split tests into two layers:

- Unit (tests/unit/): a single function or class; no real I/O; fast; uses stubs/monkeypatch (e.g. `etlplus.cli.create_parser`, transform + validate helpers).
- Integration (tests/integration/): end-to-end flows (CLI `main()`, pipeline `run()`, pagination + rate-limit defaults, file/API connector interactions); may touch temp files and use fake clients.

If a test calls `etlplus.cli.main()` or `etlplus.run.run()`, it is integration by default. Full criteria: CONTRIBUTING.md#testing.
### Code Coverage

```sh
pytest tests/ --cov=etlplus --cov-report=html
```

### Linting

```sh
flake8 etlplus/
black etlplus/
```
### Updating Demo Snippets

DEMO.md shows the real output of `etlplus --version`, captured from a freshly built wheel. Regenerate the snippet (and the companion file docs/snippets/installation_version.md) after changing anything that affects the version string:

```sh
make demo-snippets
```

The helper script tools/update_demo_snippets.py builds the wheel, installs it into a throwaway virtual environment, runs `etlplus --version`, and rewrites the snippet between the markers in DEMO.md.
### Releasing to PyPI

setuptools-scm derives the package version from Git tags, so publishing is entirely tag-driven: no hand-editing of pyproject.toml, setup.py, or etlplus/__version__.py.

1. Ensure `main` is green and the changelog/docs are up to date.
2. Create and push a SemVer tag matching the `v*.*.*` pattern:

   ```sh
   git tag -a v1.4.0 -m "Release v1.4.0"
   git push origin v1.4.0
   ```

3. GitHub Actions fetches tags, builds the sdist/wheel, and publishes to PyPI via the `publish` job in .github/workflows/ci.yml.

If you want an extra smoke test before tagging, run `make dist && pip install dist/*.whl` locally; this exercises the same build path the workflow uses.
## Links

- API client docs: etlplus/api/README.md
- Examples: examples/README.md
- Pipeline authoring guide: docs/pipeline-guide.md
- Runner internals: docs/run-module.md
- Design notes (Mapping inputs, dict outputs): docs/pipeline-guide.md#design-notes-mapping-inputs-dict-outputs
- Typing philosophy: CONTRIBUTING.md#typing-philosophy
- Demo and walkthrough: DEMO.md
- Additional references: REFERENCES.md
## License

This project is licensed under the MIT License.
## Contributing

Code and codeless contributions are welcome! If you'd like to add a new feature, fix a bug, or improve the documentation, please submit a pull request as follows:

1. Fork this repository.
2. Create a feature branch for your changes (`git checkout -b feature/feature-name`).
3. Commit your changes (`git commit -m "Add feature"`).
4. Push your branch (`git push origin feature/feature-name`).
5. Submit a pull request with a detailed description.

If you plan to contribute code, please first review these documents:

- Pipeline authoring guide: docs/pipeline-guide.md
- Design notes (Mapping inputs, dict outputs): docs/pipeline-guide.md#design-notes-mapping-inputs-dict-outputs
- Typing philosophy (TypedDicts as editor hints, permissive runtime): CONTRIBUTING.md#typing-philosophy
## Acknowledgments

ETLPlus is inspired by common patterns in data engineering and in Python software development, and aims to increase productivity and reduce boilerplate code. Feedback and contributions are always appreciated!