
Project description

Data Pipeline

ETL-style data processing framework with composable flow nodes — sources, transformers, reducers, and combiners.

Installation

pip install "vcti-data-pipeline>=1.0.0"

In pyproject.toml dependencies

dependencies = [
    "vcti-data-pipeline>=1.0.0",
]

Quick Start

import numpy as np
from vcti.pipeline import DataSource, DataTransformer, DataRecord

# 1. Define a source
class CsvSource(DataSource):
    def __init__(self, path: str, *, name: str | None = None) -> None:
        super().__init__(name=name)
        self.path = path

    def load(self) -> DataRecord:
        arr = np.genfromtxt(self.path, dtype=None, delimiter=",", names=True)
        return DataRecord(data=arr)

# 2. Define a transformer
class ScaleTransformer(DataTransformer):
    def __init__(self, factor: float, *, name: str | None = None) -> None:
        super().__init__(name=name)
        self.factor = factor

    def transform(self, record: DataRecord) -> DataRecord:
        scaled = record.data.copy()
        scaled["value"] *= self.factor
        return DataRecord(data=scaled, attributes=record.attributes)

# 3. Compose the flow (right-to-left)
source = CsvSource("data.csv", name="raw-csv")
scaler = ScaleTransformer(factor=2.0, name="2x-scaler")
flow = scaler.connect(source)

# 4. Execute
result = flow.execute()
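
Under the hood, CsvSource's `np.genfromtxt(..., names=True)` call produces a NumPy structured array, and the transformer rewrites one field of it. A minimal sketch of that step with plain NumPy, outside the pipeline classes (the `id`/`value` field names are assumed for illustration):

```python
import numpy as np

# Stand-in for the payload CsvSource would load: a structured array
# with "id" and "value" fields (field names assumed for illustration).
arr = np.array([(1, 10.0), (2, 20.0)], dtype=[("id", "i4"), ("value", "f8")])

# What ScaleTransformer does to the payload: copy, then scale one field in place.
scaled = arr.copy()
scaled["value"] *= 2.0

print(scaled["value"])  # [20. 40.]
```

Copying before scaling keeps the upstream record untouched, which is why the transformer returns a new DataRecord rather than mutating its input.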

Combining multiple sources

from vcti.pipeline import DataCombiner

# IdSource and CoordSource are user-defined DataSource subclasses
# (defined as in the Quick Start example above)
ids = IdSource()
coords = CoordSource()

combiner = DataCombiner()
flow = combiner.connect(ids)
flow = flow.connect(coords)

combined = flow.execute()
# combined.data has fields from both sources
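
A field-wise merge of two structured arrays, comparable to what DataCombiner produces, can be sketched with NumPy's `recfunctions` module (the field names below are assumptions, not the library's):

```python
import numpy as np
from numpy.lib import recfunctions as rfn

# Stand-ins for the payloads of IdSource and CoordSource (field names assumed).
ids = np.array([(1,), (2,)], dtype=[("ID", "i4")])
coords = np.array([(0.0, 1.0), (2.0, 3.0)], dtype=[("x", "f8"), ("y", "f8")])

# Merge field-wise: the result carries the fields of both inputs.
combined = rfn.merge_arrays((ids, coords), flatten=True)
print(combined.dtype.names)  # ('ID', 'x', 'y')
```

This only illustrates the shape of the result; the actual DataCombiner may handle attributes, masking, and length mismatches differently.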

Iterating over cases

from vcti.pipeline import ForEachFlow

flow = ForEachFlow(
    keys=CasesSource(reader),
    factory=lambda case_id: CaseDataSource(reader, case_id),
    key_field="ID",
)

for case_id, case_flow in flow:
    record = case_flow.execute()
    # process each case...
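
The iteration pattern behind ForEachFlow, a key source plus a factory that builds a per-case flow, can be sketched in plain Python (all names here are hypothetical stand-ins, not the library's API):

```python
# Minimal sketch of the for-each pattern: iterate a key source and build
# a per-case object with a factory callable.
def for_each(keys, factory):
    for key in keys:
        yield key, factory(key)

# Stand-in key source and factory; in the library these would be a
# CasesSource and a CaseDataSource-building lambda.
case_ids = ["A", "B"]
flows = dict(for_each(case_ids, lambda cid: f"flow-for-{cid}"))
print(flows)  # {'A': 'flow-for-A', 'B': 'flow-for-B'}
```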

Flow Node Types

| Type | Inputs | Purpose |
| --- | --- | --- |
| DataSource | 0 (leaf) | Load data from an external source |
| DataTransformer | 1 | Transform one record into another |
| DataReducer | 1+ | Combine multiple records into one |
| DataCombiner | 1+ | Merge arrays field-wise (extends DataReducer) |
| DataTarget | 1 | Persist a record, then pass it through |
| ForEachFlow | — | Iterate over keyed cases |
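
The node types above can be sketched as a small class hierarchy. This is a simplified stand-in to show the shape of the design, not the library's actual API:

```python
from abc import ABC, abstractmethod

class FlowNode(ABC):
    """Sketch of the abstract base: every node can execute()."""
    @abstractmethod
    def execute(self): ...

class Source(FlowNode):  # 0 inputs: a leaf that loads data
    def __init__(self, data):
        self.data = data
    def execute(self):
        return self.data

class Transformer(FlowNode):  # 1 input: rewrites one record
    def __init__(self, fn):
        self.fn = fn
        self.upstream = None
    def connect(self, node):  # returns self so connections can be chained
        self.upstream = node
        return self
    def execute(self):
        return self.fn(self.upstream.execute())

flow = Transformer(lambda xs: [x * 2 for x in xs]).connect(Source([1, 2, 3]))
print(flow.execute())  # [2, 4, 6]
```

Executing the downstream node pulls data through the chain, which is why flows are composed "right-to-left": each node connects to the node it consumes from.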

Other types

| Type | Purpose |
| --- | --- |
| DataRecord | Alias for DataNode; the data payload flowing between nodes |
| Column | Frozen dataclass coupling a column name with a NumPy dtype |
| FlowNode | Abstract base class for all flow nodes |
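
A frozen dataclass coupling a name with a dtype, as Column is described, could look like the following sketch (field names assumed, not taken from the library):

```python
from dataclasses import dataclass
import numpy as np

# Sketch of a frozen name/dtype pair; the actual Column fields are assumptions.
@dataclass(frozen=True)
class Column:
    name: str
    dtype: np.dtype

col = Column("value", np.dtype("f8"))
# Frozen: assigning col.name = "x" raises dataclasses.FrozenInstanceError,
# so Column instances are safe to share between flow nodes.
```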

Download files

Download the file for your platform.

Source Distribution

vcti_data_pipeline-1.1.0.tar.gz (14.2 kB)

Uploaded Source

Built Distribution


vcti_data_pipeline-1.1.0-py3-none-any.whl (13.2 kB)

Uploaded Python 3

File details

Details for the file vcti_data_pipeline-1.1.0.tar.gz.

File metadata

  • Download URL: vcti_data_pipeline-1.1.0.tar.gz
  • Upload date:
  • Size: 14.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for vcti_data_pipeline-1.1.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 46223ac2e4a880bc79dc371e1b61de2740cf9eae9bfe5351b9a0d63bdb88401e |
| MD5 | bc81173b2874e47bc1f2b3728fbfd79a |
| BLAKE2b-256 | 20b75dfbfbcd29132a27fe9a03e945ee9ef5cf4edff5cc6c641222b2a1108212 |

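
To check a downloaded archive against the SHA256 digest published above, a minimal sketch using only the standard library:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest from the table above, e.g.:
# sha256_of("vcti_data_pipeline-1.1.0.tar.gz") == "46223ac2e4a880bc..."
```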

Provenance

The following attestation bundles were made for vcti_data_pipeline-1.1.0.tar.gz:

Publisher: publish.yml on vcollab/vcti-python-data-pipeline

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file vcti_data_pipeline-1.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for vcti_data_pipeline-1.1.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 78ce2b94416d8bcdbcfc366bc47f273a0377cc5a730d547fd1f256cb52ee8516 |
| MD5 | ab9f81139387a4fbfdc770df34f36d72 |
| BLAKE2b-256 | 13f4f4baca5eee0b72faf0f55d15170dc3cdb5b556a95eba66f6fc04b4acd35c |


Provenance

The following attestation bundles were made for vcti_data_pipeline-1.1.0-py3-none-any.whl:

Publisher: publish.yml on vcollab/vcti-python-data-pipeline

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
