
Data workflows, CLI, and dataflow automation.


🍋 pipelime


If life gives you lemons, use pipelime.

Welcome to pipelime, a swiss army knife for data processing!

pipelime is a full-fledged framework for data science: read your datasets, manipulate them, write them back to disk or upload them to a remote data lake. Then build up your dataflow with Piper and manage your configuration with Choixe. Finally, embed your custom commands into the pipelime workspace, so they act both as dataflow nodes and as an advanced command line interface.

Maybe too much for you? No worries: pipelime is modular and you can take just what you need:

  • data processing scripts: use the powerful SamplesSequence to create your own data processing pipelines with a simple and intuitive API. Parallelization works out of the box and, moreover, you can easily serialize your pipelines to yaml/json. Integrations with popular frameworks, e.g., pytorch, are also provided.
  • easy dataflow: Piper can manage and execute directed acyclic graphs (DAGs), giving feedback on the progress through sockets or custom callbacks.
  • configuration management: Choixe is a simple and intuitive mini scripting language designed to ease the creation of configuration files with the help of variables, symbol importing, for loops, switch statements, parameter sweeps and more.
  • command line interface: pipelime can remove all the boilerplate code needed to create a beautiful CLI for your scripts and packages. You focus on what matters and we provide input parsing, advanced interfaces for complex arguments, automatic help generation and configuration management. Also, any PipelimeCommand can be used as a node in a dataflow for free!
  • pydantic tools: most of the classes in pipelime derive from pydantic.BaseModel, so we have built some useful tools to, e.g., inspect their structure, auto-generate human-friendly documentation and more (including a wizard to help you write input data to deserialize any pydantic model).

Installation

Install pipelime using pip:

pip install pipelime-python

To draw dataflow graphs, you need the draw variant (the quotes keep the brackets from being expanded by some shells, e.g., zsh):

pip install "pipelime-python[draw]"

Warning

The draw variant needs Graphviz (https://www.graphviz.org/) installed on your system. On Ubuntu/Debian, you can install it with:

sudo apt-get install graphviz graphviz-dev

Alternatively, you can use conda:

conda install --channel conda-forge pygraphviz

Please see the full options at https://github.com/pygraphviz/pygraphviz/blob/main/INSTALL.txt

Basic Usage

Underfolder Format

The Underfolder format is the preferred pipelime dataset format, i.e., a flexible way to model and store a generic dataset on the filesystem.

An Underfolder dataset is a collection of samples. A sample is a collection of items. An item is a unitary block of data, i.e., a multi-channel image, a python object, a dictionary and more. Any valid underfolder dataset must contain a subfolder named data with samples and items. Also, global shared items can be stored in the root folder.
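Conceptually, the hierarchy is dataset → sample → item. A minimal sketch using plain Python dicts and lists (not pipelime's actual classes, just an illustration of the nesting):

```python
# A hypothetical dataset: a sequence of samples;
# each sample maps item names to unitary blocks of data.
dataset = [
    {
        "image": [[0, 0, 0], [255, 255, 255]],  # image data (nested lists here)
        "label": 7,                             # a plain python object
        "metadata": {"source": "camera_0"},     # a dictionary
    },
    {
        "image": [[255, 0, 0], [0, 255, 0]],
        "label": 3,
        "metadata": {"source": "camera_1"},
    },
]

assert len(dataset) == 2     # two samples
assert len(dataset[0]) == 3  # three items per sample
assert set(dataset[0]) == {"image", "label", "metadata"}
```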

Items are named using the following convention:

    $ID_$ITEM.$EXT

Where:

  • $ID is the sample index; it must be a unique integer for each sample.
  • $ITEM is the item name.
  • $EXT is the item extension.

We currently support many common file formats and others can be added by users:

  • .png, .jpeg/.jpg/.jfif/.jpe, .bmp for images
  • .tiff/.tif for multi-page images and multi-dimensional numpy arrays
  • .yaml/.yml, .json and .toml/.tml for metadata
  • .txt for numpy 2D matrix notation
  • .npy for general numpy arrays
  • .pkl/.pickle for picklable python objects
  • .bin for generic binary data

Root files follow the same convention, but they lack the sample identifier part, i.e., $ITEM.$EXT.
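For instance, a small dataset with two samples and a shared calibration file might look like this (a hypothetical layout, for illustration only; actual names depend on your data):

```text
my_dataset/
├── calibration.yml       # global shared item: $ITEM.$EXT
└── data/
    ├── 0_image.png       # sample 0, item "image"
    ├── 0_label.json      # sample 0, item "label"
    ├── 1_image.png
    └── 1_label.json
```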

Reading an Underfolder Dataset

pipelime provides an intuitive interface to read, manipulate and write Underfolder datasets. No complex signatures, weird object iterators, or boilerplate code: you just need a SamplesSequence:

    from pipelime.sequences import SamplesSequence

    # Read an underfolder dataset with a single line of code
    dataset = SamplesSequence.from_underfolder('tests/sample_data/datasets/underfolder_minimnist')

    # A dataset behaves like a Sequence
    print(len(dataset))             # the number of samples
    sample = dataset[4]             # get the fifth sample

    # A sample is a mapping
    print(len(sample))              # the number of items
    print(set(sample.keys()))       # the items' keys

    # An item is an object wrapping the actual data
    image_item = sample["image"]    # get the "image" item from the sample
    print(type(image_item))         # <class 'pipelime.items.image_item.PngImageItem'>
    image = image_item()            # actually loads the data from disk (may have been on the cloud as well)
    print(type(image))              # <class 'numpy.ndarray'>
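Note that indexing a sample returns an item wrapper, and the data is loaded from storage only when you call it. The idea behind this lazy loading can be sketched in plain Python (a simplified stand-in, not pipelime's actual item classes):

```python
import tempfile
from pathlib import Path


class LazyItem:
    """A simplified item: holds a file reference, loads data only on call."""

    def __init__(self, path):
        self._path = Path(path)
        self._cache = None

    def __call__(self):
        # Read (and cache) the data only when actually requested.
        if self._cache is None:
            self._cache = self._path.read_text()
        return self._cache


# Demo with a temporary file standing in for an item on disk.
with tempfile.TemporaryDirectory() as tmp:
    item_path = Path(tmp) / "0_label.txt"
    item_path.write_text("7")

    item = LazyItem(item_path)  # creating the item does not touch the disk...
    data = item()               # ...the file is read only here
    assert data == "7"
```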

Writing an Underfolder Dataset

You can write a dataset by calling the associated operation:

    # Attach a "write" operation to the dataset
    dataset = dataset.to_underfolder('/tmp/my_output_dataset')

    # Now run over all the samples
    dataset.run()

    # You can easily spawn multiple processes if needed
    dataset.run(num_workers=4)
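With num_workers greater than zero, samples are distributed across multiple workers. As a rough stdlib analogy (not pipelime's actual implementation), this is like mapping a per-sample function over the sequence with a pool:

```python
from concurrent.futures import ThreadPoolExecutor


def write_sample(sample):
    # Placeholder for per-sample work, e.g., serializing items to disk.
    return dict(sample)


samples = [{"label": i} for i in range(8)]

# Sequential run (like the default num_workers=0).
results = [write_sample(s) for s in samples]

# Parallel run: hand samples to a pool of workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(write_sample, samples))

# Same output, regardless of how the work was scheduled.
assert results == parallel_results
```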
