
A Python library for defining, managing, and executing function pipelines.


PipeFunc: Structure, Automate, and Simplify Your Computational Workflows 🕸

Stop micromanaging execution. Focus on the science. Capture your workflow's essence with function pipelines, represent computations as DAGs, and automate parallel sweeps.



:thinking: What is this?


pipefunc is a Python library designed for creating and executing function pipelines. By simply annotating functions and specifying their outputs, it builds a pipeline that automatically manages the execution order based on dependencies. Visualize the pipeline as a directed graph, execute the pipeline for all (or specific) outputs, add multidimensional sweeps, automatically parallelize the pipeline, and get nicely structured data back.

[!NOTE] A pipeline is a sequence of interconnected functions, structured as a Directed Acyclic Graph (DAG), where outputs from one or more functions serve as inputs to subsequent ones. pipefunc streamlines the creation and management of these pipelines, offering powerful tools to efficiently execute them.

Whether you're working with data processing, scientific computations, machine learning (AI) workflows, or any other scenario involving interdependent functions, pipefunc helps you focus on the logic of your code while it handles the intricacies of function dependencies and execution order.

:rocket: Key Features

  1. 🚀 Function Composition and Pipelining: Create pipelines by using the @pipefunc decorator; execution order is automatically handled.
  2. 📊 Pipeline Visualization: Generate visual graphs of your pipelines to better understand the flow of data.
  3. 👥 Multiple Outputs: Handle functions that return multiple results, allowing each result to be used as input to other functions (see the sketch after this list).
  4. 🔁 Map-Reduce Support: Perform "map" operations to apply functions over data and "reduce" operations to aggregate results, allowing n-dimensional mappings.
  5. 👮 Type Annotation Validation: Validates type annotations between connected functions to ensure type consistency.
  6. 🎛️ Resource Usage Profiling: Get reports on CPU usage, memory consumption, and execution time to identify bottlenecks and optimize your code.
  7. 🔄 Automatic Parallelization: Automatically runs pipelines in parallel (locally or remotely) with shared memory and disk caching options.
  8. 🔍 Parameter Sweep Utilities: Generate parameter combinations for parameter sweeps and optimize the sweeps with result caching.
  9. 💡 Flexible Function Arguments: Call functions with different argument combinations, letting pipefunc determine which other functions to call based on the provided arguments.
  10. 🏗️ Leverages giants: Builds on top of NetworkX for graph algorithms, NumPy for multi-dimensional arrays, and optionally Xarray for labeled multi-dimensional arrays, Zarr to store results in memory/disk/cloud or any key-value store, and Adaptive for parallel sweeps.
  11. 🤓 Nerd stats: >600 tests with 100% test coverage, fully typed, only 4 required dependencies, all Ruff Rules, all public API documented.
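
For example, the multiple-outputs feature (item 3 above) can be used roughly as in the following sketch. Treat it as a minimal illustration rather than a definitive API reference: it assumes that output_name accepts a tuple of names matching the elements of the returned tuple, and the stats/zscore functions are made up for this example.

from pipefunc import pipefunc, Pipeline
import numpy as np

@pipefunc(output_name=("mean", "std"))  # assumed: a tuple output_name names each element of the returned tuple
def stats(data):
    arr = np.asarray(data)
    return arr.mean(), arr.std()

@pipefunc(output_name="zscore")
def zscore(data, mean, std):
    # Downstream functions can consume each named output individually.
    return (np.asarray(data) - mean) / std

pipeline = Pipeline([stats, zscore])
print(pipeline("zscore", data=[1.0, 2.0, 3.0, 4.0]))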

:test_tube: How does it work?

pipefunc provides a Pipeline class that you use to define your function pipeline. You add functions to the pipeline with the @pipefunc decorator, which also lets you specify the function's output name. Once your pipeline is defined, you can execute it for specific output values, simplify it by combining function nodes, visualize it as a directed graph, and profile the resource usage of the pipeline functions. For more detailed instructions, please check the usage example provided in the package.

Here is a simple example that illustrates pipefunc's primary features:

from pipefunc import pipefunc, Pipeline

# Define three functions that will be a part of the pipeline
@pipefunc(output_name="c")
def f_c(a, b):
    return a + b

@pipefunc(output_name="d")
def f_d(b, c):
    return b * c

@pipefunc(output_name="e")
def f_e(c, d, x=1):
    return c * d * x

# Create a pipeline with these functions
pipeline = Pipeline([f_c, f_d, f_e], profile=True)  # `profile=True` enables resource profiling

# Call the pipeline directly for different outputs:
assert pipeline("d", a=2, b=3) == 15
assert pipeline("e", a=2, b=3) == 75

# Visualize the pipeline
pipeline.visualize()

# Show resource reporting (only works if profile=True)
pipeline.print_profiling_stats()

This example demonstrates defining a pipeline with the functions f_c, f_d, and f_e, executing it for different outputs, visualizing the pipeline graph, and reporting on resource usage. This basic example should give you an idea of how to use pipefunc to construct and manage function pipelines.
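
Because pipefunc determines which functions to call from the arguments you provide (feature 9 above), you can also pass intermediate values instead of the root inputs. Here is a small sketch continuing the example above; it relies on the documented flexible-arguments behavior, so treat the exact call pattern as an illustration:

# Provide the intermediate results c and d directly; only f_e needs to run.
assert pipeline("e", c=5, d=15) == 75

# Identical result to deriving c and d from the root inputs a and b:
assert pipeline("e", a=2, b=3) == 75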

The following example demonstrates how to perform a map-reduce operation using pipefunc:

from pipefunc import pipefunc, Pipeline
from pipefunc.map import load_outputs
import numpy as np

@pipefunc(output_name="c", mapspec="a[i], b[j] -> c[i, j]")  # the mapspec is used to specify the mapping
def f(a: int, b: int):
    return a + b

@pipefunc(output_name="mean")  # there is no mapspec, so this function takes the full 2D array
def g(c: np.ndarray):
    return np.mean(c)

pipeline = Pipeline([f, g])
inputs = {"a": [1, 2, 3], "b": [4, 5, 6]}
pipeline.map(inputs, run_folder="my_run_folder", parallel=True)
result = load_outputs("mean", run_folder="my_run_folder")
print(result)  # prints 7.0

Here the mapspec argument specifies the mapping between the inputs and outputs of the f function: it takes the product of the a and b input lists and computes the sum of each pair. The g function then computes the mean of the resulting 2D array. The map method executes the pipeline for the given inputs, and the load_outputs function loads the result of the g function from the specified run folder.
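
The same mapspec syntax also covers element-wise ("zipped") maps: reusing an index on both inputs pairs them up instead of taking their product. Below is a minimal sketch; it assumes that pipeline.map returns result objects keyed by output name (with an .output attribute) when no run_folder is given, and the load_outputs approach shown above works here as well:

from pipefunc import pipefunc, Pipeline

@pipefunc(output_name="c", mapspec="a[i], b[i] -> c[i]")  # shared index i zips a and b element-wise
def add(a: int, b: int):
    return a + b

pipeline = Pipeline([add])
results = pipeline.map({"a": [1, 2, 3], "b": [4, 5, 6]})  # no run_folder: keep results in memory
print(results["c"].output)  # [5 7 9]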

:notebook: Jupyter Notebook Example

See the detailed usage example and more in our example.ipynb.

:computer: Installation

Install the latest stable version from conda (recommended):

conda install pipefunc

or from PyPI:

pip install "pipefunc[all]"

or install the latest main branch with:

pip install -U https://github.com/pipefunc/pipefunc/archive/main.zip

or clone the repository and do an editable dev install (recommended for development):

git clone git@github.com:pipefunc/pipefunc.git
cd pipefunc
pip install -e ".[dev]"

:hammer_and_wrench: Development

We use pre-commit to manage pre-commit hooks, which helps us ensure that our code is always clean and compliant with our coding standards. To set it up, install pre-commit with pip and then run the install command:

pip install pre-commit
pre-commit install
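
After installing the hooks, you can also run every hook against the entire codebase at once; this is standard pre-commit usage rather than anything pipefunc-specific:

pre-commit run --all-files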
