Framework to organize processing code outputs to/from disk, with processing chaining and versioning, through a common, easy-to-use API.
Pypelines
More documentation is available on the project's documentation site.
Installation
```
pip install processing-pypelines
```
or with pdm:
```
pdm add processing-pypelines
```
Intro
This package aims to provide a lightweight yet powerful framework for chaining processing tasks on experimental sessions. It does so in a "data agnostic" way, meaning it can interface with already existing packages that themselves provide some form of pipelining (such as suite2p, for example).
Out of the box, it most easily handles processing steps that take a given set of parameters and/or files as input, and output built-in Python dictionaries or pandas DataFrames.
To interface it with more complex or intricate input/output data structures, you need to implement custom versions of `DiskObject`. These are the readers/writers that define how to check for the existence of a `Step` output (to know when to skip it if the output already exists), how to save such an output, and how to load it in case a `Step` further down the `Pipeline` needs it for further processing stages.
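As a rough illustration, a custom disk backend could look like the sketch below. This is not the package's actual API: the hook names (`check_disk`, `save`, `load`), the constructor signature, and the file-naming scheme are all assumptions made for illustration; mirror the real interface of `pypelines.pickle_backend.PickleDiskObject` (used in the example further down) when writing your own.

```python
# Hypothetical sketch of a custom DiskObject. The hook names and constructor
# are assumptions for illustration only; in practice, subclass the package's
# base class and follow the interface of pickle_backend.PickleDiskObject.
import json
from pathlib import Path


class JsonDiskObject:
    def __init__(self, session, step):
        self.session = session
        self.step = step

    def file_path(self) -> Path:
        # One file per session and step; this naming scheme is illustrative.
        return Path(self.session.path) / f"{self.step.step_name}.json"

    def check_disk(self) -> bool:
        # Used to decide whether the Step can be skipped.
        return self.file_path().exists()

    def save(self, output) -> None:
        self.file_path().write_text(json.dumps(output))

    def load(self):
        # Called when a downstream Step requires this output.
        return json.loads(self.file_path().read_text())
```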
Concepts
Accordingly, chopping processing into tasks relies on two key concepts that make developers' lives easier: the `Pipe` and the `Step`.

- `Step`: a Step is a processing stage. It takes from 0 to a virtually unlimited number of inputs and performs Python execution to generate an output. The existence of its output can be verified with the attached `DiskObject`.
- `Pipe`: a Pipe is a collection of Steps. In real-life situations, it is very common for several Steps to each add some data to the same data structure. For example, imaging data can be acquired, then segmented into tables of neuron fluorescence over time. As execution goes further down the processing pipeline, some Steps may compute the responsiveness of the neurons to one or another type of stimulation obtained in another Step. It then seems logical to only add this responsiveness information to the existing tables, rather than overcrowd the disk with duplicated stages of increasingly detailed/further-processed data. The Pipe serves this purpose: when a given `Step` requires the output of another `Step`, it looks up which `Pipe` (i.e. general data structure) that Step is attached to, and loads the most advanced output available in that `Pipe`. This means that a `Step` output at level 6 of a `Pipe` must also be valid, for the data it holds, for a `Step` that requires level 1 of this `Pipe`. (Rule of thumb: don't delete data from a `Pipe` output as steps progress; only add new fields/columns to it, as sketched just below this list.)
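The additive rule above means a Step at a higher level should return everything its predecessors returned, plus new fields. A minimal illustration with pandas, using plain functions as stand-ins for Steps (the column names here are invented for the example):

```python
import pandas


def read_step():
    # Level 1 output of the pipe: the base trials table.
    return pandas.DataFrame({"trial#": [0, 1], "trial_start_ms": [0.0, 1500.0]}).set_index("trial#")


def frame_times_step(trials_data, sample_frequency_ms=1000 / 30):
    # Level 2 output: same rows and columns as level 1, plus one new column.
    # Dropping "trial_start_ms" here would break a Step that requires level 1
    # of this pipe but gets served this more advanced output instead.
    trials_data["trial_start_frame"] = (trials_data["trial_start_ms"] / sample_frequency_ms).round().astype(int)
    return trials_data


level2 = frame_times_step(read_step())  # superset of read_step()'s output
```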
Below is an example of a full Pipeline graph:

[pipeline dependency graph image]

On the same column, the dots represent the different Steps of a Pipe; a Pipe is thus a single column of this graph. Links between Steps represent the dependencies between them.
The most useful things this package enables you to do are:

- auto-resolve this graph simply from what you declared each `Step` needs as input requirements (when the `Pipeline` is created at runtime, it raises exceptions about any cyclic dependencies you might have introduced, to help you sort them out);
- automatically run the required upstream `Step`s whenever you ask for a specific `Step` arbitrarily positioned in the tree.
Examples

To implement a Pipeline, you have to define at least one Step, attached inside a Pipe.
Here is a simple example. In only about 70 lines of code, it performs automatic chaining of 4 Steps, with loading and writing of intermediate processing outputs, without the developer having to take care of any of it.
You can test this example and see the result on your own computer. (Two CSV files located in the tests/data folder of this repository are used for demonstration purposes.)
```python
from pypelines import Pipeline, BasePipe, BaseStep, Session, pickle_backend
from pathlib import Path

import pandas, numpy, json

ROIS_URL = "https://raw.githubusercontent.com/JostTim/pypelines/refs/heads/main/tests/data/rois_df.csv"
TRIALS_URL = "https://raw.githubusercontent.com/JostTim/pypelines/refs/heads/main/tests/data/trials_df.csv"

pipeline = Pipeline("my_neurophy_pipeline")


@pipeline.register_pipe
class ROIsTablePipe(BasePipe):
    pipe_name = "rois_df"
    disk_class = pickle_backend.PickleDiskObject  # outputs are pickled to disk

    class InitialCalculation(BaseStep):
        step_name = "read"

        def worker(self, session, extra=""):
            # Read the ROIs table and parse the per-ROI fluorescence traces.
            rois_data = pandas.read_csv(ROIS_URL).set_index("roi#")
            rois_data["F_norm"] = rois_data["F_norm"].apply(json.loads)
            return rois_data


@pipeline.register_pipe
class TrialsTablePipe(BasePipe):
    pipe_name = "trials_df"
    disk_class = pickle_backend.PickleDiskObject

    class InitialRead(BaseStep):
        step_name = "read"

        def worker(self, session, extra=""):
            trials_data = pandas.read_csv(TRIALS_URL).set_index("trial#")
            return trials_data

    class AddFrameTimes(BaseStep):
        step_name = "frame_times"
        requires = "trials_df.read"  # depends on the "read" Step of this Pipe

        def worker(self, session, extra="", sample_frequency_ms=1000 / 30):
            def get_frame(time_ms):
                return int(numpy.round(time_ms / sample_frequency_ms))

            # Load the level-1 output of this Pipe and add frame-index columns.
            trials_data = self.load_requirement("trials_df", session)
            trials_data["trial_start_frame"] = trials_data["trial_start_global_ms"].apply(get_frame)
            trials_data["stimulus_start_frame"] = trials_data["stimulus_start_ms"].apply(get_frame)
            trials_data["stimulus_change_frame"] = trials_data["stimulus_change_ms"].apply(get_frame)
            return trials_data


@pipeline.register_pipe
class TrialsCrossRoisTablePipe(BasePipe):
    pipe_name = "trials_rois_df"
    disk_class = pickle_backend.PickleDiskObject

    class InitialMerge(BaseStep):
        step_name = "merge"
        requires = ["rois_df.read", "trials_df.frame_times"]  # cross-Pipe dependencies

        def worker(self, session, extra=""):
            trials_data = self.load_requirement("trials_df", session)
            rois_data = self.load_requirement("rois_df", session)
            # Trial boundaries in frames; the last boundary is the trace length.
            trials_starts = trials_data["trial_start_frame"].to_list() + [len(rois_data["F_norm"].iloc[0])]
            trials_rois_data = []
            for roi_id, roi_details in rois_data.iterrows():
                roi_details = roi_details.to_dict()
                roi_fluorescence = roi_details.pop("F_norm")
                for trial_nb, (trial_id, trial_details) in enumerate(trials_data.iterrows()):
                    # One row per (roi, trial), with the trace sliced to that trial.
                    new_row = {"roi#": roi_id, "trial#": trial_id}
                    new_row["F_norm"] = roi_fluorescence[trials_starts[trial_nb]:trials_starts[trial_nb + 1]]
                    new_row.update(trial_details.to_dict())
                    new_row.update(roi_details)
                    trials_rois_data.append(new_row)
            return pandas.DataFrame(trials_rois_data).set_index(["roi#", "trial#"])


pipeline.graph.draw()  # render the dependency graph of the Pipeline

session = Session(subject="test", date="2025-05-15", number=1, path=".", auto_path=True)

# Generate the "merge" Step; required upstream Steps run automatically.
trials_roi_df = pipeline.trials_rois_df.merge.generate(session=session, check_requirements=True)
```
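Because each Step's output existence is checked through its `DiskObject` (as described in the intro), calling `generate` a second time for the same session should reload the pickled results rather than recompute them; the exact skip/reload behavior may depend on flags and version, so treat this as a sketch:

```python
# Second call for the same session: each Step's PickleDiskObject finds the
# previously saved output on disk, so results are loaded instead of re-executed.
trials_roi_df_again = pipeline.trials_rois_df.merge.generate(session=session, check_requirements=True)
```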
File details
Details for the file processing_pypelines-0.0.93.tar.gz.
File metadata
- Download URL: processing_pypelines-0.0.93.tar.gz
- Upload date:
- Size: 9.0 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `c6144c4a11a8acc7c6ad9a2659848c5d1a2254b780e128b0be8e492cacbb63fe` |
| MD5 | `5c5f6ca76a351dd4b33832871605114d` |
| BLAKE2b-256 | `67279fc497cfa32cdf73e812afc03b522e05a4d23860a78a9db5bee595c45664` |
File details
Details for the file processing_pypelines-0.0.93-py3-none-any.whl.
File metadata
- Download URL: processing_pypelines-0.0.93-py3-none-any.whl
- Upload date:
- Size: 61.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `caf98ebc3b8f6a807436dcc50676f4cbbf77af6e3ffd7763a3e65b30405bed3d` |
| MD5 | `34f2e256d41e923b8fc1ecb36c7a8cc8` |
| BLAKE2b-256 | `d4b713d06207ccf75b589ca421f556773030854d673e9d2eb836e8364a229d31` |