A well-organized way and tool to write an MRI data analysis workflow
## Project description
## Install

```
pip install neuroworkflow
```
## Overview
Neuroworkflow provides a structured way to organize code for processing neuroimaging data, freeing researchers from file finding, name joining, and other housekeeping when writing a data processing workflow.

It also offers tools for quickly inspecting a pipeline, such as:

- generating a pseudo dataset that shows how a dataset should be arranged before running a pipeline, and how it looks after the pipeline has run
- generating a sub-pipeline from an existing pipeline, which is useful when only a partial result is needed or when an intermediate result should be reused
## Usage
To write a data processing pipeline with neuroworkflow, you use three basic objects: Component, Work, and Workflow. To run a pipeline, you additionally provide metadata describing what is processed, such as the subject, session, and other settings.
### Component
A Component is the representation of a file.
#### Initialization
When writing a pipeline, you define a Component by setting its properties:

```python
example_component = Component(suffix='bold', datatype='func', extension='nii')
```
Some of the available properties are:
| Property | Description | Example |
|---|---|---|
| desc | description label | smooth6mm |
| datatype | data type | func |
| suffix | suffix | bold |
| extension | file extension | nii.gz |
| task | task | eyesopen |
| space | coordinate space | MNI152 |
| echo | echo | 2 |
| data_place | folder in which the file is placed | echo_2 |
| use_extension | force the extension to be used | true |
At run time, a Component generates the file's name and directory from its properties, the metadata, and its position in the whole pipeline.
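For illustration, here is a Component that combines several of the properties listed above. The variable name and the resolved path shown in the comment are assumptions (a BIDS-style layout is assumed); the actual name and directory depend on the metadata and on where the component sits in the pipeline.

```python
# Illustrative sketch: property values come from the table above.
smoothed_bold = Component(
    suffix='bold',
    datatype='func',
    task='eyesopen',
    space='MNI152',
    desc='smooth6mm',
    extension='nii.gz',
)
# With metadata such as subject '01' and session '1', this might resolve to a
# BIDS-like path such as (hypothetical, not guaranteed output):
# sub-01/ses-1/func/sub-01_ses-1_task-eyesopen_space-MNI152_desc-smooth6mm_bold.nii.gz
```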
#### init_from
To simplify initialization, you can derive a new Component from an existing one with `init_from`:

```python
example_epi_json = Component.init_from(origin_epi_list, extension='json')
```
This initializes a new Component that inherits the properties of `origin_epi_list`, with the extension overridden to `json`.
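As a sketch of the idea, assuming `origin_epi_list` was defined with the properties used earlier in this section, the derived component is intended to behave like spelling the shared properties out by hand:

```python
# Hypothetical definition of the original component, for illustration only.
origin_epi_list = Component(suffix='bold', datatype='func', extension='nii')

# Deriving the JSON sidecar component...
example_epi_json = Component.init_from(origin_epi_list, extension='json')

# ...is intended to behave like repeating the shared properties manually:
example_epi_json_manual = Component(suffix='bold', datatype='func', extension='json')
```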
#### init_multi
### Work
A Work is the representation of a processing step.
#### Initialization
When writing a pipeline, you add Components to a Work's input_components and output_components to indicate what is processed and what will be generated, and you pass an action that performs the processing:
```python
copy_epis = Work('copy_epi',
                 [origin_epi_list],
                 [copied_origin_epi_list],
                 action=copy_file)
```
where `copy_file` is:
```python
import shutil

def copy_file(input_file, output_file):
    shutil.copyfile(input_file[0], output_file[0])
```
An action must accept either two parameters (input_file, output_file) or three (input_file, output_file, run_meta_data).
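As a minimal sketch of the three-parameter form, here is a hypothetical action that also receives the run metadata; the structure of `run_meta_data` is not specified here, so the example only prints it before copying:

```python
import shutil

def copy_file_with_meta(input_file, output_file, run_meta_data):
    # Hypothetical action for illustration: report which run is being
    # processed, then copy the single input file to the output location.
    print("copying:", run_meta_data)
    shutil.copyfile(input_file[0], output_file[0])
```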
### CommandWork
## Download files
- Source Distribution: neuroworkflow-0.1.1b0.tar.gz
- Built Distribution: neuroworkflow-0.1.1b0-py3-none-any.whl
### File details
Details for the file neuroworkflow-0.1.1b0.tar.gz.
#### File metadata
- Download URL: neuroworkflow-0.1.1b0.tar.gz
- Upload date:
- Size: 46.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.5
#### File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a33b0a2c26859dac5b6db41ef57eddf0cde8c28f6f591f72e33c99d7df16f767 |
| MD5 | 50cafb76127c7839453f5d53399c13d7 |
| BLAKE2b-256 | f1d04faa9f8525f23262da150802529dc90e60b574ffb5328b4c38f0a010dfc6 |
### File details
Details for the file neuroworkflow-0.1.1b0-py3-none-any.whl.
#### File metadata
- Download URL: neuroworkflow-0.1.1b0-py3-none-any.whl
- Upload date:
- Size: 9.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.5
#### File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6b91e8bb0f1e104ecc456222483ffe1316248ada4171aad17b70d9a5ba528dea |
| MD5 | 1535d9407b5aab9d666624ab560f28fe |
| BLAKE2b-256 | 56cc213d2af461968ceabd669da1745e6d38e7a9840f9aa5072d216f3d985556 |