
A well-organized way and tool to write an MRI data analysis workflow

Project description

Install

pip install neuroworkflow

Overview

neuroworkflow provides a way to structure code for processing neural data, freeing researchers from file finding, name joining, and other chores that come with writing a data processing workflow.

It also offers tools for quickly inspecting a pipeline, such as:

  • generate a pseudo dataset, which shows how a dataset should be arranged before running a pipeline and what it will look like afterwards
  • generate a sub-pipeline from an existing pipeline, which is useful when only a partial result is needed or when an intermediate result is reused

Usage

To write a data processing pipeline with neuroworkflow, you use three basic objects: Component, Work, and Workflow. When running a pipeline, you additionally provide metadata describing what is processed, such as the subject, session, and other settings.
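A minimal sketch of what this run metadata might look like, assuming it is passed as a plain dictionary (the keys shown are illustrative assumptions, not a documented schema):

run_meta_data = {
    'subject': '01',                  # hypothetical key: subject to process
    'session': '1',                   # hypothetical key: session of that subject
    'dataset_dir': '/data/my_study',  # hypothetical key: where the dataset lives
}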

Component

A Component is the representation of a file.

Initialization

When writing a pipeline, define a Component by giving it properties, for example:

example_component = Component(suffix='bold', datatype='func', extension='nii')

Some of the available properties are:

property        function                      example
desc            description                   smooth6mm
datatype        data type                     func
suffix          suffix                        bold
extension       file extension                nii.gz
task            task                          eyesopen
space           space                         MNI152
echo            echo number                   2
data_place      folder to place the file in   echo_2
use_extension   force use of the extension    true

When the pipeline runs, each Component generates its file name and directory according to its properties, the metadata, and its position in the whole pipeline.
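For instance, a smoothed BOLD image in MNI space could be declared by combining several of the properties above (a sketch only; the actual path depends on the metadata and on where the component sits in the pipeline):

smoothed_bold = Component(datatype='func',
                          suffix='bold',
                          extension='nii.gz',
                          task='eyesopen',
                          space='MNI152',
                          desc='smooth6mm')

# At run time this might resolve to a BIDS-like path such as
# func/sub-<subject>_task-eyesopen_space-MNI152_desc-smooth6mm_bold.nii.gz,
# depending on the metadata provided.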

init_from

To simplify the initialization of a Component, one can use init_from to initialize one Component from another:

example_epi_json = Component.init_from(origin_epi_list, extension='json')

This initializes a Component with the same properties as origin_epi_list, except that the extension is overridden with 'json'.

init_multi

Work

A Work is the representation of a processing step.

Initialization

When writing a pipeline, add Components to a Work's input_components and output_components to indicate what is processed and what will be generated, and pass an action as the processing function:


copy_epis = Work('copy_epi',
                 [origin_epi_list],          # input components
                 [copied_origin_epi_list],   # output components
                 action=copy_file)

where copy_file is:

import shutil

def copy_file(input_file, output_file):
    # copy the first (and only) input file to the output location
    shutil.copyfile(input_file[0], output_file[0])

An action should accept either two parameters (input_file, output_file) or three (input_file, output_file, run_meta_data).
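For example, an action that also records where its output came from could take the run metadata as a third parameter. A minimal sketch (the function name and metadata keys are illustrative assumptions, not part of the documented API):

import json
import shutil

def copy_with_provenance(input_file, output_file, run_meta_data):
    # copy the file, then write a small sidecar noting the subject and session
    shutil.copyfile(input_file[0], output_file[0])
    sidecar = output_file[0] + '.provenance.json'
    with open(sidecar, 'w') as f:
        json.dump({'subject': run_meta_data.get('subject'),    # hypothetical key
                   'session': run_meta_data.get('session')},   # hypothetical key
                  f)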

CommandWork
