
Extract meta-data from DICOM and NIFTI files

MRI Meta-data Extractor

This is a Python library providing methods to scan folders, extract meta-data from files (DICOM, NIFTI, …) and store them in a database.

Install

Run `pip install data-tracking`. (Only tested with Python 3.)

Use

Import the functions you need like this: `from data_tracking.files_recording import create_provenance, visit`.

Create a provenance entity using:

create_provenance(dataset, software_versions, db_url)

This creates (or retrieves, if it already exists) a provenance entity, stores it in the database and returns a provenance ID.
* param dataset: Name of the data set.
* param software_versions: (optional) Version of the software components used to get the data. It is a dictionary
  that accepts the following fields:
    - matlab_version
    - spm_version
    - spm_revision
    - fn_called
    - fn_version
    - others
* param db_url: (optional) Database URL. If not defined, it looks for an Airflow configuration file.
* return: Provenance ID.
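As a sketch (the dataset name, version strings and database URL below are placeholder assumptions, not values from this project), a call could look like this:

```python
# All version strings and the db_url are illustrative placeholders.
SOFTWARE_VERSIONS = {
    "matlab_version": "2016b",
    "spm_version": "SPM12",
    "spm_revision": "r6906",
    "fn_called": "dicom_to_nifti",  # hypothetical function name
    "fn_version": "1.0.0",
    "others": "",
}

def record_provenance(db_url="postgresql://postgres:postgres@localhost:5432/postgres"):
    """Create (or fetch) a provenance entity; requires a reachable database."""
    # Imported here so the sketch is readable without the package installed.
    from data_tracking.files_recording import create_provenance
    return create_provenance("demo_dataset",
                             software_versions=SOFTWARE_VERSIONS,
                             db_url=db_url)
```

Omitting `db_url` makes the library fall back to the Airflow configuration file, as described above.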

Scan a folder to populate the database :

visit(folder, provenance_id, step_name, previous_step_id, config, db_url, is_organised)

Record all files from a folder into the database.
The files are listed in the DB. If a file has been copied from the previous step without any transformation, this is
detected and marked in the DB. The type of each file is detected and stored in the DB. If a file (e.g. a DICOM
file) contains meta-data, it is stored in the DB as well.
* param folder: folder path.
* param provenance_id: provenance label.
* param step_name: Name of the processing step that produced the folder to visit.
* param previous_step_id: (optional) previous processing step ID. If not defined, we assume this is the first
  processing step.
* param config: List of flags:
    - boost: (optional) When enabled, all files in the same folder are assumed to share the same meta-data,
      which makes processing roughly twice as fast. This option is enabled by default.
    - session_id_by_patient: Rarely, a data set uses study IDs that are unique per patient rather than for the whole
      study (e.g. LREN data). In that case, enable this flag to use PatientID + StudyID as the session ID.
    - visit_id_in_patient_id: Rarely, a data set mixes patient IDs and visit IDs (e.g. LREN data). In that case,
      enable this flag to try to split PatientID into a VisitID and a PatientID.
    - visit_id_from_path: Enable this flag to get the visit ID from the folder hierarchy instead of DICOM meta-data
      (e.g. can be useful for PPMI).
    - repetition_from_path: Enable this flag to get the repetition ID from the folder hierarchy instead of DICOM meta-data
      (e.g. can be useful for PPMI).
* param db_url: (optional) Database URL. If not defined, it looks for an Airflow configuration file.
* param is_organised: (optional) Disable this flag when scanning a folder that has not been organised yet
  (this should only affect NIFTI files).
* return: Processing step ID.
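A sketch of a typical call (the folder path, step name and flag choices are assumptions for illustration; adjust them to your data set):

```python
# Flags are passed as a list of names; 'boost' is enabled by default anyway
# and is shown here only to illustrate the config list.
DEFAULT_CONFIG = ["boost", "visit_id_from_path"]

def scan_folder(provenance_id, db_url=None):
    """Record every file under /data/nifti for the given provenance ID."""
    # Imported here so the sketch is readable without the package installed.
    from data_tracking.files_recording import visit
    return visit(
        "/data/nifti",          # placeholder folder path
        provenance_id,
        "dicom_to_nifti",       # hypothetical step name
        config=DEFAULT_CONFIG,
        db_url=db_url,          # None -> fall back to Airflow configuration
    )
```

`previous_step_id` is left out here, which the library treats as "this is the first processing step".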

Build

Run `./build.sh`. (Builds for Python 3.)

(This includes the auto-generation of README.rst from README.md.)

Test

Enter the tests directory.

With Docker

Run ./test.sh

Without Docker

  • Run a Postgres database on localhost:5432.

  • Run `nosetests unit_test.py`

Publish on PyPi

Run ./publish.sh.

(This builds the project prior to pushing on PyPi)

NOTE: Do not forget to update the version number in setup.py prior to publishing.

NOTES

  • This project contains a reference to a Git submodule. You can use the --recursive flag when cloning the project to clone the submodule too.

Acknowledgements

This work has been funded by the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement no. 604102 (HBP).

This work is part of SP8 of the Human Brain Project (SGA1).
