CMPL — CMRR MRI Processing Libraries
CMPL (CMRR MRI Processing Libraries) is a research-oriented Python toolkit for MRI data I/O, reconstruction, quantitative mapping, segmentation utilities, and visualization. It focuses on practical building blocks that can be composed into MRI processing workflows.
- Project homepage: this repository
- License: see LICENSE
- Python: >= 3.10
Installation
CMPL is published as a Python package. A standard installation will also install the required dependencies listed in pyproject.toml.
pip install cmpl
Notes
- Some features require optional dependencies (e.g., SimpleITK), and the GPU-accelerated routines require a CUDA-enabled PyTorch build (the quantitative mapping functions currently expect CUDA tensors).
- If you use Jupyter for visualization widgets, ensure ipywidgets is enabled in your environment.
Package overview
CMPL exposes the following top-level namespaces for convenience (via cmpl.__init__):
- cmpl.utilities (alias cmpl.utils)
- cmpl.visualization (alias cmpl.vis)
- cmpl.segmentation (alias cmpl.seg)
- cmpl.quantitative_MRI (alias cmpl.qmr)
- cmpl.reconstruction (alias cmpl.recon)
You can also access the package version at runtime:
import cmpl
print(cmpl.__version__)
Key features and APIs
1) MRI k-space reconstruction (GRAPPA, SENSE)
Location: src/cmpl/reconstruction
GRAPPA (Generalized Autocalibrating Partially Parallel Acquisitions)
- 1D GRAPPA: cmpl.reconstruction.grappa.grappa_1D.grappa_1d_recon
- 2D GRAPPA: cmpl.reconstruction.grappa.grappa_2D.grappa_2d_recon
Axis ordering for GRAPPA
- Expected k-space shape: [frequency, phase, slice, coils]
- For 1D GRAPPA, a 3D variant is supported via is3D flag; internally, slices are handled along the third dimension.
- The undersampled k-space must contain acquired data in column 0 of each undersampled block (i.e., sampling starts at phase-encode index 0).
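To make the sampling convention concrete, here is a small NumPy sketch (with hypothetical array sizes) that builds an R = 2 undersampled k-space in the expected [frequency, phase, slice, coils] order, with acquired lines starting at phase-encode index 0:

```python
import numpy as np

# Hypothetical fully sampled k-space: [freq, phase, slice, coils]
rng = np.random.default_rng(0)
full_kspace = (rng.standard_normal((64, 64, 1, 8))
               + 1j * rng.standard_normal((64, 64, 1, 8))).astype(np.complex64)

R = 2  # reduction factor along the phase-encode axis
undersampled = np.zeros_like(full_kspace)
# Keep every R-th phase-encode line, starting at index 0
undersampled[:, ::R, :, :] = full_kspace[:, ::R, :, :]

# Phase-encode line 0 is acquired; line 1 is zeroed out
print(np.abs(undersampled[:, 1, :, :]).sum())  # 0.0
```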
Example — 1D GRAPPA (slice-wise)
import numpy as np
from cmpl.reconstruction.grappa.grappa_1D import grappa_1d_recon
# calibration_kspace, undersampled_kspace: complex64 numpy arrays of shape [freq, phase, slices, coils]
R = 2 # reduction factor along phase-encode
kx, ky = 5, 3 # kernel size (height, width)
recon_kspace = grappa_1d_recon(calibration_kspace, undersampled_kspace, R, kx, ky, is3D=False)
Example — 2D GRAPPA (accelerated in two phase directions)
import numpy as np
from cmpl.reconstruction.grappa.grappa_2D import grappa_2d_recon
# calibration_kspace, undersampled_kspace: complex64 numpy arrays of shape [freq, phase, slice, coils]
kernel_size = (5, 3, 3) # (height, width, depth) in k-space blocks
reduction_factors = (2, 2) # (phase_undersampling, slice_undersampling)
recon_kspace = grappa_2d_recon(calibration_kspace, undersampled_kspace, kernel_size, reduction_factors)
SENSE (CG-SENSE 2D)
- cmpl.reconstruction.sense.cg.CG_sense_2D(undersampled_image_space, coil_sensitivity, dims=[-3, -2])
- Inputs and output are torch complex tensors. A binary sampling mask is inferred as (undersampled_image_space != 0).
Example — CG-SENSE (2D)
import torch as pt
from cmpl.reconstruction.sense.cg import CG_sense_2D
# undersampled_image_space: complex tensor [..., x, y, coils]
# coil_sensitivity: complex tensor [..., x, y, coils]
final_recon = CG_sense_2D(undersampled_image_space, coil_sensitivity, dims=[-3, -2])
Utility — Convert k-space to image space
import numpy as np
from cmpl.utilities.utils import kspace_to_image_space
# kspace: shape [..., coils] (coil dimension may be last or specified via coil_column_loc)
combined_image, coil_images = kspace_to_image_space(kspace, fourier_dims=[0,1,2], coil_column_loc=-1, return_coil_images=True)
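Conceptually, this utility performs a centered inverse FFT per coil followed by a coil combination. A plain-NumPy sketch of the same idea (using a root-sum-of-squares combine, which may differ from the library's actual combination) looks like:

```python
import numpy as np

def kspace_to_image_rss(kspace, fourier_dims=(0, 1)):
    """Centered inverse FFT per coil, then root-sum-of-squares coil combine.

    kspace: complex array [..., coils], coil dimension last.
    """
    shifted = np.fft.ifftshift(kspace, axes=fourier_dims)
    coil_images = np.fft.fftshift(np.fft.ifftn(shifted, axes=fourier_dims),
                                  axes=fourier_dims)
    combined = np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=-1))
    return combined, coil_images

rng = np.random.default_rng(1)
kspace = (rng.standard_normal((32, 32, 4))
          + 1j * rng.standard_normal((32, 32, 4))).astype(np.complex64)
combined, coils = kspace_to_image_rss(kspace)
print(combined.shape, coils.shape)  # (32, 32) (32, 32, 4)
```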
2) Quantitative MRI — T2* mapping
Location: src/cmpl/quantitative_MRI/mapping.py
Functions (GPU expected; uses PyTorch CUDA internally)
- t2_star_two_parametric_2D(TE_all, images, ...)
  - images shape: (x, y, TE)
  - returns (T2_star_map, S0_map)
- t2_star_three_parametric_2D(TE_all, images, ...)
  - images shape: (x, y, TE); fits an additional constant offset C
  - returns (T2_star_map, S0_map, C_map)
- t2_star_two_parametric_3D(TE_all, images, ...)
  - images shape: (x, y, z, TE)
- t2_star_three_parametric_3D(TE_all, images, ...)
  - images shape: (x, y, z, TE)
- reconstruct_images(T2_star_map, S0_map, TE_all)
- calculate_rmse_percentage_s0(original_images, reconstructed_images, S0_map)
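The two-parameter fit assumes the standard mono-exponential decay S(TE) = S0 · exp(−TE / T2*); the three-parameter variant adds a constant offset C. A NumPy sketch of this forward model (which is what reconstruct_images evaluates, up to implementation details) is:

```python
import numpy as np

def forward_model(S0_map, T2_star_map, TE_all, C_map=None):
    """Evaluate S(TE) = S0 * exp(-TE / T2*) (+ C) per voxel.

    S0_map, T2_star_map: (x, y); TE_all: (TE,). Returns (x, y, TE).
    """
    TE = TE_all.reshape(1, 1, -1)
    signal = S0_map[..., None] * np.exp(-TE / T2_star_map[..., None])
    if C_map is not None:
        signal = signal + C_map[..., None]
    return signal

TE_all = np.array([3.5, 8.0, 12.5, 17.0], dtype=np.float32)
S0 = np.full((4, 4), 100.0, dtype=np.float32)
T2s = np.full((4, 4), 20.0, dtype=np.float32)  # same time unit as TE (e.g., ms)
images = forward_model(S0, T2s, TE_all)
print(images.shape)  # (4, 4, 4)
```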
Example — 2D two-parameter T2*
import numpy as np
from cmpl.quantitative_MRI.mapping import t2_star_two_parametric_2D
TE_all = np.array([3.5, 8.0, 12.5, 17.0], dtype=np.float32)
images = np.random.rand(256, 256, len(TE_all)).astype(np.float32)
T2_star_map, S0_map = t2_star_two_parametric_2D(TE_all, images, num_iterations=2000, initial_lr=0.01)
GPU requirement
- These functions call .cuda() on tensors. Ensure a CUDA-enabled PyTorch installation and a supported GPU.
3) Segmentation utilities
Location: src/cmpl/segmentation
- tools.py: projection helpers for 3D label volumes; NIfTI reading/writing through cmpl.utilities.io.
- project_3d_matrix(matrix, value, axis)
- extract_extrusion(extrusion_path, seg_path, projection_value=11)
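As an illustration of this kind of projection (not the library's exact implementation), a label-selective projection in NumPy, where value selects the label and axis the projection direction, might look like:

```python
import numpy as np

def project_label(volume, value, axis):
    """Project a 3D label volume along `axis`, keeping voxels equal to `value`.

    Returns a 2D binary mask marking positions where the label occurs
    anywhere along the projection axis.
    """
    return (volume == value).any(axis=axis).astype(np.uint8)

# Toy 3D label volume with label 11 in a small region
vol = np.zeros((8, 8, 8), dtype=np.int32)
vol[2:4, 2:4, 5] = 11

proj = project_label(vol, value=11, axis=2)  # project along z
print(proj.sum())  # 4 positions contain label 11
```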
- MRISegmentationTool.py: AutoSegmentation, a helper around a user-provided PyTorch model.
- AutoSegmentation.set_model(model, echos)
- AutoSegmentation.load_model_state_dict(model_path)
- AutoSegmentation.load_dicom_dir(directory)
- AutoSegmentation.auto_segment()
- AutoSegmentation.save_nifti(output_file_path)
Example — AutoSegmentation workflow
import torch as pt
from cmpl.segmentation.MRISegmentationTool import AutoSegmentation
model = ... # your torch.nn.Module
my_echos = [0,1,2,3,4,5,6] # indices of echoes the model expects
seg = AutoSegmentation(device='cuda', verbosity=1)
seg.set_model(model, my_echos)
seg.load_model_state_dict('path/to/model_weights.pth')
seg.load_dicom_dir('path/to/Dicoms')
seg.auto_segment()
seg.save_nifti('path/to/output_seg.nii.gz')
4) I/O and format conversion
Location: src/cmpl/utilities/io.py
- nifti_read(path, re_orient=True) -> (nifti, data)
- load_dicom_scan_from_dir(directory, reshape=True, verbose=False, with_spacing=False)
- Returns numpy array with shape [x, y, z] or [x, y, z, echo]; optional (origin, spacing, orientation)
- update_nifti_data(file_path, new_data, output_path=None)
- dicom_to_SimpleITK(dicom_directory) -> sitk.Image (3D or 4D when multi-echo)
- itk_to_nifti(itk_image, nifti_path, verbose=True)
- itk_mask_correction(img_nifti, mask_nifti, tol=1e-1, return_axis=False)
Example — Load DICOM series and save a NIfTI copy
from cmpl.utilities.io import load_dicom_scan_from_dir, update_nifti_data
imgs = load_dicom_scan_from_dir('path/to/dicoms', reshape=True)
# ... process imgs ...
update_nifti_data('template.nii.gz', imgs, 'processed.nii.gz')
5) General utilities
Location: src/cmpl/utilities/utils.py and df_build.py
- h5_to_nifti(input_file, output_file)
- prepare_zipped_dicom(zip_path, extract_path)
- dicom_to_h5(dicom_directory, h5py_path, contrast='3D_gre_sag', num_contrasts=7, num_slices_per_contrast=120)
- kspace_to_image_space(kspace, fourier_dims=[0,1,2], coil_column_loc=-1, return_coil_images=False)
- apply_hamming_filter_4d_numpy(array4d, dim1, dim2)
- resize_complex_matrix_fft(image, target_shape)
- zero_pad(tensor_or_array, final_shape)
- resize_matrix(matrix2d, target_shape=(600,600))
- df_build.build_medical_data_frame(root_dir) -> pandas.DataFrame built from a specific folder layout (Dicoms, h5_files, Segmentations)
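As an example of the idea behind one of these helpers: centered zero-padding of k-space upsamples the corresponding image by sinc interpolation. A minimal NumPy sketch (independent of cmpl's actual zero_pad signature) is:

```python
import numpy as np

def zero_pad_centered(array, final_shape):
    """Place `array` at the center of a zero array of `final_shape`."""
    out = np.zeros(final_shape, dtype=array.dtype)
    starts = [(f - s) // 2 for f, s in zip(final_shape, array.shape)]
    slices = tuple(slice(st, st + s) for st, s in zip(starts, array.shape))
    out[slices] = array
    return out

small = np.ones((4, 4), dtype=np.complex64)
padded = zero_pad_centered(small, (8, 8))
print(padded.shape, padded.sum())  # (8, 8) (16+0j)
```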
6) Visualization
Location: src/cmpl/visualization/visualization.py
- side_by_side_view(*images, color_palette='gray', dpi=100, titles=None)
- visualize_segmentation_slice(grayscale_image, segmentation_matrix, slice_number, dimension='axial', target_shape=(600,600))
- plot_3D_mri(mri_image, slice_number=None, direction='sagittal', segmentation=None, alpha=0.5, dpi=150, target_shape=None, m_cmap='gray')
Example — Overlay segmentation on an MRI slice
from cmpl.visualization.visualization import visualize_segmentation_slice
visualize_segmentation_slice(mri_3d, seg_3d, slice_number=50, dimension='axial', target_shape=(600,600))
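Under the hood, an overlay like this amounts to alpha-blending a colored label mask onto the grayscale slice. A matplotlib-free NumPy sketch of that compositing step (not cmpl's implementation) is:

```python
import numpy as np

def overlay_segmentation(gray_slice, seg_slice, color=(1.0, 0.0, 0.0), alpha=0.5):
    """Alpha-blend a segmentation mask (rendered as `color`) onto a grayscale slice.

    gray_slice: (H, W) floats in [0, 1]; seg_slice: (H, W) labels (nonzero = foreground).
    Returns an (H, W, 3) RGB image.
    """
    rgb = np.repeat(gray_slice[..., None], 3, axis=-1).astype(np.float64)
    mask = seg_slice > 0
    for c in range(3):
        channel = rgb[..., c]
        channel[mask] = (1 - alpha) * channel[mask] + alpha * color[c]
    return rgb

gray = np.zeros((4, 4))
seg = np.zeros((4, 4), dtype=np.int32)
seg[1:3, 1:3] = 1  # small foreground square
out = overlay_segmentation(gray, seg)
print(out.shape)  # (4, 4, 3)
```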
Data types and conventions
- Complex arrays are represented as numpy complex64 or torch.complex64 depending on the function.
- Unless otherwise stated, GRAPPA functions expect arrays ordered as [frequency, phase, slice, coils]. Use numpy.moveaxis to adjust ordering if needed.
- Quantitative mapping routines currently allocate tensors on CUDA. If you do not have a GPU, consider adapting the code (removing .cuda()) or using a CUDA-enabled environment.
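For example, reordering a k-space array stored coil-first (a hypothetical [coils, freq, phase, slice] layout) into the expected [frequency, phase, slice, coils] order:

```python
import numpy as np

# Hypothetical k-space stored coil-first: [coils, freq, phase, slice]
kspace = np.zeros((8, 64, 48, 12), dtype=np.complex64)

# Move the coil axis (0) to the end -> [freq, phase, slice, coils]
reordered = np.moveaxis(kspace, 0, -1)
print(reordered.shape)  # (64, 48, 12, 8)
```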
Development notes
- The codebase uses PyTorch, NumPy/SciPy, nibabel, pydicom, and SimpleITK. See pyproject.toml for exact versions.
- Progress bars are provided via tqdm in some routines.
How to cite
If you use CMPL in a scientific publication, please cite the toolkit and the underlying algorithms (GRAPPA, SENSE, etc.). A formal citation entry will be provided in future releases.
License
See LICENSE in the repository.
File details
Details for the file cmpl-0.1.8.tar.gz.
File metadata
- Download URL: cmpl-0.1.8.tar.gz
- Size: 37.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7ac401f0e39b101c296b94c9d05eaf5f6aec580e0d73828206d71da8065a7ec6 |
| MD5 | 31bfcccdd7b5f847b807f3b1d241e466 |
| BLAKE2b-256 | 33304191d0ba5b71151571e6ae02c90fc891878a2aa1cd5ab0006e3315c8a04c |
File details
Details for the file cmpl-0.1.8-py3-none-any.whl.
File metadata
- Download URL: cmpl-0.1.8-py3-none-any.whl
- Size: 40.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9740032068d3eb3768974e22e78418c48d98be12e3f9b84bdd8df422e9cc8d9e |
| MD5 | f82ca74e0fcbba9056c6ff225fe7f140 |
| BLAKE2b-256 | 5b6a9dd932d2a2745525898b99c4926540e97f1cfe72c43720582a4faa02ad23 |