A robust, easy-to-deploy non-uniform Fast Fourier Transform in PyTorch.
Torch KB-NUFFT

API | GitHub | Notebook Examples | Sedona Workshop Demo
Simple installation from PyPI:
```
pip install torchkbnufft
```
About
Torch KB-NUFFT implements a non-uniform Fast Fourier Transform [1, 2] with Kaiser-Bessel gridding in PyTorch. The implementation is completely in Python, facilitating robustness and flexible deployment in human-readable code. NUFFT functions are each wrapped as a `torch.autograd.Function`, allowing backpropagation through NUFFT operators for training neural networks.
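To illustrate the mechanism with a toy stand-in (not the package's actual implementation), a linear operator can be wrapped as a `torch.autograd.Function` whose backward pass applies the adjoint operator; this is what lets gradients flow through NUFFT calls:

```python
import torch

# Toy stand-in: a random matrix A plays the role of the NUFFT operator.
# The backward pass of an autograd-wrapped linear operator applies the
# adjoint (A^H) to the upstream gradient.
class ToyNufft(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, A):
        ctx.save_for_backward(A)
        return A @ x

    @staticmethod
    def backward(ctx, grad_output):
        (A,) = ctx.saved_tensors
        # gradient w.r.t. x is the adjoint applied to the upstream gradient;
        # A itself is treated as a constant (no gradient)
        return A.conj().transpose(-2, -1) @ grad_output, None

A = torch.randn(8, 4, dtype=torch.complex64)
x = torch.randn(4, 1, dtype=torch.complex64, requires_grad=True)

y = ToyNufft.apply(x, A)        # "k-space" output, shape (8, 1)
loss = (y.abs() ** 2).sum()     # real-valued loss
loss.backward()                 # gradients flow back through the operator
```

The NUFFT modules in the package behave the same way: because forward and adjoint are paired inside an autograd `Function`, they can sit inside a network trained end to end.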
This package was inspired in large part by the implementation of NUFFT operations in the Matlab version of the Michigan Image Reconstruction Toolbox, available at https://web.eecs.umich.edu/~fessler/code/index.html.
Operation Modes and Stages
The package has three major classes of NUFFT operation mode: table-based NUFFT interpolation, sparse matrix-based NUFFT interpolation, and forward/backward operators with Toeplitz-embedded FFTs [3]. In most cases, computation speed follows

table < sparse matrix < Toeplitz embedding,

but the faster modes can require increased memory usage.
In addition to the three main operation modes, the package separates SENSE-NUFFT operations into three stages that can be used individually as PyTorch modules: interpolation (`torchkbnufft.KbInterp`), NUFFT (`torchkbnufft.KbNufft`), and SENSE-NUFFT (`torchkbnufft.MriSenseNufft`). The interpolation modules apply only interpolation (without scaling coefficients). The NUFFT modules apply the full NUFFT for a single image into non-Cartesian k-space, including scaling coefficients. The SENSE-NUFFT modules additionally include sensitivity coil multiplications, which by default use PyTorch broadcasting at a lower level to enable faster multiplications across the coils.
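As a point of reference for what these stages compute together: the NUFFT approximates direct evaluation of the DFT at arbitrary k-space locations. A brute-force NumPy sketch (a reference computation, not the package's gridding algorithm) makes this concrete:

```python
import numpy as np

# Brute-force reference: evaluate the DFT of a 1D signal x at arbitrary
# frequencies om (radians/sample). torchkbnufft approximates the same
# result via scaling coefficients, an oversampled FFT, and Kaiser-Bessel
# interpolation, which is far faster for realistic sizes.
def direct_nudft(x, om):
    n = np.arange(len(x)) - len(x) // 2  # centered sample indices
    return np.exp(-1j * np.outer(om, n)) @ x

x = np.random.randn(16) + 1j * np.random.randn(16)

# sanity check on Cartesian frequencies: must match a centered FFT
om = 2 * np.pi * np.fft.fftfreq(16)
kdata = direct_nudft(x, om)
ref = np.fft.fft(np.fft.ifftshift(x))
assert np.allclose(kdata, ref)
```

For non-Cartesian `om` (e.g. radial spokes) the direct sum is still exact but scales quadratically, which is why the interpolation-based stages above exist.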
Where appropriate, each of the NUFFT stages can be used with each NUFFT operation mode. Simple examples follow.
Documentation
Most files are accompanied by docstrings that can be read with `help` while running IPython. Example:

```python
from torchkbnufft import KbNufft

help(KbNufft)
```
Behavior can also be inferred by inspecting the source code here. An HTML-based API reference is here.
Examples
Simple Forward NUFFT
Jupyter notebook examples are in the `notebooks/` folder. The following minimalist code loads a Shepp-Logan phantom and computes a single radial spoke of k-space data:

```python
import torch
import numpy as np
from torchkbnufft import KbNufft
from skimage.data import shepp_logan_phantom

x = shepp_logan_phantom().astype(np.complex)
im_size = x.shape
x = np.stack((np.real(x), np.imag(x)))
# convert to tensor, unsqueeze batch and coil dimension
# output size: (1, 1, 2, ny, nx)
x = torch.tensor(x).unsqueeze(0).unsqueeze(0)

klength = 64
ktraj = np.stack(
    (np.zeros(64), np.linspace(-np.pi, np.pi, klength))
)
# convert to tensor, unsqueeze batch dimension
# output size: (1, 2, klength)
ktraj = torch.tensor(ktraj).unsqueeze(0)

nufft_ob = KbNufft(im_size=im_size)
# outputs a (1, 1, 2, klength) vector of k-space data
kdata = nufft_ob(x, ktraj)
```
A detailed example of basic NUFFT usage is here.
SENSE-NUFFT
The package also includes utilities for working with SENSE-NUFFT operators. The above code can be modified to replace the `nufft_ob` with the following `sensenufft_ob`:

```python
from torchkbnufft import MriSenseNufft

smap = torch.rand(1, 8, 2, 400, 400)
sensenufft_ob = MriSenseNufft(im_size=im_size, smap=smap)
sense_data = sensenufft_ob(x, ktraj)
```
Applying this object in place of `nufft_ob` above would first multiply by the sensitivity coils in `smap`, then compute a 64-length radial spoke for each coil. All operations are broadcast across coils, which minimizes interaction with the Python interpreter and helps computation speed.
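A minimal sketch of that coil broadcast (stand-in tensors, not the package's internal code): the image's singleton coil dimension is expanded against all eight coil maps in one vectorized complex multiply, using the (real, imag) stacking convention from the examples above.

```python
import torch

# shapes follow the convention used above: (batch, coil, re/im, ny, nx)
image = torch.randn(1, 1, 2, 400, 400)  # single image, singleton coil dim
smap = torch.rand(1, 8, 2, 400, 400)    # 8 coil sensitivity maps

# complex multiply with real/imag parts stored along dim 2;
# broadcasting expands the singleton coil dimension to all 8 coils,
# replacing an explicit Python loop over coils
re = smap[:, :, 0] * image[:, :, 0] - smap[:, :, 1] * image[:, :, 1]
im = smap[:, :, 0] * image[:, :, 1] + smap[:, :, 1] * image[:, :, 0]
coil_images = torch.stack((re, im), dim=2)  # (1, 8, 2, 400, 400)
```

Because the whole coil stack is handled by one set of tensor ops, the work stays inside PyTorch's backends rather than the interpreter loop.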
A detailed example of SENSE-NUFFT usage is here.
Sparse Matrix Precomputation
Sparse matrices are a fast operation mode on the CPU and for large problems, at the cost of increased memory usage. The following code calculates sparse interpolation matrices and uses them to compute a single radial spoke of k-space data:
```python
from torchkbnufft import AdjKbNufft
from torchkbnufft.nufft.sparse_interp_mat import precomp_sparse_mats

adjnufft_ob = AdjKbNufft(im_size=im_size)

# precompute the sparse interpolation matrices
real_mat, imag_mat = precomp_sparse_mats(ktraj, adjnufft_ob)
interp_mats = {
    'real_interp_mats': real_mat,
    'imag_interp_mats': imag_mat
}

# use sparse matrices in adjoint
image = adjnufft_ob(kdata, ktraj, interp_mats)
```
A detailed example of sparse matrix precomputation usage is here.
Toeplitz Embedding
The package includes routines for calculating embedded Toeplitz kernels and using them as FFT filters for the forward/backward NUFFT operations [3]. This is very useful for gradient descent algorithms that must use the forward/backward ops in calculating the gradient. The following minimalist code shows an example:
```python
from torchkbnufft import AdjKbNufft, ToepNufft
from torchkbnufft.nufft.toep_functions import calc_toep_kernel

adjnufft_ob = AdjKbNufft(im_size=im_size)
toep_ob = ToepNufft()

# precompute the embedded Toeplitz FFT kernel
kern = calc_toep_kernel(adjnufft_ob, ktraj)

# use FFT kernel from embedded Toeplitz matrix
image = toep_ob(image, kern)
```
A detailed example of Toeplitz embedding usage is here.
Running on the GPU
All of the examples included in this repository can be run on the GPU by sending the NUFFT object and data to the GPU prior to the function call, e.g.:
```python
adjnufft_ob = adjnufft_ob.to(torch.device('cuda'))
kdata = kdata.to(torch.device('cuda'))
ktraj = ktraj.to(torch.device('cuda'))

image = adjnufft_ob(kdata, ktraj)
```
Similar to programming low-level code, PyTorch will throw errors if the underlying `dtype` and `device` of all objects do not match. Make sure your data and NUFFT objects are on the same device and in the same format to avoid these errors.
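A self-contained sketch of the alignment pattern (`data` and `traj` are stand-in tensors, not the arrays from the examples above); the same `.to(...)` calls apply to real k-space data, trajectories, and the NUFFT modules themselves:

```python
import torch

# pick a common device and dtype once, then convert everything to them
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
dtype = torch.float32

data = torch.randn(1, 1, 2, 64, dtype=torch.float64)  # e.g. loaded as float64
traj = torch.randn(1, 2, 64, dtype=torch.float64)

# one .to(...) call handles both device and dtype conversion
data = data.to(device=device, dtype=dtype)
traj = traj.to(device=device, dtype=dtype)

assert data.dtype == traj.dtype and data.device == traj.device
```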
Computation Speed
TorchKbNufft is first and foremost designed to be lightweight with minimal dependencies outside of PyTorch. Speed compared to other packages depends on problem size and usage mode. Generally, favorable performance can be observed with large problems (2-3 times faster than some packages with 64 coils) when using sparse matrices, whereas unfavorable performance occurs with small problems in table interpolation mode (2-3 times as slow as other packages).
The following computation times in seconds were observed on a workstation with a Xeon E5-1620 CPU and an Nvidia GTX 1080 GPU for a 15-coil, 405-spoke 2D radial problem. CPU computations were done with 64-bit floats, whereas GPU computations were done with 32-bit floats (v0.2.2).
(n) = normal, (spm) = sparse matrix, (toep) = Toeplitz embedding, (f/b) = forward/backward combined
| Operation | CPU (n) | CPU (spm) | CPU (toep) | GPU (n) | GPU (spm) | GPU (toep) |
| --- | --- | --- | --- | --- | --- | --- |
| Forward NUFFT | 3.57 | 1.61 | 0.145 (f/b) | 1.00e-01 | 1.57e-01 | 5.16e-03 (f/b) |
| Adjoint NUFFT | 4.52 | 1.04 | N/A | 1.06e-01 | 1.63e-01 | N/A |
Profiling for your machine can be done by running:

```
python profile_torchkbnufft.py
```

You will need to install some other packages to run this profiling:

```
pip install scikit-image Pillow
```
Other Packages
For users interested in NUFFT implementations for other computing platforms, the following is a partial list of other projects:
- TF KB-NUFFT (KB-NUFFT for TensorFlow)
 SigPy (for Numpy arrays, Numba (for CPU) and CuPy (for GPU) backends)
 FINUFFT (for MATLAB, Python, Julia, C, etc., very efficient)
 NFFT (for Julia)
 PyNUFFT (for Numpy, also has PyCUDA/PyOpenCL backends)
References
1. Fessler, J. A., & Sutton, B. P. (2003). Nonuniform fast Fourier transforms using min-max interpolation. *IEEE Transactions on Signal Processing*, 51(2), 560-574.
2. Beatty, P. J., Nishimura, D. G., & Pauly, J. M. (2005). Rapid gridding reconstruction with a minimal oversampling ratio. *IEEE Transactions on Medical Imaging*, 24(6), 799-808.
3. Feichtinger, H. G., Gröchenig, K., & Strohmer, T. (1995). Efficient numerical methods in nonuniform sampling theory. *Numerische Mathematik*, 69(4), 423-440.
Citation
If you want to cite the package, you can use any of the following:
```bibtex
@conference{muckley:20:tah,
  author = {M. J. Muckley and R. Stern and T. Murrell and F. Knoll},
  title = {{TorchKbNufft}: A High-Level, Hardware-Agnostic Non-Uniform Fast Fourier Transform},
  booktitle = {ISMRM Workshop on Data Sampling \& Image Reconstruction},
  year = 2020
}

@misc{Muckley2019,
  author = {Muckley, M.J. et al.},
  title = {Torch KB-NUFFT},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/mmuckley/torchkbnufft}}
}
```