A package for SpachTransformer and related models

Project description

Spach Transformer: Spatial and Channel-wise Transformer Based on Local and Global Self-attentions for PET Image Denoising

Se-In Jang, Tinsu Pan, Gary Y. Li, Pedram Heidari, Junyu Chen, Quanzheng Li, and Kuang Gong

News

  • Nov 2023: Accepted in IEEE Transactions on Medical Imaging! [Paper]

Brief Introduction

  • The focus of this project is on handling 3D PET input data.
  • It incorporates a 3D-based approach, utilizing both the Swin Transformer and Restormer architectures specifically adapted for 3D data processing.
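Swin-style attention operates on non-overlapping local windows rather than the whole volume. The following sketch (illustrative only, not the package's code) shows how a 3D volume can be partitioned into such windows with a reshape/transpose, assuming a 96³ volume and a hypothetical window size of 8:

```python
import numpy as np

def window_partition_3d(x, w):
    """Split a (D, H, W, C) volume into (num_windows, w, w, w, C) local windows."""
    d, h, wd, c = x.shape
    # Break each spatial axis into (blocks, window) pairs...
    x = x.reshape(d // w, w, h // w, w, wd // w, w, c)
    # ...group the block axes together, then the window axes...
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)
    # ...and flatten the block axes into one window index.
    return x.reshape(-1, w, w, w, c)

vol = np.random.rand(96, 96, 96, 1)
wins = window_partition_3d(vol, 8)
print(wins.shape)  # (1728, 8, 8, 8, 1): 12^3 windows of size 8^3
```

Self-attention is then computed independently within each window, which keeps the cost linear in the number of windows instead of quadratic in the full voxel count.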

Installation

See INSTALL.md for the installation of dependencies required to run Spach Transformer. For a newer GPU, run the following (after activating your conda environment):

pip install --upgrade torch torchvision torchaudio

Quick Run with a single sample

import torch

from models.SpachTransformer import SpachTransformer
from models.Restormer import Restormer

# A single 3D PET volume: (batch, channel, depth, height, width)
x = torch.rand(1, 1, 96, 96, 96)

model1 = SpachTransformer()
output1 = model1(x)

model2 = Restormer()
output2 = model2(x)

Quick Run with the training code

# if your input volume size is about 48
python train.py --simulated_img_size 48 --num_epochs 25 --batch_size 1 --learning_rate 0.0001

# if your input volume size is about 96
python train.py --simulated_img_size 96 --num_epochs 25 --batch_size 1 --learning_rate 0.0001

# if your input volume size is about 128
python train.py --simulated_img_size 128 --num_epochs 25 --batch_size 1 --learning_rate 0.0001
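The paper evaluates denoising quality quantitatively (the abstract mentions SNR). As an illustrative helper (not part of the package), peak signal-to-noise ratio (PSNR) for a denoised volume can be computed like this:

```python
import numpy as np

def psnr(reference, estimate):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((reference - estimate) ** 2)
    peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic stand-in for a clean volume and a noisy observation of it.
rng = np.random.default_rng(0)
clean = rng.random((32, 32, 32))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)

print(psnr(clean, noisy))  # higher is better; denoising should raise this
```

A denoiser's output would replace `noisy` here; an effective model produces a higher PSNR against the reference than the raw input does.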

Abstract: Positron emission tomography (PET) is widely used in clinics and research due to its quantitative merits and high sensitivity, but suffers from a low signal-to-noise ratio (SNR). Recently, convolutional neural networks (CNNs) have been widely used to improve PET image quality. Though successful and efficient in local feature extraction, CNNs cannot capture long-range dependencies well due to their limited receptive field. Global multi-head self-attention (MSA) is a popular approach for capturing long-range information. However, calculating global MSA for 3D images has a high computational cost. In this work, we propose an efficient spatial and channel-wise encoder-decoder transformer, Spach Transformer, that can leverage spatial and channel information based on local and global MSAs. Experiments based on datasets of different PET tracers, i.e., 18F-FDG, 18F-ACBC, 18F-DCFPyL, and 68Ga-DOTATATE, were conducted to evaluate the proposed framework. Quantitative results show that the proposed Spach Transformer achieves better performance than other reference methods.
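The complexity argument in the abstract can be made concrete with a back-of-the-envelope count (illustrative only, not the package's implementation): global spatial MSA builds an N×N attention map over all N voxels, whereas channel-wise MSA (as in Restormer) builds a C×C map whose size is independent of the spatial resolution.

```python
def attention_map_sizes(d, h, w, c):
    """Number of entries in a global spatial vs a channel-wise attention map."""
    n = d * h * w          # total voxel count of the 3D volume
    return n * n, c * c    # spatial N x N map vs channel C x C map

spatial, channel = attention_map_sizes(96, 96, 96, 48)
print(spatial)  # 782757789696 entries for a 96^3 volume
print(channel)  # 2304 entries for 48 channels
```

The channel dimension of 48 is an assumed example value; the point is that for realistic 3D PET volumes the spatial map is many orders of magnitude larger, which is why channel-wise and windowed local attention are attractive in 3D.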


Citation

If you use Spach Transformer, please consider citing:

@article{jang2022spach,
    title={Spach Transformer: Spatial and channel-wise transformer based on local and global self-attentions for PET image denoising},
    author={Jang, Se-In and Pan, Tinsu and Li, Ye and Heidari, Pedram and Chen, Junyu and Li, Quanzheng and Gong, Kuang},
    journal={arXiv preprint arXiv:2209.03300},
    year={2022}
}

Contact

Should you have any questions, please contact sein.jang@yale.edu.

Acknowledgment: This code is based on Restormer and Swin Transformer.
