NeRF-US 👥: Removing Ultrasound Imaging Artifacts from Neural Radiance Fields in the Wild

Rishit Dagli1,2, Atsuhiro Hibi2,3,4, Rahul G. Krishnan1,5, Pascal Tyrrell2,4,6

Departments of 1 Computer Science; 2 Medical Imaging, University of Toronto, Canada
3 Division of Neurosurgery, St Michael's Hospital, Unity Health Toronto, Canada
4 Institute of Medical Science; Departments of 5 Laboratory Medicine and Pathobiology; 6 Statistical Sciences, University of Toronto, Canada


This work presents NeRF-US, a method for training NeRFs in the wild on sound fields such as ultrasound imaging data. Check out our website to view some results of this work.

This codebase is forked from the awesome Ultra-NeRF and Nerfbusters repositories.

Installation

  1. First, install the pip package by running:
pip install nerfus

or you could also install the package from source:

git clone https://github.com/Rishit-Dagli/nerf-us
cd nerf-us
pip install -e .
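
To confirm the installation, you can check that the package imports cleanly:

python -c "import nerfus"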
  2. Now install the dependencies. If you use virtualenv, you can run:
pip install -r requirements.txt

If you use conda, run:

conda env create -f environment.yml
conda activate nerfus
  3. Install Nerfstudio and its dependencies. The installation guide can be found in the Nerfstudio documentation.

We also use the branch nerfbusters-changes. You may have to run the viewer locally if you want full functionality.

cd path/to/nerfstudio
pip install -e .
pip install torch==1.13.1 torchvision functorch --extra-index-url https://download.pytorch.org/whl/cu117
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
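
Before moving on, it is worth checking that this PyTorch build can see your GPU and that the tiny-cuda-nn bindings import (a quick sanity check, not part of the original instructions):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import tinycudann"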
  4. Install binvox to voxelize cubes:
mkdir bins
cd bins
wget -O binvox https://www.patrickmin.com/binvox/linux64/binvox?rnd=16811490753710
cd ../
chmod +x bins/binvox
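
Running the executable without arguments should print its usage text, which is a quick way to verify the binary works on your system:

./bins/binvox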

Check out the Tips section below for advice on installing the requirements.

Overview of Codebase

For data preparation, the cubes directory contains modules for processing 3D data, including dataset handling (datasets3D.py), rendering (render.py), and visualization (visualize3D.py). The data_modules directory further supports data management with modules for 3D cubes and a general datamodule for the diffusion model.

The diffusion model is primarily implemented in the models directory, which includes the core model definition (model.py), U-Net architecture (unet.py), and related utilities. The lightning directory contains the training logic for the diffusion model, including loss functions (dsds_loss.py) and the trainer module (nerfus_trainer.py). The NeRF component is housed in the nerf directory, which includes experiment configurations, utility functions, and the main pipeline for NeRF-US (nerfus_pipeline.py).

.
├── config (configuration files for the datasets and models)
│   ├── shapenet.yaml (configuration file for the shapenet dataset)
│   └── synthetic-knee.yaml (configuration file for the diffusion model)
├── environment.yml (conda environment file)
├── nerfus (main codebase)
│   ├── bins
│   │   └── binvox (binvox executable)
│   ├── cubes (making cubes from 3D data)
│   │   ├── __init__.py
│   │   ├── binvox_rw.py
│   │   ├── datasets3D.py
│   │   ├── render.py
│   │   ├── utils.py
│   │   └── visualize3D.py
│   ├── data
│   ├── data_modules (data modules for cubes)
│   │   ├── __init__.py
│   │   ├── cubes3d.py
│   │   └── datamodule.py (data module for diffusion model)
│   ├── download_nerfus_dataset.py (script to download the diffusion model dataset)
│   ├── lightning (training lightning modules for diffusion model)
│   │   ├── __init__.py
│   │   ├── dsds_loss.py (loss for diffusion model)
│   │   └── nerfus_trainer.py (training code for diffusion model)
│   ├── models (model definition for diffusion model)
│   │   ├── __init__.py
│   │   ├── fp16_util.py
│   │   ├── model.py
│   │   ├── nn.py
│   │   └── unet.py
│   ├── nerf (main codebase for the NeRF)
│   │   ├── experiment_configs (configurations for the Nerfacto experiments)
│   │   │   ├── __init__.py
│   │   │   ├── nerfacto_experiments.py
│   │   │   └── utils.py
│   │   ├── nerfbusters_utils.py (utils for nerfbusters)
│   │   ├── nerfus_config.py (nerfstudio method configurations for NeRF-US)
│   │   └── nerfus_pipeline.py (pipeline for NeRF-US)
│   ├── run.py (training script for diffusion model)
│   └── utils (utility functions for the NeRF training)
│       ├── __init__.py
│       ├── metrics.py
│       ├── utils.py
│       └── visualizations.py
└── requirements.txt (requirements file we use)

Usage

Training the Diffusion Model

First, download either the synthetic knee cubes or the synthetic phantom cubes dataset and place it under nerfus/data, so that the tree looks like this:

.
├── config
│   ├── shapenet.yaml
│   └── synthetic-knee.yaml
├── nerfus
│   ├── bins
│   │   └── binvox
│   ├── data
│   │   ├── syn-knee
│   │   └── syn-spi
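
The repository includes nerfus/download_nerfus_dataset.py for this step. The exact flag below is our assumption, so check the script's help output for the accepted dataset names:

python nerfus/download_nerfus_dataset.py --help
python nerfus/download_nerfus_dataset.py --dataset synthetic-knee  # hypothetical flag; see --help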

We can now train the 3D diffusion model using the following command:

python nerfus/run.py --config config/synthetic-knee.yaml --name synthetic-knee-experiment --pt

This also automatically downloads the Nerfbusters checkpoint on which we run adaptation.

Training the NeRF

Contrary to many other NeRF + diffusion methods, we do not first train a NeRF and then continue training with the diffusion model as a regularizer. Instead, we train the NeRF together with the diffusion model from scratch.

We run training with our method using the Nerfstudio command line:

ns-train nerfus --data path/to/data nerfstudio-data --eval-mode train-split-fraction
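
While a run is in progress (or afterwards), you can inspect it with the standard Nerfstudio viewer by pointing it at the run's config file:

ns-viewer --load-config path/to/config.yml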

For our baselines and experiments, we directly use the Nerfstudio commands to train on the 10 individual datasets. For our ablation study, we do 3 ablations:

  1. For training without the border probability, we just set the corresponding lambda to 0 (this could easily be made faster).
  2. For training without the scattering density, we just set the corresponding lambda to 0 (this could easily be made faster).
  3. For training without ultrasound rendering, we just use standard Nerfstudio commands, as shown below.
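
For example, the third ablation reduces to a plain Nerfacto run on the same data (a sketch; exact dataparser flags depend on your setup):

ns-train nerfacto --data path/to/data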

We can use any other Nerfstudio commands as well. For instance, rendering across a path:

ns-render --load-config path/to/config.yml  --traj filename --camera-path-filename path/to/camera-path.json --output-path renders/my-render.mp4

or computing metrics:

ns-eval --load-config path/to/config.yml --output-path path/to/output.json

Tips

We share some tips on running the code and reproducing our results.

on installing required packages

  • Installing Nerfstudio, especially on HPC systems, can be tricky. We recommend installing open3d and tiny-cuda-nn separately before installing Nerfstudio from source. We also recommend building these packages on the same GPU you plan to run them on.
  • When you install PyTorch, especially on HPC systems, you will often end up with at least two versions of CUDA: the partial runtime that PyTorch installs alongside itself and the full toolkit on the system. We highly recommend making sure the system toolkit matches the CUDA version that PyTorch installs; see the check after this list.
  • There are some parts of the code that do not run properly with PyTorch 2.x.
  • We use virtualenv and the requirements.txt file to install the required packages. While we provide a conda environment.yml (especially due to some Nerfstudio problems people might face), we have not tested it, but expect it to work.
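
A quick way to compare the two CUDA versions mentioned above:

nvcc --version                                       # full CUDA toolkit on the system
python -c "import torch; print(torch.version.cuda)"  # CUDA version PyTorch was built against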

on compute

  • We have currently optimized the code for, and run all of our experiments on, an A100 80 GB GPU. However, we have also tested the code on an A100 40 GB GPU, where inference and evaluation work well.
  • In general, we recommend a GPU with more than 40 GB of VRAM.
  • We recommend at least 32 GB of CPU RAM for the code to work well.
  • While training the diffusion model, we recommend doing full-precision adaptation rather than FP16.

Credits

This codebase is built on top of Ultra-NeRF and Nerfbusters; thanks to their authors for maintaining these repositories.

Citation

If you find NeRF-US helpful, please consider citing:

@misc{dagli2024nerfusremovingultrasoundimaging,
      title={NeRF-US: Removing Ultrasound Imaging Artifacts from Neural Radiance Fields in the Wild}, 
      author={Rishit Dagli and Atsuhiro Hibi and Rahul G. Krishnan and Pascal N. Tyrrell},
      year={2024},
      eprint={2408.10258},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.10258}, 
}
