
GPU-accelerated tractography package

Project description

GPUStreamlines

Installation

To install, simply run pip install . in the top-level repository directory.

Running the examples

This repository contains several example usage scripts.

The script run_gpu_streamlines.py demonstrates how to run any diffusion MRI dataset on the GPU. It can also run on the CPU for reference if the argument --device=cpu is used. If no data is passed, it will download and use the HARDI dataset.

To run the baseline CPU example on a random set of 1000 seeds, this is the command and example output:

$ python run_gpu_streamlines.py --device=cpu --output-prefix small --nseeds 1000
parsing arguments
Fitting Tensor
Computing anisotropy measures (FA,MD,RGB)
slowadcodf
Bootstrap direction getter
streamline gen
Generated 2746 streamlines from 1000 seeds, time: 6.713643550872803 s
Saved streamlines to small.1_1.trk, time 0.22669768333435059 s
Completed processing 1000 seeds.
Initialization time: 12.355878829956055 sec
Streamline generation total time: 6.9404990673065186 sec
        Streamline processing: 6.713643550872803 sec
        File writing: 0.22669768333435059 sec

To run the same case on a single GPU, this is the command and example output:

$ python run_gpu_streamlines.py --output-prefix small --nseeds 1000 --ngpus 1
parsing arguments
Fitting Tensor
Computing anisotropy measures (FA,MD,RGB)
slowadcodf
Bootstrap direction getter
streamline gen
Creating GPUTracker with 1 GPUs...
Generated 2512 streamlines from 1000 seeds, time: 0.21228599548339844 s
Saved streamlines to small.1_1.trk, time 0.17112255096435547 s
Completed processing 1000 seeds.
Initialization time: 14.81659483909607 sec
Streamline generation total time: 0.3834989070892334 sec
        Streamline processing: 0.21228599548339844 sec
        File writing: 0.17112255096435547 sec
Destroy GPUTracker...

Note that if you experience memory errors, you can adjust the --chunk-size flag.
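The idea behind the flag is that seeds are processed in fixed-size batches, so peak memory scales with the chunk size rather than the total seed count. A minimal sketch of that pattern (function and variable names here are illustrative, not the package's internals):

```python
# Sketch of chunked seed processing: only one chunk of seeds is
# in flight at a time, so smaller chunks mean a smaller peak footprint.
def process_in_chunks(seeds, chunk_size, track_fn):
    """Run track_fn on successive slices of `seeds` and collect the results."""
    results = []
    for start in range(0, len(seeds), chunk_size):
        chunk = seeds[start:start + chunk_size]  # at most chunk_size seeds at once
        results.extend(track_fn(chunk))
    return results

# Dummy tracker that emits one "streamline" per seed:
streamlines = process_in_chunks(list(range(10)), chunk_size=4,
                                track_fn=lambda chunk: [s * 2 for s in chunk])
print(len(streamlines))  # 10
```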

To run on more seeds, we suggest passing the --write-method trx flag to the GPU script to avoid being bottlenecked by file writing.
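For instance, a larger single-GPU run might look like the following (the seed count is illustrative; the flags are the ones shown elsewhere in this page):

```shell
# Illustrative: 100,000 seeds on one GPU, writing trx output to avoid
# the file-writing bottleneck
$ python run_gpu_streamlines.py --output-prefix large --nseeds 100000 \
    --ngpus 1 --write-method trx
```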

Running on AWS with Docker

First, set up an AWS instance with a GPU and ssh into it (we recommend a P3 instance with at least one V100 16 GB GPU and a Deep Learning AMI, Ubuntu 18.04, v33.0). Then do the following:

  1. Log in to the GitHub Docker registry:
$ docker login -u <github id> docker.pkg.github.com
  2. Enter your GitHub access token. If you do not have one, create it on the GitHub general security settings page and enable package read access for that token.
  3. Pull the container:
$ docker pull docker.pkg.github.com/dipy/gpustreamlines/gpustreamlines:latest
  4. Run the code, mounting the current directory into the container for easy result retrieval:
$ docker run --gpus=all -v ${PWD}:/opt/exec/output:rw -it docker.pkg.github.com/dipy/gpustreamlines/gpustreamlines:latest \
 python run_gpu_streamlines.py --ngpus 1 --output-prefix output/hardi_gpu_full --use-fast-write
  5. The code produces a number of independent track files (one per processed "chunk"), but we have provided a merge script to combine them into a single trk file. To merge files, run:
$ docker run --gpus=all -v ${PWD}:/opt/exec/output:rw -it docker.pkg.github.com/dipy/gpustreamlines/gpustreamlines:latest \
 ./merge_trk.sh -o output/hardi_tracks.trk output/hardi_gpu_full*

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

cuslines-2.0.0.tar.gz (49.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

cuslines-2.0.0-py3-none-any.whl (53.1 kB)

Uploaded Python 3

File details

Details for the file cuslines-2.0.0.tar.gz.

File metadata

  • Download URL: cuslines-2.0.0.tar.gz
  • Upload date:
  • Size: 49.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for cuslines-2.0.0.tar.gz
Algorithm Hash digest
SHA256 61571341ddb491abd2e3a21773777ffbdf031c40e0946cb2a00fe2bb3564f26d
MD5 ce08570e25201ebd0fd4f22384faa7b6
BLAKE2b-256 763f4eb60218386dae454b7c79244da3c94b3bbae49f47724f176a6521db3496


File details

Details for the file cuslines-2.0.0-py3-none-any.whl.

File metadata

  • Download URL: cuslines-2.0.0-py3-none-any.whl
  • Upload date:
  • Size: 53.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for cuslines-2.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 b2acccd389c42f3eff789a7327ddccc1d9bc49c243346b31766b9c4795586661
MD5 577a282541fc040bd8e7fc3a4c788092
BLAKE2b-256 82e495066e8d7cd1a2cd56b06fecf7dca0febe22a7ca61ef4457239e2fd65a75

