An extension to PyLops for linear operators on GPUs.
:vertical_traffic_light: :vertical_traffic_light: This library is under early development. Expect things to constantly change until version v1.0.0. :vertical_traffic_light: :vertical_traffic_light:
Objective
This library is an extension of PyLops to run operators on GPUs.
Just as numpy and scipy lie at the core of the parent project PyLops, PyLops-GPU builds heavily on top of PyTorch and takes advantage of the same optimized tensor computations that PyTorch uses for deep learning on GPUs and CPUs.
By doing so, linear operators can be computed on GPUs.
Here is a simple example showing how a diagonal operator can be created, applied and inverted using PyLops:
```python
import numpy as np
from pylops import Diagonal

n = int(1e6)
x = np.ones(n)
d = np.arange(n) + 1.

Dop = Diagonal(d)
# y = Dx
y = Dop * x
```
and similarly using PyLops-GPU:
```python
import numpy as np
import torch
from pylops_gpu.utils.backend import device
from pylops_gpu import Diagonal

dev = device()

n = int(1e6)
x = torch.ones(n, dtype=torch.float64).to(dev)
d = (torch.arange(0, n, dtype=torch.float64) + 1.).to(dev)

Dop = Diagonal(d, device=dev)
# y = Dx
y = Dop * x
```
Running these two snippets of code in Google Colab with GPU enabled gives a 50+ speed-up for the forward pass.
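The snippets above show only the forward pass, although a diagonal operator can also be inverted (in PyLops, `Dop / y` performs a least-squares solve). For a diagonal operator, that inversion reduces to element-wise division by `d`. Here is a minimal numpy-only sketch of what the inversion computes, without requiring PyLops itself:

```python
import numpy as np

n = 10
x = np.arange(n, dtype=float) + 2.0  # "true" model
d = np.arange(n, dtype=float) + 1.0  # diagonal entries (all nonzero)

# Forward pass of the diagonal operator: y = D x
y = d * x

# Inverting a diagonal operator is element-wise division by d
# (the equivalent of the least-squares solve `Dop / y` in PyLops)
xinv = y / d

assert np.allclose(xinv, x)
```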
As a by-product of implementing PyLops linear operators in PyTorch, we can easily chain our operators with any nonlinear mathematical operation (e.g., log, sin, tan, pow, ...) as well as with operators from the torch.nn submodule, and obtain Automatic Differentiation (AD) for the entire chain. Since the gradient of a linear operator is simply its adjoint, we have implemented a single class, pylops_gpu.TorchOperator, which can wrap any linear operator from the PyLops and PyLops-GPU libraries and return a torch.autograd.Function object.
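The fact that the gradient of a linear operator is its adjoint can be verified directly: for a scalar loss f(x) = cᵀ(Ax), the gradient with respect to x is Aᵀc. The following numpy-only sketch checks this against finite differences (it is independent of the pylops_gpu.TorchOperator API, which packages this same logic for torch.autograd):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))  # a linear operator as a dense matrix
c = rng.standard_normal(5)       # weights in the scalar loss f(x) = c^T (A x)
x = rng.standard_normal(4)

# Analytic gradient: since f(x) = c^T A x, grad f = A^T c,
# i.e., the adjoint of A applied to c
grad_analytic = A.T @ c

# Finite-difference check of the same gradient
eps = 1e-6
grad_fd = np.zeros_like(x)
for i in range(x.size):
    e = np.zeros_like(x)
    e[i] = eps
    grad_fd[i] = (c @ (A @ (x + e)) - c @ (A @ (x - e))) / (2 * eps)

assert np.allclose(grad_analytic, grad_fd, atol=1e-6)
```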
Project structure
This repository is organized as follows:
* pylops_gpu: python library containing various GPU-powered linear operators and auxiliary routines
* pytests: set of pytests
* testdata: sample datasets used in pytests and documentation
* docs: sphinx documentation
* examples: set of python script examples for each linear operator to be embedded in documentation using sphinx-gallery
* tutorials: set of python script tutorials to be embedded in documentation using sphinx-gallery
Getting started
You need Python 3.5 or greater.
From PyPI
If you want to use PyLops-GPU within your codes, install it in your Python environment by typing the following command in your terminal:
```
pip install pylops-gpu
```
Open a python terminal and type:
```
import pylops_gpu
```
If you do not see any error, you should be good to go, enjoy!
From Github
You can also install directly from the master branch:
```
pip install git+https://git@github.com/PyLops/pylops-gpu.git@master
```
Contributing
Feel like contributing to the project? Adding new operators or tutorials?
Follow the instructions from PyLops official documentation.
Documentation
The official documentation of PyLops-GPU is available here.
Visit this page to get started learning about different operators and their applications, as well as how to create new operators yourself and make it into the Contributors list.
Moreover, if you have installed PyLops using the developer environment you can also build the documentation locally by typing the following command:
make doc
Once the documentation is created, you can make any change to the source code and rebuild the documentation by simply typing
make docupdate
Note that if a new example or tutorial is created (or an existing one is modified), you must rebuild the entire documentation before your changes become visible.
History
PyLops-GPU was initially written by, and is currently maintained by, Equinor. It is an extension of PyLops for large-scale optimization with GPU-driven linear operators that can be tailored to our needs, and a contribution to the free software community.
Contributors
* Matteo Ravasi, mrava87
* Francesco Picetti, fpicetti