
An extension to PyLops for linear operators on GPUs.

Project description

PyLops-gpu


:vertical_traffic_light: :vertical_traffic_light: This library is under early development. Expect things to constantly change until version v1.0.0. :vertical_traffic_light: :vertical_traffic_light:

Objective

This library is an extension of PyLops to run operators on GPUs.

Just as numpy and scipy lie at the core of the parent project PyLops, PyLops-GPU builds heavily on top of PyTorch and takes advantage of the same optimized tensor computations that PyTorch uses for deep learning on GPUs and CPUs.

As a result, linear operators can be applied on GPUs.

Here is a simple example showing how a diagonal operator can be created, applied and inverted using PyLops:

import numpy as np
from pylops import Diagonal

n = int(1e6)
x = np.ones(n)
d = np.arange(n) + 1.

Dop = Diagonal(d)

# y = D x
y = Dop*x

# x = D^-1 y, i.e. least-squares inversion of the operator
xinv = Dop / y

and similarly using PyLops-gpu:

import numpy as np
import torch
from pylops_gpu.utils.backend import device
from pylops_gpu import Diagonal

dev = device()

n = int(1e6)
x = torch.ones(n, dtype=torch.float64).to(dev)
d = (torch.arange(0, n, dtype=torch.float64) + 1.).to(dev)

Dop = Diagonal(d, device=dev)

# y = Dx
y = Dop*x

Running these two snippets of code in Google Colab with GPU enabled gives a 50+ times speed-up for the forward pass.
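Hardware and runtime will change these numbers; the figure above comes from Colab. As a minimal sketch of how the CPU side of such a timing can be set up (the GPU half would time Dop*x from the PyLops-gpu snippet in the same way, calling torch.cuda.synchronize() before reading the clock so the asynchronous kernel has finished; the warm-up run and millisecond formatting below are illustrative choices, not the original benchmark):

```python
import time

import numpy as np

n = int(1e6)
x = np.ones(n)
d = np.arange(n) + 1.

# Warm-up run, so one-off costs (allocation, caching) don't skew the timing.
y = d * x

# Time the element-wise product that the Diagonal operator performs.
start = time.perf_counter()
y = d * x
elapsed_cpu = time.perf_counter() - start

print(f"CPU forward pass: {elapsed_cpu * 1e3:.3f} ms")
```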

As a by-product of implementing PyLops linear operators in PyTorch, we can easily chain our operators with any nonlinear mathematical operation (e.g., log, sin, tan, pow, ...) as well as with operators from the torch.nn submodule and obtain Automatic Differentiation (AD) for the entire chain. Since the gradient of a linear operator is simply its adjoint, we have implemented a single class, pylops_gpu.TorchOperator, which can wrap any linear operator from PyLops and PyLops-gpu libraries and return a torch.autograd.Function object.
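The statement that the gradient of a linear operator is its adjoint can be checked directly. The sketch below uses a plain NumPy matrix in place of a PyLops operator (the exact call signature of pylops_gpu.TorchOperator is not shown here, so this illustrates the underlying identity rather than the library API): for y = A x and a scalar loss L(y), the chain rule gives dL/dx = A^T (dL/dy).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # a small dense stand-in for a linear operator
x = rng.standard_normal(3)

# Forward pass: y = A x, with a simple scalar loss L = 0.5 * ||y||^2.
y = A @ x
grad_y = y                        # dL/dy for this choice of loss

# Backpropagation through the linear operator: dL/dx = A^T (dL/dy),
# i.e. the adjoint of A applied to the upstream gradient.
grad_x = A.T @ grad_y

# Check against a central finite-difference approximation of dL/dx
# (exact for a quadratic loss, up to floating-point noise).
eps = 1e-6
fd = np.array([
    (0.5 * np.sum((A @ (x + eps * e))**2)
     - 0.5 * np.sum((A @ (x - eps * e))**2)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(grad_x, fd, atol=1e-4))  # True: the adjoint matches the gradient
```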

Project structure

This repository is organized as follows:

  • pylops_gpu: python library containing various GPU-powered linear operators and auxiliary routines
  • pytests: set of pytests
  • testdata: sample datasets used in pytests and documentation
  • docs: sphinx documentation
  • examples: set of python script examples for each linear operator to be embedded in documentation using sphinx-gallery
  • tutorials: set of python script tutorials to be embedded in documentation using sphinx-gallery

Getting started

You need Python 3.5 or greater.

From PyPi

If you want to use PyLops-gpu within your code, install it in your Python environment by typing the following command in your terminal:

pip install pylops-gpu

Open a Python interpreter and type:

import pylops_gpu

If you do not see any error, you should be good to go, enjoy!

From Github

You can also install directly from the master branch:

pip install git+https://git@github.com/PyLops/pylops-gpu.git@master

Contributing

Feel like contributing to the project? Adding new operators or tutorials?

Follow the instructions from PyLops official documentation.

Documentation

The official documentation of PyLops-gpu is available here.

Visit this page to get started learning about different operators and their applications as well as how to create new operators yourself and make it to the Contributors list.

Moreover, if you have installed PyLops-gpu using the developer environment you can also build the documentation locally by typing the following command:

make doc

Once the documentation is created, you can make any change to the source code and rebuild the documentation by simply typing

make docupdate

Note that if a new example or tutorial is created (or any change is made to a previously available example or tutorial) you must rebuild the entire documentation before your changes become visible.

History

PyLops-GPU was initially written by, and is currently maintained by, Equinor. It is an extension of PyLops for large-scale optimization with GPU-driven linear operators that can be tailored to our needs, and a contribution to the free software community.

Contributors

  • Matteo Ravasi, mrava87
  • Francesco Picetti, fpicetti

Project details


Download files

Download the file for your platform.

Source Distribution

pylops_gpu-0.0.1.tar.gz (30.8 kB)

Uploaded Source

Built Distribution

pylops_gpu-0.0.1-py3-none-any.whl (40.6 kB)

Uploaded Python 3

File details

Details for the file pylops_gpu-0.0.1.tar.gz.

File metadata

  • Download URL: pylops_gpu-0.0.1.tar.gz
  • Upload date:
  • Size: 30.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.7.9

File hashes

Hashes for pylops_gpu-0.0.1.tar.gz:

  • SHA256: aaac89ee21877c9254d510c3bb39daec9e5ce236128c2cea891389f029c5bd6f
  • MD5: 9f5a8dd9c5287942e2bb91ad899ec7dd
  • BLAKE2b-256: 51feec835fdf83b75646c16f49a9607031a54738e55653f29a7470c0e8f2f057

See more details on using hashes here.

File details

Details for the file pylops_gpu-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: pylops_gpu-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 40.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.0.1 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.60.0 CPython/3.7.9

File hashes

Hashes for pylops_gpu-0.0.1-py3-none-any.whl:

  • SHA256: c25f2d151e6f725b506068d34f16780b8ad1b436598a5f8414ebeecce2b7867a
  • MD5: 93160283b4de90d8415817556243cb91
  • BLAKE2b-256: 6598006e67a590e7c7bae298f44524abf312bc801af59b6ee416d85a74c5bb00

See more details on using hashes here.
