Package providing torch-based numerical integration methods.
Project description
torchquad
High-performance numerical integration on the GPU with PyTorch
Explore the docs » · View Example notebook · Report Bug · Request Feature
About The Project
The torchquad module allows utilizing GPUs for efficient numerical integration with PyTorch. The software is free to use and is designed for the machine learning community and research groups focusing on topics requiring high-dimensional integration.
Built With
This project is built with the following packages:
- PyTorch, which means it is fully differentiable and can be used for machine learning, and
- conda, which will take care of all requirements for you.
Goals
- Supporting science: Multidimensional numerical integration is needed in many fields, such as physics (from particle physics to astrophysics), in applied finance, in medical statistics, and others. torchquad aims to assist research groups in such fields, as well as the general machine learning community.
- Withstanding the curse of dimensionality: The curse of dimensionality makes deterministic methods in particular, but also stochastic ones, computationally expensive when the dimensionality increases. However, many integration methods are embarrassingly parallel, which means they can strongly benefit from GPU parallelization. The curse of dimensionality still applies but the improved scaling alleviates the computational impact.
- Enabling full differentiability: In line with recent trends (e.g. JAX), torchquad builds on PyTorch in such a way that it is fully differentiable. This enables a broad range of optimization and machine learning applications (see the sketch below).
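As an illustration, here is a minimal sketch of differentiating an integral with respect to a parameter of the integrand. It uses the MonteCarlo integrator introduced in the Usage section below and assumes, in line with the differentiability goal, that the returned estimate stays in the autograd graph:

```python
import torch
from torchquad import MonteCarlo, enable_cuda

enable_cuda()  # use the GPU if one is available

k = torch.tensor(2.0, requires_grad=True)  # parameter of the integrand

def integrand(x):
    # f(x) = sin(k * x); the closure keeps a gradient path back to k
    return torch.sin(k * x[:, 0])

mc = MonteCarlo()
estimate = mc.integrate(integrand, dim=1, N=10000, integration_domain=[[0, 1]])
estimate.backward()  # gradient of the integral estimate w.r.t. k
print(k.grad)
```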
Getting Started
This is a brief guide to setting up torchquad.
Prerequisites
We recommend using conda, especially if you want to utilize the GPU. It will automatically set up CUDA and the cudatoolkit for you in that case. Note that torchquad also works on the CPU; however, it is optimized for GPU usage. Currently torchquad only supports NVIDIA cards with CUDA. We are investigating future support for AMD cards through ROCm.
For a detailed list of required packages, please refer to the conda environment file.
Installation
The easiest way to install torchquad is with conda:

```sh
conda install torchquad -c conda-forge -c pytorch
```

Note that since PyTorch is not yet on conda-forge for Windows, we have explicitly included it here using `-c pytorch`.
Alternatively, you can install it with pip:

```sh
pip install torchquad
```

Note that pip will not set up PyTorch with CUDA and GPU support; therefore, we recommend using conda.
Test
After installing torchquad through conda or pip, users can test its correct installation with:

```python
import torchquad
torchquad._deployment_test()
```

After cloning the repository, developers can check the functionality of torchquad by running the following command in the torchquad/tests directory:

```sh
pytest
```
GPU Utilization
With conda you can install the GPU version of PyTorch with:

```sh
conda install pytorch cudatoolkit -c pytorch
```

For alternative installation procedures please refer to the PyTorch Documentation.
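To confirm that the GPU build is active, you can query PyTorch directly; this uses only standard PyTorch calls:

```python
import torch
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is usable
```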
Usage
This is a brief example of how torchquad can be used to compute a simple integral. For a more thorough introduction, please refer to the tutorial section in the documentation.
The full documentation can be found on readthedocs.
```python
# To avoid copying things to GPU memory,
# ideally allocate everything in torch on the GPU
# and avoid non-torch function calls
import torch
from torchquad import MonteCarlo, enable_cuda

# Enable GPU support if available
enable_cuda()

# The function we want to integrate, in this example
# f(x0, x1) = sin(x0) + e^x1 for x0 in [0, 1] and x1 in [-1, 1].
# Note that the function needs to support multiple evaluations at once
# (first dimension of x here).
# Expected result here is ~3.2698.
def some_function(x):
    return torch.sin(x[:, 0]) + torch.exp(x[:, 1])

# Declare an integrator; here we use the simple, stochastic
# Monte Carlo integration method
mc = MonteCarlo()

# Compute the function integral by sampling 10000 points over the domain
integral_value = mc.integrate(some_function, dim=2, N=10000, integration_domain=[[0, 1], [-1, 1]])
```
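The integrate call returns the estimate as a PyTorch tensor on the active device; assuming that, a plain Python float can be extracted with .item():

```python
print(integral_value)         # tensor holding the estimate
print(integral_value.item())  # plain Python float, close to 3.2698
```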
You can find all available integrators here.
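For example, swapping in the deterministic Simpson integrator (the method used in the performance comparison below) only requires changing the integrator object; this sketch assumes it exposes the same integrate() signature as MonteCarlo:

```python
from torchquad import Simpson

simp = Simpson()
integral_value = simp.integrate(some_function, dim=2, N=10000, integration_domain=[[0, 1], [-1, 1]])
```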
Roadmap
See the open issues for a list of proposed features (and known issues).
Performance
Using GPUs, torchquad scales particularly well with integration methods that offer easy parallelization. For example, below you see error and runtime results for integrating the function f(x,y,z) = sin(x * (y+1)²) * (z+1) on a consumer-grade desktop PC.

Figure: Runtime results of the integration. Note the far superior scaling on the GPU (solid line) in comparison to the CPU (dashed and dotted) for both methods.

Figure: Convergence results of the integration. Note that Simpson quickly reaches floating-point precision. Monte Carlo is not competitive here given the low dimensionality of the problem.
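For reference, this benchmark integrand written in the batched convention from the Usage section might look as follows (a sketch; the actual benchmark script may differ):

```python
import torch

# f(x, y, z) = sin(x * (y + 1)^2) * (z + 1), evaluated batch-wise
def benchmark_function(x):
    return torch.sin(x[:, 0] * (x[:, 1] + 1) ** 2) * (x[:, 2] + 1)
```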
Contributing
The project is open to community contributions. Feel free to open an issue or write us an email if you would like to discuss a problem or idea first.
If you want to contribute, please

- Fork the project on GitHub.
- Get the most up-to-date code by following this quick guide for installing torchquad from source:
  - Get miniconda or similar
  - Clone the repo:

    ```sh
    git clone https://github.com/esa/torchquad.git
    ```

  - Set up the environment. This will create a conda environment called torchquad:

    ```sh
    conda env create -f environment.yml
    conda activate torchquad
    ```

Once the installation is done, you are ready to contribute. Please note that PRs should be created from and into the develop branch. For each release, the develop branch is merged into main.
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request on the develop branch, not main (NB: we autoformat every PR with black, so our GitHub actions may create additional commits on your PR for that reason), and we will have a look at your contribution as soon as we can.

Furthermore, please make sure that your PR passes all automated tests; review will only happen after that. Only PRs created on the develop branch with all tests passing will be considered. The only exception to this rule is if you want to update the documentation in relation to the current release on conda / pip; in that case you may ask to merge directly into main.
License
Distributed under the GPL-3.0 License. See LICENSE for more information.
FAQ
- Q:
Error enabling CUDA. cuda.is_available() returned False. CPU will be used.
A: This error indicates that no CUDA-compatible GPU could be found. Either you have no compatible GPU or the necessary CUDA requirements are missing. Usingconda
, you can install them withconda install cudatoolkit
. For more detailed installation instructions, please refer to the PyTorch documentation.
Contact
Created by ESA's Advanced Concepts Team
- Pablo Gómez - pablo.gomez at esa.int
- Gabriele Meoni - gabriele.meoni at esa.int
- Håvard Hem Toftevaag - havard.hem.toftevaag at esa.int
Project Link: https://github.com/esa/torchquad
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file torchquad-0.2.3.tar.gz.
File metadata
- Download URL: torchquad-0.2.3.tar.gz
- Upload date:
- Size: 36.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.2 importlib_metadata/4.6.4 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.1 CPython/3.8.11
File hashes
Algorithm | Hash digest
--- | ---
SHA256 | afae038235bed591246a159e84768ba2776c82332385c41dc9fd2cfbbaf88a45
MD5 | 96265ba628f78e331e51b120afb6edf9
BLAKE2b-256 | 57375051504951b5ee60e8a701e60d6370579bcd0afd8acdcfdde39f66606168
File details
Details for the file torchquad-0.2.3-py3-none-any.whl.
File metadata
- Download URL: torchquad-0.2.3-py3-none-any.whl
- Upload date:
- Size: 38.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.4.2 importlib_metadata/4.6.4 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.1 CPython/3.8.11
File hashes
Algorithm | Hash digest
--- | ---
SHA256 | 37306fe6bee7c529d39024e92a6cb3eee9cb42284fb97284bf32193e2f98016f
MD5 | 9f024ec8bf44f06d4eedc0fb7a29186d
BLAKE2b-256 | eb40850ade708f85aed735494ba4dfb8aa7128c6579b18117a12ce3bbde8b8e0