
Package providing torch-based numerical integration methods.


torchquad


High-performance numerical integration on the GPU with PyTorch
Explore the docs »

View Example notebook · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Goals
  3. Getting Started
  4. Usage
  5. Roadmap
  6. Contributing
  7. License
  8. FAQ
  9. Contact

About The Project

The torchquad module allows utilizing GPUs for efficient numerical integration with PyTorch. The software is free to use and is designed for the machine learning community and research groups focusing on topics requiring high-dimensional integration.

Built With

This project is built with PyTorch.

Goals

  • Progressing science: Multidimensional integration is needed in many fields of physics (from particle physics to astrophysics), in applied finance, in medical statistics, and so on. With torchquad, we wish to reach research groups in such fields, as well as the general machine learning community.
  • Withstanding the curse of dimensionality: The curse of dimensionality makes deterministic methods in particular, but also stochastic ones, extremely slow when the dimensionality increases. This gives the researcher a choice between computationally heavy and time-consuming simulations on the one hand and inaccurate evaluations on the other. Luckily, many integration methods are embarrassingly parallel, which means they can strongly benefit from GPU parallelization. The curse of dimensionality still applies, but GPUs can handle the problem much better than CPUs can.
  • Delivering a convenient and functional tool: torchquad is built with PyTorch, which means it is fully differentiable. Furthermore, the library of available and upcoming methods in torchquad offers high-efficiency integration for any need.
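To illustrate what "fully differentiable" buys you, here is a minimal sketch in plain PyTorch (not the torchquad API): a Monte Carlo estimate of an integral whose integrand has a learnable parameter, with the gradient flowing back through the estimate. The integrand f(x) = a·x on [0, 1] is a made-up example; its integral is a/2, so the gradient with respect to a should be close to 0.5.

```python
import torch

torch.manual_seed(0)

# Integrand with a learnable parameter a: f(x) = a * x on [0, 1].
# Analytically, the integral is a / 2, so d(integral)/da = 0.5.
a = torch.tensor(1.5, requires_grad=True)

x = torch.rand(100_000)    # uniform samples on [0, 1]
integral = (a * x).mean()  # Monte Carlo estimate of the integral
integral.backward()        # gradient flows through the estimate

print(float(a.grad))       # close to 0.5
```

Because the estimate is an ordinary tensor expression, the same pattern lets an integral appear inside a larger differentiable model.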

Getting Started

This is a brief guide on how to set up torchquad.

Prerequisites

We recommend using conda, especially if you want to utilize the GPU. It will automatically set up CUDA and the cudatoolkit for you in that case. Note that torchquad also works on the CPU. However, it is optimized for GPU usage.

  • conda, which will take care of all requirements for you. For a detailed list of required packages, please refer to the conda environment file.

Installation

  1. Get miniconda or similar
  2. Clone the repo
    git clone https://github.com/esa/torchquad.git
    
  3. Set up the environment. This will create a conda environment called torchquad:
    conda env create -f environment.yml
    

Alternatively, you can use

pip install torchquad

Note that pip will not set up PyTorch with CUDA and GPU support. We therefore recommend using conda.

GPU Utilization

With conda you can install the GPU version of PyTorch with conda install pytorch cudatoolkit -c pytorch. For alternative installation procedures please refer to the PyTorch Documentation.
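After installation, you can check from Python whether PyTorch can actually see a CUDA-capable GPU. This is a generic PyTorch check, not a torchquad function:

```python
import torch

# Pick the GPU if PyTorch can see one, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")

if device == "cuda":
    # Name of the first visible CUDA device
    print(torch.cuda.get_device_name(0))
```

If this prints `cpu` on a machine with an NVIDIA GPU, the CUDA toolkit is likely missing or mismatched; see the FAQ below.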

Usage

This is a brief example of how torchquad can be used to compute a simple integral. For a more thorough introduction, please refer to the example notebook.

The full documentation can be found on readthedocs.

# To avoid copying things to GPU memory,
# ideally allocate everything in torch on the GPU
# and avoid non-torch function calls
import torch
from torchquad import MonteCarlo

# The function we want to integrate, in this example f(x,y) = sin(x) + e^y
def some_function(x):
    return torch.sin(x[0]) + torch.exp(x[1])

# Declare an integrator; here we use the simple, stochastic
# Monte Carlo integration method
mc = MonteCarlo()

# Compute the integral by sampling 10,000 points over the domain
integral_value = mc.integrate(
    some_function, dim=2, N=10000, integration_domain=[[0, 1], [-1, 1]]
)

You can find all available integrators here.
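As a sanity check, the integral above has a closed form: integrating sin(x) + e^y over [0, 1] × [-1, 1] gives 2(1 - cos 1) + e - 1/e ≈ 3.27. A plain-PyTorch Monte Carlo estimate (independent of torchquad) should land close to that value:

```python
import math
import torch

torch.manual_seed(42)

# Uniform samples over the domain [0, 1] x [-1, 1]
N = 100_000
x = torch.rand(N)          # x in [0, 1]
y = 2 * torch.rand(N) - 1  # y in [-1, 1]

# Monte Carlo: domain volume times the mean of the integrand
volume = 1.0 * 2.0
estimate = volume * (torch.sin(x) + torch.exp(y)).mean()

exact = 2 * (1 - math.cos(1.0)) + math.e - 1 / math.e
print(float(estimate), exact)  # both close to 3.27
```

Comparing a torchquad result against a hand-rolled estimate or a known closed form like this is a quick way to validate a setup.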

Roadmap

See the open issues for a list of proposed features (and known issues).

Contributing

The project is open to community contributions. Feel free to open an issue or write us an email if you would like to discuss a problem or idea first.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the GPL-3.0 License. See LICENSE for more information.

FAQ

  1. Q: Error enabling CUDA. cuda.is_available() returned False. CPU will be used.
    A: This error indicates that no CUDA-compatible GPU could be found. Either you have no compatible GPU or the necessary CUDA requirements are missing. Using conda, you can install them with conda install cudatoolkit. For more detailed installation instructions, please refer to the PyTorch documentation.

Contact

Created by ESA's Advanced Concepts Team

  • Pablo Gómez - pablo.gomez at esa.int
  • Gabriele Meoni - gabriele.meoni at esa.int
  • Håvard Hem Toftevaag - havard.hem.toftevaag at esa.int

Project Link: https://github.com/esa/torchquad

