
A framework for high-performance data analytics and machine learning.

Project description


Heat is a distributed tensor framework for high-performance data analytics.

What is Heat for?

Heat builds on PyTorch and mpi4py to provide high-performance computing infrastructure for memory-intensive applications within the NumPy/SciPy ecosystem.

With Heat you can:

  • port existing NumPy/SciPy code from single-CPU to multi-node clusters with minimal coding effort (see the example after this list);
  • exploit the entire, cumulative RAM of your many nodes for memory-intensive operations and algorithms;
  • run your NumPy/SciPy code on GPUs (CUDA, ROCm; Apple MPS support is coming up).
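
For instance, porting NumPy code is often just a matter of swapping the import and choosing a split axis along which the data is distributed across processes. A minimal sketch (the array size is illustrative):

import heat as ht

# create a 10000 x 10000 random array, distributed along axis 0
# across all MPI processes (run with e.g. mpirun -n 4 python example.py)
x = ht.random.rand(10000, 10000, split=0)

# NumPy-style operations work transparently on the distributed array
y = x - x.mean(axis=0)
print(y.std())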

For an example that highlights the benefits of multi-node parallelism and hardware acceleration, and how easily these can be exploited with Heat, see, e.g., our blog post on truncated SVD of a 200 GB data set.

Check out our coverage tables to see which NumPy, SciPy, and scikit-learn functions are already supported.

For a complete overview of available functionality, check out our features and the Heat API Reference. If you need a functionality that is not yet supported, please file a new issue (see Support Channels below).

Features

  • High-performance n-dimensional arrays
  • CPU, GPU, and distributed computation using MPI
  • Powerful data analytics and machine learning methods
  • Seamless integration with the NumPy/SciPy ecosystem
  • Python array API (work in progress)

Getting Started

Go to Quick Start for an overview; for more details, see Installation.

You can test your setup by running the heat_test.py script:

mpirun -n 2 python heat_test.py

It should print something like this:

x is distributed:  True
Global DNDarray x:  DNDarray([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=ht.int32, device=cpu:0, split=0)
Global DNDarray x:
Local torch tensor on rank  0 :  tensor([0, 1, 2, 3, 4], dtype=torch.int32)
Local torch tensor on rank  1 :  tensor([5, 6, 7, 8, 9], dtype=torch.int32)
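
If you do not have the script at hand, it is roughly equivalent to the following sketch (an approximation, not the exact file contents):

import heat as ht

# a small array, distributed along axis 0 over all MPI processes
x = ht.arange(10, split=0)

if x.comm.rank == 0:
    print("x is distributed: ", x.is_distributed())

# printing a DNDarray is collective; only rank 0 sees the full content
print("Global DNDarray x: ", x)

# each process holds a local slice of the global array as a torch tensor
print("Local torch tensor on rank ", x.comm.rank, ": ", x.larray)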

Check out our Jupyter Notebook Tutorials: choose the local version to try things out on your machine, or the hpc version if you have access to an HPC system.

The complete documentation of the latest version is always deployed on Read the Docs.

Installation

Requirements

Basics

  • python >= 3.8
  • MPI (OpenMPI, MPICH, Intel MPI, etc.)
  • mpi4py >= 3.0.0
  • pytorch >= 1.11.0
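
To verify that your environment meets these requirements, printing the installed versions is a quick check (a minimal sketch):

import sys
import mpi4py
import torch
import heat as ht

print("python :", sys.version.split()[0])
print("mpi4py :", mpi4py.__version__)
print("pytorch:", torch.__version__)
print("heat   :", ht.__version__)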

Parallel I/O

  • h5py
  • netCDF4
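
With h5py installed, Heat reads and writes HDF5 files in parallel, so every process only touches its own chunk of the data. A minimal sketch, assuming a file data.h5 with a dataset named X (both names are hypothetical):

import heat as ht

# each MPI process loads only its slice of the dataset
x = ht.load_hdf5("data.h5", dataset="X", split=0)

# center the data, then write the distributed result back in parallel
x = x - x.mean(axis=0)
ht.save_hdf5(x, "centered.h5", "X")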

GPU support

In order to do computations on your GPU(s) (see the sketch after this list):

  • your CUDA or ROCm installation must match your hardware and its drivers;
  • your PyTorch installation must be compiled with CUDA/ROCm support.
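
If both conditions are met, running on the GPU is just a matter of selecting the device when arrays are created. A minimal sketch with a CPU fallback:

import torch
import heat as ht

# ROCm builds of PyTorch also report availability through torch.cuda
device = "gpu" if torch.cuda.is_available() else "cpu"

# allocate the distributed array directly on the chosen device
x = ht.ones((1000, 1000), split=0, device=device)
print(x.device)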

HPC systems

On most HPC systems you will not be able to install or compile MPI or CUDA/ROCm yourself. Instead, you will most likely need to load pre-installed MPI and/or CUDA/ROCm modules from the module system. You may even find PyTorch, h5py, or mpi4py provided as (part of) such a module. Note that for optimal performance on GPU, you need to use an MPI library that has been compiled with CUDA/ROCm support (so-called "CUDA-aware MPI").

pip

Install the latest version with

pip install heat[hdf5,netcdf]

where the part in brackets is a list of optional dependencies. You can omit it if you do not need HDF5 or NetCDF support.

conda

The conda build ships with all dependencies, including OpenMPI.

conda install -c conda-forge heat

Support Channels

Go ahead and ask questions on GitHub Discussions. If you found a bug or are missing a feature, then please file a new issue. You can also get in touch with us on Mattermost (sign up with your GitHub credentials). Once you log in, you can introduce yourself on the Town Square channel.

Contribution guidelines

We welcome contributions from the community. If you want to contribute to Heat, be sure to review the Contribution Guidelines and Resources before getting started!

We use GitHub issues for tracking requests and bugs; for general questions and discussion, please use Discussions. As mentioned above, you can also reach us on Mattermost.

If you're unsure where to start or how your skills fit in, reach out! You can ask us here on GitHub by leaving a comment on a relevant open issue.

If you are new to contributing to open source, this guide helps explain why, what, and how to get involved.

Resources

Parallel Computing and MPI:

  • mpi4py

License

Heat is distributed under the MIT license, see our LICENSE file.

Citing Heat

Please do mention Heat in your publications if it helped your research. You can cite:

  • Götz, M., Debus, C., Coquelin, D., Krajsek, K., Comito, C., Knechtges, P., Hagemeier, B., Tarnawa, M., Hanselmann, S., Siggel, S., Basermann, A. & Streit, A. (2020). HeAT - a Distributed and GPU-accelerated Tensor Framework for Data Analytics. In 2020 IEEE International Conference on Big Data (Big Data) (pp. 276-287). IEEE, DOI: 10.1109/BigData50022.2020.9378050.
@inproceedings{heat2020,
    title={{HeAT -- a Distributed and GPU-accelerated Tensor Framework for Data Analytics}},
    author={
      Markus Götz and
      Charlotte Debus and
      Daniel Coquelin and
      Kai Krajsek and
      Claudia Comito and
      Philipp Knechtges and
      Björn Hagemeier and
      Michael Tarnawa and
      Simon Hanselmann and
      Martin Siggel and
      Achim Basermann and
      Achim Streit
    },
    booktitle={2020 IEEE International Conference on Big Data (Big Data)},
    year={2020},
    pages={276-287},
    month={December},
    publisher={IEEE},
    doi={10.1109/BigData50022.2020.9378050}
}

FAQ

Work in progress...

Acknowledgements

This work is supported by the Helmholtz Association Initiative and Networking Fund under project number ZT-I-0003 and the Helmholtz AI platform grant.

This project has received funding from Google Summer of Code (GSoC) in 2022.

This work is partially carried out under a programme of, and funded by, the European Space Agency. Any view expressed in this repository or related publications can in no way be taken to reflect the official opinion of the European Space Agency.




Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

heat-1.5.0.tar.gz (326.6 kB)

Uploaded Source

Built Distribution

heat-1.5.0-py3-none-any.whl (357.9 kB)

Uploaded Python 3

File details

Details for the file heat-1.5.0.tar.gz.

File metadata

  • Download URL: heat-1.5.0.tar.gz
  • Upload date:
  • Size: 326.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.3

File hashes

Hashes for heat-1.5.0.tar.gz:

  • SHA256: a2e2d7f0c1f340ab2597f2b9c02563f0057419a53287fbf4cdf1a7934bc6d60b
  • MD5: b77ad9463fde33145f0688ae551e947f
  • BLAKE2b-256: 021bacf5373230767c80e37340bbce1434a91dc68a2900749fed5241cdeb52db


File details

Details for the file heat-1.5.0-py3-none-any.whl.

File metadata

  • Download URL: heat-1.5.0-py3-none-any.whl
  • Upload date:
  • Size: 357.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.3

File hashes

Hashes for heat-1.5.0-py3-none-any.whl:

  • SHA256: 05531712acc1860f5b65a64d19311f8f3d5e558af77a938999675c0ccde9ce18
  • MD5: f542c27e156e10a3a35b83e4cb72a225
  • BLAKE2b-256: e35b0c23f478070146f52c1713983677647c4035cd5a1fdf702fe2bd4409d91e

