NVIDIA cuTENSOR

Project description

cuTENSOR is a high-performance CUDA library for tensor primitives.

Key Features

  • Extensive mixed-precision support:

    • FP64 inputs with FP32 compute.

    • FP32 inputs with FP16, BF16, or TF32 compute.

    • Complex-times-real operations.

    • Conjugate (without transpose) support.

  • Support for up to 64-dimensional tensors.

  • Arbitrary data layouts.

  • Trivially serializable data structures.

  • Main computational routines:

    • Direct (i.e., transpose-free) tensor contractions.

      • Supports just-in-time compilation of dedicated kernels.

    • Tensor reductions (including partial reductions).

    • Element-wise tensor operations:

      • Support for various activation functions.

      • Support for padding of the output tensor.

      • Arbitrary tensor permutations.

      • Conversion between different data types.
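The semantics of the main routines above (contraction, partial reduction, permutation) can be sketched with NumPy's einsum notation. This is an illustration of the math only, not the cuTENSOR API; the mode labels m, k, l, n and the shapes are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))   # modes m, k, l
B = rng.standard_normal((6, 5, 3))   # modes l, k, n

# Direct tensor contraction: C[m, n] = sum over k, l of A[m, k, l] * B[l, k, n]
C = np.einsum("mkl,lkn->mn", A, B)

# Partial tensor reduction: sum out mode l only
R = np.einsum("mkl->mk", A)

# Element-wise tensor permutation: reorder modes (m, k, l) -> (l, m, k)
P = np.einsum("mkl->lmk", A)

assert C.shape == (4, 3)
assert R.shape == (4, 5)
assert P.shape == (6, 4, 5)
```

cuTENSOR performs these operations directly on the GPU (for contractions, without materializing intermediate transposes); see the documentation linked below for the actual C API.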

Documentation

Please refer to https://docs.nvidia.com/cuda/cutensor/index.html for the cuTENSOR documentation.

Installation

The cuTENSOR wheel can be installed as follows:

pip install cutensor-cuXX

where XX is the CUDA major version (currently CUDA 11 & 12 are supported). The package cutensor (without the -cuXX suffix) is deprecated. If you have cutensor installed, please remove it prior to installing cutensor-cuXX.
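To check whether the deprecated cutensor package is still present before installing cutensor-cuXX, a small standard-library helper can be used. This is a convenience sketch based on the instructions above, not part of the cuTENSOR distribution:

```python
from importlib import metadata

def has_deprecated_cutensor() -> bool:
    """Return True if the deprecated 'cutensor' package
    (without the -cuXX suffix) is installed."""
    try:
        metadata.distribution("cutensor")
        return True
    except metadata.PackageNotFoundError:
        return False

if has_deprecated_cutensor():
    print("Deprecated 'cutensor' found; run: pip uninstall cutensor")
```

If the check reports the deprecated package, remove it with pip uninstall cutensor and then install cutensor-cuXX as shown above.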


