NVIDIA cuTENSOR
cuTENSOR is a high-performance CUDA library for tensor primitives.
Key Features
Extensive mixed-precision support:
FP64 inputs with FP32 compute.
FP32 inputs with FP16, BF16, or TF32 compute.
Complex-times-real operations.
Conjugate (without transpose) support.
Support for up to 64-dimensional tensors.
Arbitrary data layouts.
Trivially serializable data structures.
Main computational routines:
Direct (i.e., transpose-free) tensor contractions.
Tensor reductions (including partial reductions).
Element-wise tensor operations:
Support for various activation functions.
Arbitrary tensor permutations.
Conversion between different data types.
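The main routines above map onto familiar einsum-style semantics. A minimal CPU sketch with NumPy, illustrative only: cuTENSOR executes the equivalent contraction, reduction, and permutation on the GPU, and "direct" means it does so without materializing transposed copies of the inputs.

```python
import numpy as np

A = np.random.rand(3, 4, 5)   # modes a, b, c
B = np.random.rand(5, 4, 6)   # modes c, b, d

# Tensor contraction over the shared modes b and c:
# C[a, d] = sum_{b, c} A[a, b, c] * B[c, b, d]
C = np.einsum("abc,cbd->ad", A, B)
assert C.shape == (3, 6)

# Partial tensor reduction: sum over mode c only.
R = np.einsum("abc->ab", A)
assert R.shape == (3, 4)

# Arbitrary permutation: reorder modes (a, b, c) -> (c, a, b).
P = np.einsum("abc->cab", A)
assert P.shape == (5, 3, 4)
```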
Documentation
Please refer to https://docs.nvidia.com/cuda/cutensor/index.html for the cuTENSOR documentation.
Installation
The cuTENSOR wheel can be installed as follows:
pip install cutensor-cuXX
where XX is the CUDA major version (currently CUDA 11 is supported). The cutensor package (without the -cuXX suffix) is deprecated.
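After installing, you can sanity-check that the shared library is loadable. The sketch below is an assumption-laden illustration: cutensorGetVersion is part of the public cuTENSOR C API, but how the loader locates libcutensor on your system varies, so the helper simply returns None when the library cannot be found.

```python
import ctypes
import ctypes.util

def cutensor_version():
    """Return cuTENSOR's version as an int, or None if libcutensor
    cannot be located on this system (e.g. the wheel is not installed
    or the library is not on the loader path)."""
    path = ctypes.util.find_library("cutensor")
    if path is None:
        return None
    lib = ctypes.CDLL(path)
    # size_t cutensorGetVersion(void) -- queries the library version.
    lib.cutensorGetVersion.restype = ctypes.c_size_t
    lib.cutensorGetVersion.argtypes = []
    return int(lib.cutensorGetVersion())

print(cutensor_version())
```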