Tensors and Dynamic neural networks in Python with strong GPU acceleration

Project description

PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
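
For example, tensors interoperate with NumPy directly; a minimal sketch (the shapes are arbitrary):

import numpy as np
import torch

a = np.ones(5)
t = torch.from_numpy(a)   # shares memory with the NumPy array (on CPU)
t.add_(1)                 # in-place add is visible through `a` as well
print(a)                  # [2. 2. 2. 2. 2.]
b = t.numpy()             # zero-copy view back into NumPy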

Our trunk health (Continuous Integration signals) can be found at hud.pytorch.org.

More About PyTorch

Learn the basics of PyTorch

At a granular level, PyTorch is a library that consists of the following components:

Component              Description
---------------------  -----------------------------------------------------------------------------
torch                  A Tensor library like NumPy, with strong GPU support
torch.autograd         A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
torch.jit              A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code
torch.nn               A neural networks library deeply integrated with autograd, designed for maximum flexibility
torch.multiprocessing  Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training
torch.utils            DataLoader and other utility functions for convenience
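
As a quick illustration of the last row above, a minimal sketch of torch.utils.data.DataLoader over a throwaway random dataset (the sizes are arbitrary):

import torch
from torch.utils.data import DataLoader, TensorDataset

xs = torch.randn(100, 10)            # 100 samples, 10 features each
ys = torch.randint(0, 2, (100,))     # binary labels
loader = DataLoader(TensorDataset(xs, ys), batch_size=32, shuffle=True)

for batch_x, batch_y in loader:
    print(batch_x.shape, batch_y.shape)  # e.g. torch.Size([32, 10]) torch.Size([32])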

Usually, PyTorch is used either as:

  • A replacement for NumPy to use the power of GPUs.
  • A deep learning research platform that provides maximum flexibility and speed.

Elaborating Further:

A GPU-Ready Tensor Library

If you use NumPy, then you have used Tensors (a.k.a. ndarray).

Tensor illustration

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs, such as slicing, indexing, mathematical operations, linear algebra, and reductions. And they are fast!
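
A minimal sketch of a few of these routines (the shapes and the device check are illustrative):

import torch

x = torch.rand(5, 3)
print(x[:, 1])             # indexing / slicing
print(x.sum(dim=0))        # reduction
y = torch.rand(3, 4)
print(x @ y)               # matrix multiplication (linear algebra)

if torch.cuda.is_available():
    x = x.to("cuda")       # the same code now runs on the GPU
    print((x * 2).cpu())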

Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world. One has to build a neural network and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as torch-autograd, autograd, Chainer, etc.

While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research.
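
As a minimal sketch of what define-by-run buys you, the network depth below is decided by ordinary Python control flow on every forward pass, and backward() simply replays whatever ran (the sizes and loop bound are arbitrary):

import random
import torch
import torch.nn as nn

fc = nn.Linear(10, 10)
x = torch.randn(1, 10)

h = x
for _ in range(random.randint(1, 4)):  # depth chosen at run time
    h = torch.relu(fc(h))
loss = h.sum()
loss.backward()                        # the tape replays the ops that actually ran
print(fc.weight.grad.shape)            # torch.Size([10, 10])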

Dynamic graph

Python First

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use NumPy / SciPy / scikit-learn etc. You can write your new neural network layers in Python itself, using your favorite libraries and packages such as Cython and Numba. Our goal is to not reinvent the wheel where appropriate.
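
For example, a new layer is just a Python class; a minimal sketch (ScaledLinear is a made-up layer for illustration):

import torch
import torch.nn as nn

class ScaledLinear(nn.Module):
    """Hypothetical layer: a linear map with a learnable global scale."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return self.scale * self.linear(x)

layer = ScaledLinear(10, 5)
out = layer(torch.randn(2, 10))  # autograd tracks `scale` like any other parameter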

Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. The stack trace points to exactly where your code was defined. We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
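
Concretely, every line runs eagerly, so intermediate values are ordinary objects you can print, or stop on with a debugger, at any point; a minimal sketch:

import torch

x = torch.randn(3, 3)
y = x @ x.t()            # executes right here, not in a deferred graph
print(y)                 # inspect the intermediate result directly
# import pdb; pdb.set_trace()   # a breakpoint works on any line
z = y.relu().sum()
print(z.item())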

Fast and Lean

PyTorch has minimal framework overhead. We integrate acceleration libraries such as Intel MKL and NVIDIA (cuDNN, NCCL) to maximize speed. At the core, its CPU and GPU Tensor and neural network backends are mature and have been tested for years.

Hence, PyTorch is quite fast — whether you run small or large neural networks.

The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives. We've written custom memory allocators for the GPU to make sure that your deep learning models are maximally memory efficient. This enables you to train bigger deep learning models than before.
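
You can watch the caching allocator at work through torch.cuda's memory introspection calls; a minimal sketch (assumes a CUDA build and a visible GPU):

import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    print(torch.cuda.memory_allocated())   # bytes currently occupied by tensors
    print(torch.cuda.memory_reserved())    # bytes held by the caching allocator
    del x
    torch.cuda.empty_cache()               # release cached blocks back to the driver
    print(torch.cuda.memory_allocated())   # back near zero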

Extensions Without Pain

Writing new neural network modules, or interfacing with PyTorch's Tensor API, is designed to be straightforward, with minimal abstractions.

You can write new neural network layers in Python using the torch API or your favorite NumPy-based libraries such as SciPy.

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate. No wrapper code needs to be written. You can see a tutorial here and an example here.
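
As an illustration of the Python route, a minimal sketch of a custom torch.autograd.Function whose forward and backward are written with NumPy (NumpyExp is a toy example; exp is chosen only because its derivative is itself):

import numpy as np
import torch

class NumpyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        result = torch.from_numpy(np.exp(x.detach().numpy()))
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        (result,) = ctx.saved_tensors
        return grad_output * result  # d/dx exp(x) = exp(x)

x = torch.randn(4, requires_grad=True)
NumpyExp.apply(x).sum().backward()
print(torch.allclose(x.grad, x.exp()))  # True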

Installation

Binaries

Commands to install binaries via Conda or pip wheels are on our website: https://pytorch.org/get-started/locally/

NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided here, and the L4T container is published here.

They require JetPack 4.2 and above, and @dusty-nv and @ptrblck are maintaining them.

From Source

Prerequisites

If you are installing from source, you will need:

  • Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
  • A compiler that fully supports C++17, such as clang or gcc (gcc 9.4.0 or newer is required)

We highly recommend installing within an Anaconda environment: you will get a high-quality BLAS library (MKL) and controlled dependency versions regardless of your Linux distro.

NVIDIA CUDA Support

If you want to compile with CUDA support, select a supported version of CUDA from our support matrix, then install the matching NVIDIA CUDA toolkit and cuDNN.

Note: You can refer to the cuDNN Support Matrix for the cuDNN versions compatible with the various supported CUDA and CUDA driver versions and NVIDIA hardware.

If you want to disable CUDA support, export the environment variable USE_CUDA=0. Other potentially useful environment variables may be found in setup.py.

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are available here.

AMD ROCm Support

If you want to compile with ROCm support, install AMD ROCm 4.0 or above. Note that ROCm is currently supported only on Linux systems.

If you want to disable ROCm support, export the environment variable USE_ROCM=0. Other potentially useful environment variables may be found in setup.py.

Intel GPU Support

If you want to compile with Intel GPU support, follow these instructions.

If you want to disable Intel GPU support, export the environment variable USE_XPU=0. Other potentially useful environment variables may be found in setup.py.

Install Dependencies

Common

conda install cmake ninja
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section below
pip install -r requirements.txt

On Linux

conda install intel::mkl-static intel::mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda121  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo

# (optional) If using torch.compile with inductor/triton, install the matching version of triton
# Run from the pytorch directory after cloning
# For Intel GPU support, please explicitly `export USE_XPU=1` before running the command.
make triton

On macOS

# Add this package on intel x86 processor machines only
conda install intel::mkl-static intel::mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv

On Windows

conda install intel::mkl-static intel::mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39

Get the PyTorch Source

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive

Install PyTorch

On Linux

If you would like to compile PyTorch with new C++ ABI enabled, then first run this command:

export _GLIBCXX_USE_CXX11_ABI=1

If you're compiling for AMD ROCm then first run this command:

# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py

Install PyTorch

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop

Aside: If you are using Anaconda, you may experience an error caused by the linker:

build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1

This is caused by ld from the Conda environment shadowing the system ld. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.
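
Once the build finishes, a quick smoke test from a Python prompt confirms that the extension modules load and run; a minimal sketch:

import torch

print(torch.__version__)          # should report the locally built version
print(torch.cuda.is_available())  # True only for a CUDA build with a visible GPU
x = torch.randn(2, 2)
print(x @ x.t())                  # exercises the compiled tensor kernels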

On macOS

python3 setup.py develop

On Windows

Choose Correct Visual Studio Version.

PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise, Professional, or Community Editions. You can also install the build tools from https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools do not come with Visual Studio Code by default.

If you want to build legacy Python code, please refer to Building on legacy code and CUDA.

CPU-only builds

In this mode PyTorch computations will run on your CPU, not your GPU.

conda activate
python setup.py develop

Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the build environment by tweaking CMAKE_INCLUDE_PATH and LIB. The instructions here are an example of setting up both MKL and Intel OpenMP. Without these configurations for CMake, the Microsoft Visual C OpenMP runtime (vcomp) will be used.
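
After building, you can check which parallel backend and OpenMP settings the binary was actually compiled with; a minimal sketch using torch's build-configuration introspection:

import torch

# Prints the parallelization settings of this build, including OpenMP details
print(torch.__config__.parallel_info())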

CUDA based build

In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.

NVTX is needed to build PyTorch with CUDA. NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To add it to an existing CUDA installation, run the CUDA installer again and check the corresponding checkbox. Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If ninja.exe is detected in PATH, Ninja will be used as the default generator; otherwise, VS 2017 / 2019 will be used.
If Ninja is selected as the generator, the latest MSVC will be selected as the underlying toolchain.

Additional libraries such as Magma, oneDNN (a.k.a. MKLDNN or DNNL), and Sccache are often needed. Please refer to the installation-helper to install them.

You can refer to the build_pytorch.bat script for some other environment variable configurations.

:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: else CMake would throw an error as `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python setup.py develop

Adjust Build Options (Optional)

You can optionally adjust the configuration of CMake variables (without building first) as follows. For example, adjusting the pre-detected directories for cuDNN or BLAS can be done with such a step.

On Linux

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build

On macOS

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build

Docker Image

Using pre-built images

You can also pull a pre-built docker image from Docker Hub and run it with docker v19.03+:

docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest

Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g., for multithreaded data loaders), the default shared memory segment size that the container runs with is not enough; you should increase the shared memory size with either the --ipc=host or --shm-size command-line option to nvidia-docker run.

Building the image yourself

NOTE: Must be built with a docker version > 18.06

The Dockerfile is supplied to build images with CUDA 11.1 support and cuDNN v8. You can pass the PYTHON_VERSION=x.y make variable to specify which Python version is to be used by Miniconda, or leave it unset to use the default.

make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch

You can also pass the CMAKE_VARS="..." environment variable to specify additional CMake variables to be passed to CMake during the build. See setup.py for the list of available variables.

CMAKE_VARS="..." make -f docker.Makefile

Building the Documentation

To build documentation in various formats, you will need Sphinx and the readthedocs theme.

cd docs/
pip install -r requirements.txt

You can then build the documentation by running make <format> from the docs/ folder. Run make to get a list of all available output formats.

If you get a katex error, run npm install katex. If it persists, try npm install -g katex.

Note: if you installed nodejs with a different package manager (e.g., conda) then npm will probably install a version of katex that is not compatible with your version of nodejs and doc builds will fail. A combination of versions that is known to work is node@6.13.1 and katex@0.13.18. To install the latter with npm you can run npm install -g katex@0.13.18

Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found on our website.

Getting Started

Three pointers to get you started:

Resources

Communication

Releases and Contributing

Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.

To learn more about making a contribution to PyTorch, please see our Contribution page. For more information about PyTorch releases, see the Release page.

The Team

PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

PyTorch is currently maintained by Soumith Chintala, Gregory Chanan, Dmytro Dzhulgakov, Edward Yang, and Nikita Shulga with major contributions coming from hundreds of talented individuals in various forms and means. A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary Devito.

Note: This project is unrelated to hughperkins/pytorch with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.

License

PyTorch has a BSD-style license, as found in the LICENSE file.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

File                                               Size      Python        Platform
-------------------------------------------------  --------  ------------  ---------------------
torch-2.4.0-cp312-none-macosx_11_0_arm64.whl       62.1 MB   CPython 3.12  macOS 11.0+ ARM64
torch-2.4.0-cp312-cp312-win_amd64.whl              197.8 MB  CPython 3.12  Windows x86-64
torch-2.4.0-cp312-cp312-manylinux2014_aarch64.whl  89.7 MB   CPython 3.12  manylinux2014 aarch64
torch-2.4.0-cp312-cp312-manylinux1_x86_64.whl      797.2 MB  CPython 3.12  manylinux1 x86-64
torch-2.4.0-cp311-none-macosx_11_0_arm64.whl       62.1 MB   CPython 3.11  macOS 11.0+ ARM64
torch-2.4.0-cp311-cp311-win_amd64.whl              197.9 MB  CPython 3.11  Windows x86-64
torch-2.4.0-cp311-cp311-manylinux2014_aarch64.whl  89.9 MB   CPython 3.11  manylinux2014 aarch64
torch-2.4.0-cp311-cp311-manylinux1_x86_64.whl      797.3 MB  CPython 3.11  manylinux1 x86-64
torch-2.4.0-cp310-none-macosx_11_0_arm64.whl       62.1 MB   CPython 3.10  macOS 11.0+ ARM64
torch-2.4.0-cp310-cp310-win_amd64.whl              197.9 MB  CPython 3.10  Windows x86-64
torch-2.4.0-cp310-cp310-manylinux2014_aarch64.whl  89.8 MB   CPython 3.10  manylinux2014 aarch64
torch-2.4.0-cp310-cp310-manylinux1_x86_64.whl      797.2 MB  CPython 3.10  manylinux1 x86-64
torch-2.4.0-cp39-none-macosx_11_0_arm64.whl        62.1 MB   CPython 3.9   macOS 11.0+ ARM64
torch-2.4.0-cp39-cp39-win_amd64.whl                198.0 MB  CPython 3.9   Windows x86-64
torch-2.4.0-cp39-cp39-manylinux2014_aarch64.whl    89.8 MB   CPython 3.9   manylinux2014 aarch64
torch-2.4.0-cp39-cp39-manylinux1_x86_64.whl        797.2 MB  CPython 3.9   manylinux1 x86-64
torch-2.4.0-cp38-none-macosx_11_0_arm64.whl        62.1 MB   CPython 3.8   macOS 11.0+ ARM64
torch-2.4.0-cp38-cp38-win_amd64.whl                198.1 MB  CPython 3.8   Windows x86-64
torch-2.4.0-cp38-cp38-manylinux2014_aarch64.whl    89.8 MB   CPython 3.8   manylinux2014 aarch64
torch-2.4.0-cp38-cp38-manylinux1_x86_64.whl        797.2 MB  CPython 3.8   manylinux1 x86-64

File details

The SHA256, MD5, and BLAKE2b-256 digests for each wheel are listed below. The Windows wheels were uploaded via twine/5.0.0 (CPython 3.11.7) without Trusted Publishing; sizes and tags appear in the table above. See more details on using hashes here.

Hashes for torch-2.4.0-cp312-none-macosx_11_0_arm64.whl
  SHA256       91aaf00bfe1ffa44dc5b52809d9a95129fca10212eca3ac26420eb11727c6288
  MD5          a79a8c9469d3b9fa54cfff62d6ee7e54
  BLAKE2b-256  c787489ebb234e75760e06fa4789fa6d4e13c125beefa1483ce35c9e43dcd395

Hashes for torch-2.4.0-cp312-cp312-win_amd64.whl
  SHA256       3374128bbf7e62cdaed6c237bfd39809fbcfaa576bee91e904706840c3f2195c
  MD5          28bbddfbff60a89c39e4dfe7b193dee5
  BLAKE2b-256  dc95a14dd84ce65e5ce176176393a80b2f74864ee134a31f590140456a4c0959

Hashes for torch-2.4.0-cp312-cp312-manylinux2014_aarch64.whl
  SHA256       bc3988e8b36d1e8b998d143255d9408d8c75da4ab6dd0dcfd23b623dfb0f0f57
  MD5          7af8d9be53832e02155fee3257f2c558
  BLAKE2b-256  9a5d327fb72044c22d68a826643abf2e220db3d7f6005a41a6b167af1ffbc708

Hashes for torch-2.4.0-cp312-cp312-manylinux1_x86_64.whl
  SHA256       997084a0f9784d2a89095a6dc67c7925e21bf25dea0b3d069b41195016ccfcbb
  MD5          df832ffc0ea99a3e71c24b2cb9cb0abe
  BLAKE2b-256  bf55b6c74df4695f94a9c3505021bc2bd662e271d028d055b3b2529f3442a3bd

Hashes for torch-2.4.0-cp311-none-macosx_11_0_arm64.whl
  SHA256       f169b4ea6dc93b3a33319611fcc47dc1406e4dd539844dcbd2dec4c1b96e166d
  MD5          5ead96a7affe8be25f1c4cdce10d2702
  BLAKE2b-256  b7d05e8f96d83889e77b478b90e7d8d24a5fc14c5c9350c6b93d071f45f39096

Hashes for torch-2.4.0-cp311-cp311-win_amd64.whl
  SHA256       97730014da4c57ffacb3c09298c6ce05400606e890bd7a05008d13dd086e46b1
  MD5          9825a5307c42aae3f4a031208b3c1905
  BLAKE2b-256  18cff69dff972a748e08e1bf602ef94ea5c6d4dd2f41cea22c8ad67a607d8b41

Hashes for torch-2.4.0-cp311-cp311-manylinux2014_aarch64.whl
  SHA256       7334325c0292cbd5c2eac085f449bf57d3690932eac37027e193ba775703c9e6
  MD5          0e2a5db9216da2e787762d35b7f7884d
  BLAKE2b-256  84fa2b510a02809ddd70aed821bc2328c4effd206503df38a1328c9f1f957813

Hashes for torch-2.4.0-cp311-cp311-manylinux1_x86_64.whl
  SHA256       e743adadd8c8152bb8373543964551a7cb7cc20ba898dc8f9c0cdbe47c283de0
  MD5          19fc78eed98d8db11a90e38ca57e170a
  BLAKE2b-256  80839b7681e41e59adb6c2b042f7e8eb716515665a6eed3dda4215c6b3385b90

Hashes for torch-2.4.0-cp310-none-macosx_11_0_arm64.whl
  SHA256       685418ab93730efbee71528821ff54005596970dd497bf03c89204fb7e3f71de
  MD5          2cbbe39a365ad813d5136e257e6cf4a4
  BLAKE2b-256  ff70feb6338f48615b5a5fe8ff218c15ae9897fa7c1c996dddf9867e8306a8cf

Hashes for torch-2.4.0-cp310-cp310-win_amd64.whl
  SHA256       2497cbc7b3c951d69b276ca51fe01c2865db67040ac67f5fc20b03e41d16ea4a
  MD5          3b36f5dcfe2940c6d938689b70870a8a
  BLAKE2b-256  198e24221589eb2dc066b14e29800d2e801c446f697c2d2240a9a61c6c0c5101

Hashes for torch-2.4.0-cp310-cp310-manylinux2014_aarch64.whl
  SHA256       c4ca297b7bd58b506bfd6e78ffd14eb97c0e7797dcd7965df62f50bb575d8954
  MD5          71e9dc87046f4e91151728d87199feaa
  BLAKE2b-256  817784a2cb46649f538ea9d317b7272476d295df9a0cfc92907145a854c8c67f

Hashes for torch-2.4.0-cp310-cp310-manylinux1_x86_64.whl
  SHA256       4ed94583e244af51d6a8d28701ca5a9e02d1219e782f5a01dd401f90af17d8ac
  MD5          0f62bf733b5a57a80b5085887aeaddbd
  BLAKE2b-256  9abd4161ae28fb1c388a8ee30ca3aa72cf11ac3016ce62bc9e82c71ce193c410

Hashes for torch-2.4.0-cp39-none-macosx_11_0_arm64.whl
  SHA256       8940fc8b97a4c61fdb5d46a368f21f4a3a562a17879e932eb51a5ec62310cb31
  MD5          7c98c23fa867eec8023cef34c42dd88a
  BLAKE2b-256  3a4a7441a1ab49762309146ea31fab1e55362f13f7749c1aabff41d8edc887f9

Hashes for torch-2.4.0-cp39-cp39-win_amd64.whl
  SHA256       a2feb98ac470109472fb10dfef38622a7ee08482a16c357863ebc7bc7db7c8f7
  MD5          2c8fa1aad8367ae6c3bc6303e74da880
  BLAKE2b-256  9fef13faff7ef5770cea29ef2c06e2b87d6f34697973aef5eea4234948b46c4d

Hashes for torch-2.4.0-cp39-cp39-manylinux2014_aarch64.whl
  SHA256       ed765d232d23566052ba83632ec73a4fccde00b4c94ad45d63b471b09d63b7a7
  MD5          d7cb5e09184abf0f6067660fada467d5
  BLAKE2b-256  0538e4ad00f4e60c9010b981e1a94d58df4a96b9b10ba6ef585be6019f54b543

Hashes for torch-2.4.0-cp39-cp39-manylinux1_x86_64.whl
  SHA256       618808d3f610d5f180e47a697d4ec90b810953bb1e020f424b2ac7fb0884b545
  MD5          06bbac5dee94af9c62d8abe3a76b5b74
  BLAKE2b-256  36803ac18a2db50d832745c1c5db7e47c4d0e02f1a11e92185155a6b218cbbe3

Hashes for torch-2.4.0-cp38-none-macosx_11_0_arm64.whl
  SHA256       3af4de2a618fb065e78404c4ba27a818a7b7957eaeff28c6c66ce7fb504b68b8
  MD5          3815fc7823ef35c8a00e17380f226554
  BLAKE2b-256  f8d2f9ffcde96164c91f82e154809ad6238f1871f9f2393d56648009287c948e

Hashes for torch-2.4.0-cp38-cp38-win_amd64.whl
  SHA256       688eec9240f3ce775f22e1e1a5ab9894f3d5fe60f3f586deb7dbd23a46a83916
  MD5          282985c12cc18d20ad59e02a2ef00989
  BLAKE2b-256  c6c2841f6e76cdcefcefbed5211824a04e1d7cdb5712a74fa6e8fdaca6cfeaf7

Hashes for torch-2.4.0-cp38-cp38-manylinux2014_aarch64.whl
  SHA256       a046491aaf96d1215e65e1fa85911ef2ded6d49ea34c8df4d0638879f2402eef
  MD5          499232e7160e571252c868e60c3506c1
  BLAKE2b-256  d30801a64df7b0f57f81828b27c39696cf78cd8524af8f59420719201d95649a

Hashes for torch-2.4.0-cp38-cp38-manylinux1_x86_64.whl
  SHA256       cc30457ea5489c62747d3306438af00c606b509d78822a88f804202ba63111ed
  MD5          608fc2147c41ebd2cc9f3b76fa79e3c6
  BLAKE2b-256  fc58f93bdce23c9ff568c3dfb5129db0c14e60f7c72ab4d1a6de8fedca6e3792
