
Tensors and Dynamic neural networks in Python with strong GPU acceleration

Project description

PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
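
A minimal sketch of both features together (a hypothetical snippet; a CUDA device may or may not be present):

import numpy as np
import torch

# Bridge a NumPy array into torch (the tensor shares the array's memory),
# then move it to the GPU when one is available.
a = np.random.randn(3, 3)
t = torch.from_numpy(a)
if torch.cuda.is_available():
    t = t.to("cuda")
print((t @ t.T).sum())  # ordinary math, executed eagerly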

Our trunk health (Continuous Integration signals) can be found at hud.pytorch.org.

More About PyTorch

Learn the basics of PyTorch

At a granular level, PyTorch is a library that consists of the following components:

  • torch: a Tensor library like NumPy, with strong GPU support
  • torch.autograd: a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
  • torch.jit: a compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code
  • torch.nn: a neural networks library deeply integrated with autograd, designed for maximum flexibility
  • torch.multiprocessing: Python multiprocessing, but with magical memory sharing of torch Tensors across processes; useful for data loading and Hogwild training
  • torch.utils: DataLoader and other utility functions for convenience

Usually, PyTorch is used either as:

  • A replacement for NumPy to use the power of GPUs.
  • A deep learning research platform that provides maximum flexibility and speed.

Elaborating Further:

A GPU-Ready Tensor Library

If you use NumPy, then you have used Tensors (a.k.a. ndarray).

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs, such as slicing, indexing, mathematical operations, linear algebra, and reductions. And they are fast!
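
For illustration, a small sample of those routines (a hypothetical snippet; all calls are standard torch APIs):

import torch

x = torch.arange(12, dtype=torch.float32).reshape(3, 4)
row = x[1]                 # indexing
block = x[:2, 1:3]         # slicing
total = x.sum(dim=0)       # reduction
gram = x @ x.T             # linear algebra: matrix multiply
q, r = torch.linalg.qr(x)  # a decomposition from torch.linalg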

Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world. One has to build a neural network and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as torch-autograd, autograd, Chainer, etc.

While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research.
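
A small sketch of what "dynamic" means in practice (hypothetical example): the loop length below is decided at run time, and autograd records whichever operations actually executed.

import torch

x = torch.randn(4, requires_grad=True)
y = x
steps = int(torch.randint(1, 4, (1,)))  # data-dependent control flow
for _ in range(steps):
    y = torch.tanh(y)
y.sum().backward()  # gradients follow the path taken on this run
print(x.grad)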

Python First

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use NumPy / SciPy / scikit-learn etc. You can write your new neural network layers in Python itself, using your favorite libraries and use packages such as Cython and Numba. Our goal is to not reinvent the wheel where appropriate.
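
As a sketch of what "Python first" means, here is a hypothetical layer written entirely in Python (ScaledReLU is not part of torch.nn):

import torch
import torch.nn as nn

class ScaledReLU(nn.Module):
    def __init__(self, scale: float = 2.0):
        super().__init__()
        self.scale = scale

    def forward(self, x):
        # Plain Python and torch ops; no separate graph-definition language.
        return torch.relu(x) * self.scale

out = ScaledReLU()(torch.randn(2, 3))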

Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. The stack trace points to exactly where your code was defined. We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
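
A tiny illustration of this imperative style (hypothetical values):

import torch

x = torch.randn(3)
print(x)                   # already a concrete value; nothing is deferred
y = x * 2                  # executes on this line, not in a later "session"
assert y.shape == x.shape  # and can be inspected immediately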

Fast and Lean

PyTorch has minimal framework overhead. We integrate acceleration libraries such as Intel MKL and NVIDIA (cuDNN, NCCL) to maximize speed. At the core, its CPU and GPU Tensor and neural network backends are mature and have been tested for years.

Hence, PyTorch is quite fast — whether you run small or large neural networks.

The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives. We've written custom memory allocators for the GPU to make sure that your deep learning models are maximally memory efficient. This enables you to train bigger deep learning models than before.
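
If you want to watch what the allocator is doing, a hedged sketch (assuming a CUDA device is present) is to query its statistics:

import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    print(torch.cuda.memory_allocated())  # bytes currently held by tensors
    print(torch.cuda.memory_reserved())   # bytes cached by the allocator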

Extensions Without Pain

Writing new neural network modules, or interfacing with PyTorch's Tensor API, is designed to be straightforward, with minimal abstractions.

You can write new neural network layers in Python using the torch API or your favorite NumPy-based libraries such as SciPy.

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and requires minimal boilerplate. No wrapper code needs to be written. You can see a tutorial here and an example here.
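
As a sketch of the Python-level route (a hypothetical example, not the tutorial's code), a custom autograd Function can drop into NumPy for its forward pass and supply the gradient by hand:

import numpy as np
import torch

class NumpyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Compute in NumPy, then return a torch Tensor.
        result = torch.from_numpy(np.exp(x.detach().numpy()))
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        (result,) = ctx.saved_tensors
        return grad_output * result  # d/dx exp(x) = exp(x)

x = torch.randn(3, requires_grad=True)
NumpyExp.apply(x).sum().backward()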

Installation

Binaries

Commands to install binaries via Conda or pip wheels are on our website: https://pytorch.org/get-started/locally/

NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided here, and the L4T container is published here.

They require JetPack 4.2 and above, and @dusty-nv and @ptrblck are maintaining them.

From Source

Prerequisites

If you are installing from source, you will need:

  • Python 3.10 or later
  • A compiler that fully supports C++17, such as clang or gcc (on Linux, gcc 9.4.0 or newer is required)
  • Visual Studio or Visual Studio Build Tool (Windows only)

* PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise, Professional, or Community Editions. You can also install the build tools from https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools do not come with Visual Studio Code by default.

An example of environment setup is shown below:

  • Linux:
$ source <CONDA_INSTALL_DIR>/bin/activate
$ conda create -y -n <CONDA_NAME>
$ conda activate <CONDA_NAME>
  • Windows:
$ call <CONDA_INSTALL_DIR>\Scripts\activate.bat
$ conda create -y -n <CONDA_NAME>
$ conda activate <CONDA_NAME>
$ call "C:\Program Files\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build\vcvarsall.bat" x64

A conda environment is not required. You can also build PyTorch in a standard virtual environment, e.g., one created with tools like uv, provided your system has installed all the necessary dependencies that are unavailable as pip packages (e.g., CUDA, MKL).

NVIDIA CUDA Support

If you want to compile with CUDA support, select a supported version of CUDA from our support matrix, then install the CUDA toolkit and a compatible version of cuDNN.

Note: Refer to the cuDNN Support Matrix for the cuDNN versions compatible with the various supported CUDA versions, CUDA drivers, and NVIDIA hardware.

If you want to disable CUDA support, export the environment variable USE_CUDA=0. Other potentially useful environment variables may be found in setup.py. If CUDA is installed in a non-standard location, set PATH so that the nvcc you want to use can be found (e.g., export PATH=/usr/local/cuda-12.8/bin:$PATH).
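
After a build finishes, a quick hedged sanity check of what was compiled in:

import torch

print(torch.version.cuda)         # None in a build with USE_CUDA=0
print(torch.cuda.is_available())  # True only with CUDA built in and a GPU present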

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are available here.

AMD ROCm Support

If you want to compile with ROCm support, install:

  • AMD ROCm 4.0 or above

ROCm is currently supported only on Linux systems.

By default, the build system expects ROCm to be installed in /opt/rocm. If ROCm is installed in a different directory, the ROCM_PATH environment variable must be set to the ROCm installation directory. The build system automatically detects the AMD GPU architecture; optionally, it can be explicitly set with the PYTORCH_ROCM_ARCH environment variable.

If you want to disable ROCm support, export the environment variable USE_ROCM=0. Other potentially useful environment variables may be found in setup.py.
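
As with CUDA, a hedged way to verify what a finished build contains:

import torch

print(torch.version.hip)          # None in a build with USE_ROCM=0
print(torch.cuda.is_available())  # ROCm builds reuse the torch.cuda namespace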

Intel GPU Support

If you want to compile with Intel GPU support, follow these instructions.

If you want to disable Intel GPU support, export the environment variable USE_XPU=0. Other potentially useful environment variables may be found in setup.py.
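
A hedged check for a finished build (torch.xpu is available in recent PyTorch releases):

import torch

print(torch.xpu.is_available())   # False in a build with USE_XPU=0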

Get the PyTorch Source

git clone https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive

Install Dependencies

Common

# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section above
pip install --group dev

On Linux

pip install mkl-static mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
# Magma installation: run with an active conda environment; specify the CUDA version to install
.ci/docker/common/install_magma_conda.sh 12.4

# (optional) If using torch.compile with inductor/triton, install the matching version of triton
# Run from the pytorch directory after cloning
# For Intel GPU support, explicitly `export USE_XPU=1` before running the command.
make triton

On macOS

# Add this package on Intel x86 processor machines only
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv

On Windows

pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.51

Install PyTorch

On Linux

If you're compiling for AMD ROCm then first run this command:

# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py

Install PyTorch

# the CMake prefix for conda environment
export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
python -m pip install --no-build-isolation -v -e .

# the CMake prefix for a non-conda environment, e.g., a Python venv
# run the following after activating the venv
export CMAKE_PREFIX_PATH="${VIRTUAL_ENV}:${CMAKE_PREFIX_PATH}"

On macOS

python -m pip install --no-build-isolation -v -e .

On Windows

If you want to build legacy Python code, please refer to Building on legacy code and CUDA.

CPU-only builds

In this mode PyTorch computations will run on your CPU, not your GPU.

python -m pip install --no-build-isolation -v -e .

Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the build environment by tweaking CMAKE_INCLUDE_PATH and LIB. The instructions here are an example of setting up both MKL and Intel OpenMP. Without these configurations for CMake, the Microsoft Visual C OpenMP runtime (vcomp) will be used.

CUDA based build

In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.

NVTX is needed to build PyTorch with CUDA. NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To install it onto an already installed CUDA, run the CUDA installation once again and check the corresponding checkbox. Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If ninja.exe is detected in PATH, Ninja will be used as the default generator; otherwise, VS 2017 / 2019 will be used.
If Ninja is selected as the generator, the latest MSVC will be selected as the underlying toolchain.

Additional libraries such as Magma, oneDNN (a.k.a. MKLDNN or DNNL), and Sccache are often needed. Please refer to the installation-helper to install them.

You can refer to the build_pytorch.bat script for some other environment variable configurations.


:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: else CMake would throw an error as `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python -m pip install --no-build-isolation -v -e .

Intel GPU builds

In this mode PyTorch with Intel GPU support will be built.

Please make sure the common prerequisites as well as the prerequisites for Intel GPU are properly installed and the environment variables are configured prior to starting the build. For build tool support, Visual Studio 2022 is required.

Then PyTorch can be built with the command:

:: CMD Commands:
:: Set the CMAKE_PREFIX_PATH to help find corresponding packages
:: %CONDA_PREFIX% only works after `conda activate custom_env`

if defined CMAKE_PREFIX_PATH (
    set "CMAKE_PREFIX_PATH=%CONDA_PREFIX%\Library;%CMAKE_PREFIX_PATH%"
) else (
    set "CMAKE_PREFIX_PATH=%CONDA_PREFIX%\Library"
)

python -m pip install --no-build-isolation -v -e .

Adjust Build Options (Optional)

You can optionally adjust the configuration of CMake variables (without building first) by doing the following. For example, adjusting the pre-detected directories for cuDNN or BLAS can be done this way.

On Linux

export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
CMAKE_ONLY=1 python setup.py build
ccmake build  # or cmake-gui build

On macOS

export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
MACOSX_DEPLOYMENT_TARGET=11.0 CMAKE_ONLY=1 python setup.py build
ccmake build  # or cmake-gui build

Docker Image

Using pre-built images

You can also pull a pre-built Docker image from Docker Hub and run it with Docker v19.03+:

docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest

Please note that PyTorch uses shared memory to share data between processes, so if torch.multiprocessing is used (e.g., for multithreaded data loaders), the default shared memory segment size the container runs with is not enough; increase the shared memory size with either the --ipc=host or the --shm-size command-line option to nvidia-docker run.
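
For reference, this is the kind of code the note above describes (a hypothetical sketch): each DataLoader worker is a separate process, and batches travel back through shared memory.

import torch
from torch.utils.data import DataLoader, TensorDataset

ds = TensorDataset(torch.randn(128, 8))
loader = DataLoader(ds, batch_size=16, num_workers=2)  # multi-process loading
for (batch,) in loader:
    pass  # each batch crossed process boundaries via shared memory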

Building the image yourself

NOTE: Must be built with a Docker version > 18.06.

The Dockerfile is supplied to build images with CUDA 11.1 support and cuDNN v8. You can pass the PYTHON_VERSION=x.y make variable to specify which Python version is to be used by Miniconda, or leave it unset to use the default.

make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch

You can also pass the CMAKE_VARS="..." environment variable to specify additional CMake variables to be passed to CMake during the build. See setup.py for the list of available variables.

make -f docker.Makefile

Building the Documentation

To build documentation in various formats, you will need Sphinx and the pytorch_sphinx_theme2.

Before you build the documentation locally, ensure torch is installed in your environment. For small fixes, you can install the nightly version as described in Getting Started.

For more complex fixes, such as adding a new module and docstrings for the new module, you might need to install torch from source. See Docstring Guidelines for docstring conventions.

cd docs/
pip install -r requirements.txt
make html
make serve

Run make to get a list of all available output formats.

If you get a katex error, run npm install katex. If it persists, try npm install -g katex.

[!NOTE] If you installed nodejs with a different package manager (e.g., conda) then npm will probably install a version of katex that is not compatible with your version of nodejs and doc builds will fail. A combination of versions that is known to work is node@6.13.1 and katex@0.13.18. To install the latter with npm you can run npm install -g katex@0.13.18

[!NOTE] If you see a numpy incompatibility error, run:

pip install 'numpy<2'

When you make changes to the dependencies run by CI, edit the .ci/docker/requirements-docs.txt file.

Building a PDF

To compile a PDF of all PyTorch documentation, ensure you have texlive and LaTeX installed. On macOS, you can install them using:

brew install --cask mactex

To create the PDF:

  1. Run:

    make latexpdf
    

    This will generate the necessary files in the build/latex directory.

  2. Navigate to this directory and execute:

    make LATEXOPTS="-interaction=nonstopmode"
    

    This will produce a pytorch.pdf with the desired content. Run this command one more time so that it generates the correct table of contents and index.

[!NOTE] To view the Table of Contents, switch to the Table of Contents view in your PDF viewer.

Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found on our website.

Getting Started

Three pointers to get you started:

Resources

Communication

Releases and Contributing

Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.

To learn more about making a contribution to PyTorch, please see our Contribution page. For more information about PyTorch releases, see the Release page.

The Team

PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

PyTorch is currently maintained by Soumith Chintala, Gregory Chanan, Dmytro Dzhulgakov, Edward Yang, Alban Desmaison, Piotr Bialecki and Nikita Shulga with major contributions coming from hundreds of talented individuals in various forms and means. A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary Devito.

Note: This project is unrelated to hughperkins/pytorch with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.

License

PyTorch has a BSD-style license, as found in the LICENSE file.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

torch-2.10.0-cp314-cp314t-win_amd64.whl (114.0 MB)

Uploaded CPython 3.14t, Windows x86-64

torch-2.10.0-cp314-cp314t-manylinux_2_28_x86_64.whl (915.6 MB)

Uploaded CPython 3.14t, manylinux: glibc 2.28+ x86-64

torch-2.10.0-cp314-cp314t-manylinux_2_28_aarch64.whl (146.0 MB)

Uploaded CPython 3.14t, manylinux: glibc 2.28+ ARM64

torch-2.10.0-cp314-cp314t-macosx_14_0_arm64.whl (79.9 MB)

Uploaded CPython 3.14t, macOS 14.0+ ARM64

torch-2.10.0-cp314-cp314-win_amd64.whl (113.8 MB)

Uploaded CPython 3.14, Windows x86-64

torch-2.10.0-cp314-cp314-manylinux_2_28_x86_64.whl (915.7 MB)

Uploaded CPython 3.14, manylinux: glibc 2.28+ x86-64

torch-2.10.0-cp314-cp314-manylinux_2_28_aarch64.whl (146.0 MB)

Uploaded CPython 3.14, manylinux: glibc 2.28+ ARM64

torch-2.10.0-cp314-cp314-macosx_14_0_arm64.whl (79.5 MB)

Uploaded CPython 3.14, macOS 14.0+ ARM64

torch-2.10.0-cp313-none-macosx_11_0_arm64.whl (79.5 MB)

Uploaded CPython 3.13, macOS 11.0+ ARM64

torch-2.10.0-cp313-cp313t-win_amd64.whl (114.0 MB)

Uploaded CPython 3.13t, Windows x86-64

torch-2.10.0-cp313-cp313t-manylinux_2_28_x86_64.whl (915.6 MB)

Uploaded CPython 3.13t, manylinux: glibc 2.28+ x86-64

torch-2.10.0-cp313-cp313t-manylinux_2_28_aarch64.whl (146.0 MB)

Uploaded CPython 3.13t, manylinux: glibc 2.28+ ARM64

torch-2.10.0-cp313-cp313t-macosx_14_0_arm64.whl (79.9 MB)

Uploaded CPython 3.13t, macOS 14.0+ ARM64

torch-2.10.0-cp313-cp313-win_amd64.whl (113.8 MB)

Uploaded CPython 3.13, Windows x86-64

torch-2.10.0-cp313-cp313-manylinux_2_28_x86_64.whl (915.7 MB)

Uploaded CPython 3.13, manylinux: glibc 2.28+ x86-64

torch-2.10.0-cp313-cp313-manylinux_2_28_aarch64.whl (146.0 MB)

Uploaded CPython 3.13, manylinux: glibc 2.28+ ARM64

torch-2.10.0-cp312-none-macosx_11_0_arm64.whl (79.5 MB)

Uploaded CPython 3.12, macOS 11.0+ ARM64

torch-2.10.0-cp312-cp312-win_amd64.whl (113.8 MB)

Uploaded CPython 3.12, Windows x86-64

torch-2.10.0-cp312-cp312-manylinux_2_28_x86_64.whl (915.7 MB)

Uploaded CPython 3.12, manylinux: glibc 2.28+ x86-64

torch-2.10.0-cp312-cp312-manylinux_2_28_aarch64.whl (146.0 MB)

Uploaded CPython 3.12, manylinux: glibc 2.28+ ARM64

torch-2.10.0-cp311-none-macosx_11_0_arm64.whl (79.4 MB)

Uploaded CPython 3.11, macOS 11.0+ ARM64

torch-2.10.0-cp311-cp311-win_amd64.whl (113.7 MB)

Uploaded CPython 3.11, Windows x86-64

torch-2.10.0-cp311-cp311-manylinux_2_28_x86_64.whl (915.6 MB)

Uploaded CPython 3.11, manylinux: glibc 2.28+ x86-64

torch-2.10.0-cp311-cp311-manylinux_2_28_aarch64.whl (146.0 MB)

Uploaded CPython 3.11, manylinux: glibc 2.28+ ARM64

torch-2.10.0-cp310-none-macosx_11_0_arm64.whl (79.4 MB)

Uploaded CPython 3.10, macOS 11.0+ ARM64

torch-2.10.0-cp310-cp310-win_amd64.whl (113.7 MB)

Uploaded CPython 3.10, Windows x86-64

torch-2.10.0-cp310-cp310-manylinux_2_28_x86_64.whl (915.6 MB)

Uploaded CPython 3.10, manylinux: glibc 2.28+ x86-64

torch-2.10.0-cp310-cp310-manylinux_2_28_aarch64.whl (146.0 MB)

Uploaded CPython 3.10, manylinux: glibc 2.28+ ARM64

torch-2.10.0-1-cp313-none-macosx_11_0_arm64.whl (79.5 MB)

Uploaded CPython 3.13, macOS 11.0+ ARM64

torch-2.10.0-1-cp312-none-macosx_11_0_arm64.whl (79.5 MB)

Uploaded CPython 3.12, macOS 11.0+ ARM64

torch-2.10.0-1-cp311-none-macosx_11_0_arm64.whl (79.4 MB)

Uploaded CPython 3.11, macOS 11.0+ ARM64

torch-2.10.0-1-cp310-none-macosx_11_0_arm64.whl (79.4 MB)

Uploaded CPython 3.10, macOS 11.0+ ARM64

File details

Details for the file torch-2.10.0-cp314-cp314t-win_amd64.whl.

File metadata

  • Download URL: torch-2.10.0-cp314-cp314t-win_amd64.whl
  • Upload date:
  • Size: 114.0 MB
  • Tags: CPython 3.14t, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for torch-2.10.0-cp314-cp314t-win_amd64.whl
Algorithm Hash digest
SHA256 71283a373f0ee2c89e0f0d5f446039bdabe8dbc3c9ccf35f0f784908b0acd185
MD5 2b64c885c686174b4d607081d0dc1643
BLAKE2b-256 664d35352043ee0eaffdeff154fad67cd4a31dbed7ff8e3be1cc4549717d6d51

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp314-cp314t-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp314-cp314t-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 d8f5912ba938233f86361e891789595ff35ca4b4e2ac8fe3670895e5976731d6
MD5 7fd7e406594de47fd550d3ff606c44d5
BLAKE2b-256 1db95f6f9d9e859fc3235f60578fa64f52c9c6e9b4327f0fe0defb6de5c0de31

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp314-cp314t-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp314-cp314t-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 716b01a176c2a5659c98f6b01bf868244abdd896526f1c692712ab36dbaf9b63
MD5 31bf79cbe868d4c8f4733a44dae9092f
BLAKE2b-256 e2651a05346b418ea8ccd10360eef4b3e0ce688fba544e76edec26913a8d0ee0

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp314-cp314t-macosx_14_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp314-cp314t-macosx_14_0_arm64.whl
Algorithm Hash digest
SHA256 f5ab4ba32383061be0fb74bda772d470140a12c1c3b58a0cfbf3dae94d164c28
MD5 752160bb9e7e36089d72c1df17ab4a6a
BLAKE2b-256 d89471994e7d0d5238393df9732fdab607e37e2b56d26a746cb59fdb415f8966

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp314-cp314-win_amd64.whl.

File metadata

  • Download URL: torch-2.10.0-cp314-cp314-win_amd64.whl
  • Upload date:
  • Size: 113.8 MB
  • Tags: CPython 3.14, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for torch-2.10.0-cp314-cp314-win_amd64.whl
Algorithm Hash digest
SHA256 6528f13d2a8593a1a412ea07a99812495bec07e9224c28b2a25c0a30c7da025c
MD5 814fad265b4c2b52d4e2f5459aa48aa9
BLAKE2b-256 5697078a007208f8056d88ae43198833469e61a0a355abc0b070edd2c085eb9a

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp314-cp314-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp314-cp314-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 682497e16bdfa6efeec8cde66531bc8d1fbbbb4d8788ec6173c089ed3cc2bfe5
MD5 a84edd262d5ed1d638c71a5756b5be8e
BLAKE2b-256 5ecd4b95ef7f293b927c283db0b136c42be91c8ec6845c44de0238c8c23bdc80

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp314-cp314-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp314-cp314-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 233aed0659a2503b831d8a67e9da66a62c996204c0bba4f4c442ccc0c68a3f60
MD5 4f70ae5de0357d73469a11ca45f46420
BLAKE2b-256 692b51e663ff190c9d16d4a8271203b71bc73a16aa7619b9f271a69b9d4a936b

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp314-cp314-macosx_14_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp314-cp314-macosx_14_0_arm64.whl
Algorithm Hash digest
SHA256 bf0d9ff448b0218e0433aeb198805192346c4fd659c852370d5cc245f602a06a
MD5 958a91072e6e43fd4f91b8477c87c959
BLAKE2b-256 4f93716b5ac0155f1be70ed81bacc21269c3ece8dba0c249b9994094110bfc51

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp313-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp313-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 cdf2a523d699b70d613243211ecaac14fe9c5df8a0b0a9c02add60fb2a413e0f
MD5 b6871d043975b78ae2410d72bb027b02
BLAKE2b-256 0e13e76b4d9c160e89fff48bf16b449ea324bda84745d2ab30294c37c2434c0d

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp313-cp313t-win_amd64.whl.

File metadata

  • Download URL: torch-2.10.0-cp313-cp313t-win_amd64.whl
  • Upload date:
  • Size: 114.0 MB
  • Tags: CPython 3.13t, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for torch-2.10.0-cp313-cp313t-win_amd64.whl
Algorithm Hash digest
SHA256 ff43db38af76fda183156153983c9a096fc4c78d0cd1e07b14a2314c7f01c2c8
MD5 1d146378e2e98479b4be630fa66b0f3e
BLAKE2b-256 36530197f868c75f1050b199fe58f9bf3bf3aecac9b4e85cc9c964383d745403

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp313-cp313t-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp313-cp313t-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 6021db85958db2f07ec94e1bc77212721ba4920c12a18dc552d2ae36a3eb163f
MD5 7c7d15ae163bcd5cc4c0121061e17256
BLAKE2b-256 54fdb207d1c525cb570ef47f3e9f836b154685011fce11a2f444ba8a4084d042

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp313-cp313t-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp313-cp313t-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 aae1b29cd68e50a9397f5ee897b9c24742e9e306f88a807a27d617f07adb3bd8
MD5 20270594720b6ff296d7b5c4416d4795
BLAKE2b-256 d81421fbce63bc452381ba5f74a2c0a959fdf5ad5803ccc0c654e752e0dbe91a

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp313-cp313t-macosx_14_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp313-cp313t-macosx_14_0_arm64.whl
Algorithm Hash digest
SHA256 3202429f58309b9fa96a614885eace4b7995729f44beb54d3e4a47773649d382
MD5 b665815d2f95262dc8b48b151c2e07e8
BLAKE2b-256 1a0b39929b148f4824bc3ad6f9f72a29d4ad865bcf7ebfc2fa67584773e083d2

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp313-cp313-win_amd64.whl.

File metadata

  • Download URL: torch-2.10.0-cp313-cp313-win_amd64.whl
  • Upload date:
  • Size: 113.8 MB
  • Tags: CPython 3.13, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for torch-2.10.0-cp313-cp313-win_amd64.whl
Algorithm Hash digest
SHA256 c2ee399c644dc92ef7bc0d4f7e74b5360c37cdbe7c5ba11318dda49ffac2bc57
MD5 78bd885223a723c754abc54c3bfd6344
BLAKE2b-256 6a16502fb1b41e6d868e8deb5b0e3ae926bbb36dab8ceb0d1b769b266ad7b0c3

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp313-cp313-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp313-cp313-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 6b71486353fce0f9714ca0c9ef1c850a2ae766b409808acd58e9678a3edb7738
MD5 47cf2916b9abc78562d86ebe03bf4bbc
BLAKE2b-256 98fb5160261aeb5e1ee12ee95fe599d0541f7c976c3701d607d8fc29e623229f

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp313-cp313-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp313-cp313-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 5c4d217b14741e40776dd7074d9006fd28b8a97ef5654db959d8635b2fe5f29b
MD5 8e40aaf2bf2c602803692561fffa58cb
BLAKE2b-256 c96ff2e91e34e3fcba2e3fc8d8f74e7d6c22e74e480bbd1db7bc8900fdf3e95c

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp312-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp312-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 6d3707a61863d1c4d6ebba7be4ca320f42b869ee657e9b2c21c736bf17000294
MD5 ca6997a7bdb0880f4b431c00931b6c12
BLAKE2b-256 c95cdee910b87c4d5c0fcb41b50839ae04df87c1cfc663cf1b5fca7ea565eeaa

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp312-cp312-win_amd64.whl.

File metadata

  • Download URL: torch-2.10.0-cp312-cp312-win_amd64.whl
  • Upload date:
  • Size: 113.8 MB
  • Tags: CPython 3.12, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for torch-2.10.0-cp312-cp312-win_amd64.whl
Algorithm Hash digest
SHA256 2c66c61f44c5f903046cc696d088e21062644cbe541c7f1c4eaae88b2ad23547
MD5 b917f3e25ebd73098693299a3a5785d6
BLAKE2b-256 6e01624c4324ca01f66ae4c7cd1b74eb16fb52596dce66dbe51eff95ef9e7a4c

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp312-cp312-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp312-cp312-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 787124e7db3b379d4f1ed54dd12ae7c741c16a4d29b49c0226a89bea50923ffb
MD5 c9914eaf754352fb9813c983cddbfced
BLAKE2b-256 238e3c74db5e53bff7ed9e34c8123e6a8bfef718b2450c35eefab85bb4a7e270

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp312-cp312-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp312-cp312-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 5fd4117d89ffd47e3dcc71e71a22efac24828ad781c7e46aaaf56bf7f2796acf
MD5 8847330583e358551be0655b7af7fc52
BLAKE2b-256 ccaf758e242e9102e9988969b5e621d41f36b8f258bb4a099109b7a4b4b50ea4

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp311-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp311-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 b7bd80f3477b830dd166c707c5b0b82a898e7b16f59a7d9d42778dd058272e8b
MD5 30b0321d8f940f2911d14fa3817c5aa6
BLAKE2b-256 61d815b9d9d3a6b0c01b883787bd056acbe5cc321090d4b216d3ea89a8fcfdf3

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp311-cp311-win_amd64.whl.

File metadata

  • Download URL: torch-2.10.0-cp311-cp311-win_amd64.whl
  • Upload date:
  • Size: 113.7 MB
  • Tags: CPython 3.11, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for torch-2.10.0-cp311-cp311-win_amd64.whl
Algorithm Hash digest
SHA256 29b7009dba4b7a1c960260fc8ac85022c784250af43af9fb0ebafc9883782ebd
MD5 6117a87df52efb3fbd13780c7c7d4b2e
BLAKE2b-256 6f3dc87b33c5f260a2a8ad68da7147e105f05868c281c63d65ed85aa4da98c66

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp311-cp311-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp311-cp311-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 a2f9edd8dbc99f62bc4dfb78af7bf89499bca3d753423ac1b4e06592e467b763
MD5 21353867c75a5c97d3004cda02dbf16f
BLAKE2b-256 ae30a3a2120621bf9c17779b169fc17e3dc29b230c29d0f8222f499f5e159aa8

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp311-cp311-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp311-cp311-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 3282d9febd1e4e476630a099692b44fdc214ee9bf8ee5377732d9d9dfe5712e4
MD5 bce2d85dd70286f4c6a9b5012c0fc35a
BLAKE2b-256 7889f5554b13ebd71e05c0b002f95148033e730d3f7067f67423026cc9c69410

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp310-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp310-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 35e407430795c8d3edb07a1d711c41cc1f9eaddc8b2f1cc0a165a6767a8fb73d
MD5 52ad9014df272ff769a1ecdce7d559bf
BLAKE2b-256 76bbd820f90e69cda6c8169b32a0c6a3ab7b17bf7990b8f2c680077c24a3c14c

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp310-cp310-win_amd64.whl.

File metadata

  • Download URL: torch-2.10.0-cp310-cp310-win_amd64.whl
  • Upload date:
  • Size: 113.7 MB
  • Tags: CPython 3.10, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.6

File hashes

Hashes for torch-2.10.0-cp310-cp310-win_amd64.whl
Algorithm Hash digest
SHA256 a4be6a2a190b32ff5c8002a0977a25ea60e64f7ba46b1be37093c141d9c49aeb
MD5 44772a47aa21826bc6144f7a6218e6fb
BLAKE2b-256 40b866bbe96f0d79be2b5c697b2e0b187ed792a15c6c4b8904613454651db848

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp310-cp310-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp310-cp310-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 aaf663927bcd490ae971469a624c322202a2a1e68936eb952535ca4cd3b90444
MD5 68425261c99b7378e770d657d29d8a5a
BLAKE2b-256 b5606662535354191e2d1555296045b63e4279e5a9dbad49acf55a5d38655a39

See more details on using hashes here.

File details

Details for the file torch-2.10.0-cp310-cp310-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-cp310-cp310-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 5276fa790a666ee8becaffff8acb711922252521b28fbce5db7db5cf9cb2026d
MD5 2986b63a081642b2bc6d362b36d592ab
BLAKE2b-256 0c1ac61f36cfd446170ec27b3a4984f072fd06dab6b5d7ce27e11adb35d6c838

See more details on using hashes here.

File details

Details for the file torch-2.10.0-1-cp313-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-1-cp313-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 7350f6652dfd761f11f9ecb590bfe95b573e2961f7a242eccb3c8e78348d26fe
MD5 3a73ab70ce6cceccc348f10051d30b47
BLAKE2b-256 a41baf5fccb50c341bd69dc016769503cb0857c1423fbe9343410dfeb65240f2

See more details on using hashes here.

File details

Details for the file torch-2.10.0-1-cp312-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-1-cp312-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 6abb224c2b6e9e27b592a1c0015c33a504b00a0e0938f1499f7f514e9b7bfb5c
MD5 e49dc837a9db1eb283446e551e220abb
BLAKE2b-256 c92f0b295dd8d199ef71e6f176f576473d645d41357b7b8aa978cc6b042575df

See more details on using hashes here.

File details

Details for the file torch-2.10.0-1-cp311-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-1-cp311-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 f699f31a236a677b3118bc0a3ef3d89c0c29b5ec0b20f4c4bf0b110378487464
MD5 97f5ced4c6281572b17b4fce0528469c
BLAKE2b-256 25d89e6b8e7df981a1e3ea3907fd5a74673e791da483e8c307f0b6ff012626d0

See more details on using hashes here.

File details

Details for the file torch-2.10.0-1-cp310-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for torch-2.10.0-1-cp310-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 c37fc46eedd9175f9c81814cc47308f1b42cfe4987e532d4b423d23852f2bf63
MD5 32c020a915519f48ff44238d2b84ecd5
BLAKE2b-256 e3ea304cf7afb744aa626fa9855245526484ee55aba610d9973a0521c552a843

See more details on using hashes here.
