A DirectML backend for hardware acceleration in PyTorch.

Project description

PyTorch with DirectML

DirectML acceleration for PyTorch is currently available as a Public Preview. PyTorch with DirectML enables training and inference of complex machine learning models on a wide range of DirectX 12-compatible hardware.

DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.

More information about DirectML can be found in Introduction to DirectML.

PyTorch with DirectML is supported on the latest versions of both Windows 10 and the Windows Subsystem for Linux, and is available for download as a PyPI package. For more information about getting started, see GPU accelerated ML training (docs.microsoft.com).
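As a hedged illustration of the getting-started flow, the sketch below runs a tensor operation through the DirectML device. It assumes PyTorch and the preview package from this page are installed; torch_directml.device() is the entry point used by the torch-directml preview, and the code falls back to CPU so the sketch stays runnable where the package or a DirectX 12 GPU is unavailable.

```python
# Hedged sketch: dispatching a tensor op via DirectML. Assumes torch and the
# torch-directml preview wheel are installed; torch_directml.device() returns
# the default DirectML device in that preview. Falls back to CPU when the
# package is unavailable so the sketch still runs.
try:
    import torch
    import torch_directml

    device = torch_directml.device()      # default DirectML device
    x = torch.randn(4, 4, device=device)  # tensor allocated on the GPU
    y = (x @ x).cpu()                     # matmul dispatched via DirectML
    backend = "directml"
except ImportError:
    backend = "cpu"  # preview package not installed; nothing to accelerate

print(f"running on: {backend}")
```

Note that the fallback branch is only there to keep the sketch self-contained; on a machine without the preview package it exercises nothing DirectML-specific.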

Samples

Refer to the PyTorch with DirectML Samples Repo for samples.

Feedback

We look forward to hearing from you!

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Workflow

Developers work on release branches day to day and reverse-integrate (RI) changes into the main branch as needed.

Motivations for this pattern:

  1. Avoid pushing identical changes twice, into both the release branch and main.
  2. Avoid being blocked by broken changes from the PyTorch master branch.

Examples of cutting branches:

  1. For a new release of pytorch-1.13, a new branch (e.g. release/1.13+2) is cut from the last released branch, release/1.13+, and developers work on that branch until the release is published.

  2. For a new PyTorch version such as 1.13.1, release/1.13.1 is cut from main after all release work has been reverse-integrated.
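The two branch-cutting examples above can be sketched with ordinary git commands. The snippet below replays them in a throwaway repository so the resulting branch topology is easy to see; the repository, commit, and user identity are synthetic, and the script simply skips itself when git is not on the PATH.

```python
# Hedged sketch of the branch-cutting examples above, replayed in a throwaway
# git repository. Branch names mirror the text; everything else is synthetic.
import shutil
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given repo, raising on failure."""
    subprocess.run(["git", *args], cwd=cwd, check=True,
                   capture_output=True, text=True)

branches = []
if shutil.which("git"):
    repo = tempfile.mkdtemp()
    git("init", "-q", cwd=repo)
    git("-c", "user.email=dev@example.com", "-c", "user.name=dev",
        "commit", "-q", "--allow-empty", "-m", "init", cwd=repo)
    git("branch", "-M", "main", cwd=repo)

    git("branch", "release/1.13+", cwd=repo)  # last released branch for 1.13
    # 1. Cut the next release branch from the last released one:
    git("branch", "release/1.13+2", "release/1.13+", cwd=repo)
    # 2. After release work is reverse-integrated into main, cut the branch
    #    for the next PyTorch version from main:
    git("branch", "release/1.13.1", "main", cwd=repo)

    out = subprocess.run(["git", "branch", "--format=%(refname:short)"],
                         cwd=repo, capture_output=True, text=True).stdout
    branches = sorted(out.split())

print(branches)
```

Both cuts are plain `git branch <new> <start-point>` operations; the only decision the workflow adds is which start-point to use (the last released branch for a patch-level re-release, main for a new PyTorch version).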

(workflow diagram)

External Links

PyTorch homepage

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

experimental_torch_directml-0.1.13.dev23122022014639-cp38-cp38-win_amd64.whl (19.9 MB)

Uploaded CPython 3.8 Windows x86-64

File details

Details for the file experimental_torch_directml-0.1.13.dev23122022014639-cp38-cp38-win_amd64.whl.

File metadata

File hashes

Hashes for experimental_torch_directml-0.1.13.dev23122022014639-cp38-cp38-win_amd64.whl
Algorithm Hash digest
SHA256 5c62c6dd31772ef5f5ddd0113b59b7974953ed7cc98b9d05dda7d6eeb9b619c3
MD5 2199525280746b21401322bca460d4ae
BLAKE2b-256 48d4c3ed2ccbcc6db4ffc421d2d22c9df754607755637d0374d83b574a0dea4a

See more details on using hashes here.
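To make the hash table above actionable, here is a small hedged sketch that checks a downloaded file against a SHA256 digest. The helper name and the local path in the commented example are illustrative; the expected digest is the one listed above.

```python
# Hedged sketch: verifying a downloaded wheel against the SHA256 digest listed
# above. The helper name and local path are illustrative.
import hashlib

EXPECTED_SHA256 = "5c62c6dd31772ef5f5ddd0113b59b7974953ed7cc98b9d05dda7d6eeb9b619c3"

def sha256_of(path):
    """Stream a file through SHA256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# After downloading the wheel, compare digests before installing:
# assert sha256_of(
#     "experimental_torch_directml-0.1.13.dev23122022014639-cp38-cp38-win_amd64.whl"
# ) == EXPECTED_SHA256
```

Streaming in 1 MiB chunks keeps memory flat regardless of file size, which matters for multi-megabyte wheels.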
