A DirectML backend for hardware acceleration in PyTorch.
PyTorch with DirectML
DirectML acceleration for PyTorch is currently available for Public Preview. PyTorch with DirectML enables training and inference of complex machine learning models on a wide range of DirectX 12-compatible hardware.
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers, including all DirectX 12-capable GPUs from vendors such as AMD, Intel, NVIDIA, and Qualcomm.
More information about DirectML can be found in Introduction to DirectML.
PyTorch with DirectML is supported on both the latest versions of Windows 10 and the Windows Subsystem for Linux, and is available for download as a PyPI package. For more information about getting started, see GPU accelerated ML training (docs.microsoft.com).
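As a hedged getting-started sketch: the package name below is inferred from the wheel filename in the hash listing on this page, and the torch_directml device helper reflects the preview API, so both may differ between releases.

```shell
# Assumptions: the PyPI package name matches the published wheel filename,
# and the preview exposes a torch_directml module with a device() helper.
# Requires Python 3.9 on Windows 10 or WSL with a DirectX 12-capable GPU.
pip install experimental-torch-directml

# Smoke test: allocate a small tensor on the DirectML device.
python -c "import torch, torch_directml; print(torch.ones(2, 2, device=torch_directml.device()))"
```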
Samples
Refer to the PyTorch with DirectML Samples Repo for samples.
Feedback
We look forward to hearing from you!
- For TensorFlow with DirectML issues, bugs, and feedback; or for general DirectML issues and feedback, please file an issue or contact us directly at askdirectml@microsoft.com.
- For PyTorch with DirectML issues, bugs, and feedback; or for general DirectML issues and feedback, please file an issue or contact us directly at askdirectml@microsoft.com.
- For Windows ML issues, please file a GitHub issue at microsoft/Windows-Machine-Learning or contact us directly at askwindowsml@microsoft.com.
- For ONNX Runtime issues, please file an issue at microsoft/onnxruntime.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Workflow
Developers work on release branches day to day and reverse-integrate (RI) into the main branch as needed.
- The main branch builds against the nightly-built pytorch:
  conda install pytorch torchvision cpuonly -c pytorch-nightly
- Release branches build against the corresponding published pytorch:
  - Both release/1.13 and release/1.13+ build against pytorch 1.13.
  - The upcoming release/1.13.1 should build against pytorch 1.13.1.
Motivations for this pattern:
- Avoid pushing identical changes twice, into both release and main.
- Avoid being blocked by breaking changes from the pytorch master branch.
Examples of cutting branches:
- For a new release, a new branch for pytorch-1.13 (e.g. release/1.13+2) will be cut from the most recently released branch, release/1.13+, and developers will work on the new branch day to day until that release is published.
- For a new pytorch version 1.13.1, release/1.13.1 should be cut from main after all release workloads are RIed.
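As a hedged illustration of the branch-cut flow above (branch names mirror the examples; the real remote, pushes, and RI steps are omitted, and a throwaway repository stands in for the actual clone):

```shell
# Illustrative only: simulate cutting release/1.13.1 from main
# in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main                            # day-to-day work lands on main
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "release workloads RIed to main"
git checkout -q -b release/1.13.1                  # cut the release branch from main
git branch --list 'release/*'                      # lists the new branch
```

In the real repository the new branch would then be pushed, and daily work would move onto it until the release is published.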
Hashes for experimental_torch_directml-0.1.13.dev2012023133414-cp39-cp39-win_amd64.whl
Algorithm | Hash digest
---|---
SHA256 | a00dfd5952b4461eeff47144db0c761770e55f93200f890d5d6ac0ec6797b9dd
MD5 | 70e78959e3cf651dbc0ded1c22ef28c7
BLAKE2b-256 | 0d5832708084cb694911710c6df0271721a7bdef7c16495aa4245a51b10cd20d
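A small sketch of checking a download against a published digest with sha256sum's check mode (available on Linux/WSL; on Windows, `certutil -hashfile <file> SHA256` prints the digest for manual comparison). A stand-in file is hashed here; for the real wheel, substitute the filename and the SHA256 value listed above.

```shell
# Verify a file against an expected SHA-256 digest.
# A stand-in file replaces the wheel so the sketch is self-contained.
set -e
file=$(mktemp)
printf 'stand-in for the wheel' > "$file"
expected=$(sha256sum "$file" | cut -d' ' -f1)   # normally copied from the hash table
echo "$expected  $file" | sha256sum -c -        # reports OK when the digest matches
```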