
First-class interop between PyTorch and MLIR

Project description

The Torch-MLIR Project

The Torch-MLIR project aims to provide first-class compiler support from the PyTorch ecosystem to the MLIR ecosystem.

This project is participating in the LLVM Incubator process: as such, it is not part of any official LLVM release. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project is not yet endorsed as a component of LLVM.

PyTorch: An open source machine learning framework that accelerates the path from research prototyping to production deployment.

MLIR: The MLIR project is a novel approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain-specific compilers, and aid in connecting existing compilers together.

Torch-MLIR: Multiple vendors use MLIR as the middle layer, mapping from platform frameworks like PyTorch, JAX, and TensorFlow into MLIR and then progressively lowering down to their target hardware. We have seen half a dozen custom lowerings from PyTorch to MLIR. Having canonical lowerings from the PyTorch ecosystem to the MLIR ecosystem would provide much-needed relief to hardware vendors, letting them focus on their unique value rather than implementing yet another PyTorch frontend for MLIR. The goal is to be similar to current hardware vendors adding LLVM target support instead of each one also implementing Clang / a C++ frontend.


All the roads from PyTorch to Torch MLIR Dialect

We have a few paths to lower down to the Torch MLIR Dialect.

[Simplified architecture diagram]

  • TorchScript This is the most tested path down to Torch MLIR Dialect, and the PyTorch ecosystem is converging on using TorchScript IR as a lingua franca (see the sketch after this list).
  • LazyTensorCore Read more details here.
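
To make the TorchScript path concrete, the short sketch below scripts a trivial module with torch.jit.script and prints its TorchScript graph, which is the IR this lowering path starts from. The AddOne module is just an illustrative stand-in, not part of torch-mlir.

# Scripting a trivial module; the resulting TorchScript graph is the IR that
# the TorchScript lowering path consumes. AddOne is only an illustration.
import torch

class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1

scripted = torch.jit.script(AddOne())
print(scripted.graph)  # TorchScript graph IR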

Project Communication

  • #torch-mlir channel on the LLVM Discord - this is the most active communication channel
  • GitHub issues here
  • torch-mlir section of LLVM Discourse
  • Weekly meetings on Mondays 9AM PST. See here for more information.
  • Weekly op office hours on Thursdays 8:30-9:30AM PST. See here for more information.

Install torch-mlir snapshot

This installs a pre-built snapshot of torch-mlir for Python 3.7/3.8/3.9/3.10 on Linux and macOS.

python -m venv mlir_venv
source mlir_venv/bin/activate
# Some older pip installs may not be able to handle the recent PyTorch deps
python -m pip install --upgrade pip
pip install --pre torch-mlir torchvision -f https://llvm.github.io/torch-mlir/package-index/ --extra-index-url https://download.pytorch.org/whl/nightly/cpu
# This will install the corresponding torch and torchvision nightlies
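
As a quick smoke test of the install, the short Python sketch below compiles a trivial module. It assumes the torch_mlir.compile entry point and the "torch" output type exposed by these snapshot wheels; names may differ in other versions.

# Smoke test (assumes torch_mlir.compile and the "torch" output type from
# these snapshot wheels; adjust if the API differs in your version).
import torch
import torch_mlir

class Scale(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

module = torch_mlir.compile(Scale(), torch.ones(3), output_type="torch")
print(module)  # MLIR module in the Torch dialect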

Demos

TorchScript ResNet18

Standalone script to convert a PyTorch ResNet18 model to MLIR and run it on the CPU backend:

# Get the latest example if you haven't checked out the code
wget https://raw.githubusercontent.com/llvm/torch-mlir/main/examples/torchscript_resnet18.py

# Run ResNet18 as a standalone script (use examples/torchscript_resnet18.py if running from a checkout).
python torchscript_resnet18.py

load image from https://upload.wikimedia.org/wikipedia/commons/2/26/YellowLabradorLooking_new.jpg
Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /home/mlir/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth
100.0%
PyTorch prediction
[('Labrador retriever', 70.66319274902344), ('golden retriever', 4.956596374511719), ('Chesapeake Bay retriever', 4.195662975311279)]
torch-mlir prediction
[('Labrador retriever', 70.66320037841797), ('golden retriever', 4.956601619720459), ('Chesapeake Bay retriever', 4.195651531219482)]
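
For reference, the demo script boils down to roughly the following. This is a condensed, hedged sketch: the torch_mlir_e2e_test module path, the RefBackend names, and the "linalg-on-tensors" output type are assumptions tied to this snapshot and may change in other releases.

# Rough sketch of what examples/torchscript_resnet18.py does (module paths
# and output-type names are tied to this snapshot and may change).
import torch
import torchvision
import torch_mlir
from torch_mlir_e2e_test.linalg_on_tensors_backends import refbackend

resnet = torchvision.models.resnet18(pretrained=True).eval()
example = torch.ones(1, 3, 224, 224)

# Lower through TorchScript down to linalg-on-tensors.
module = torch_mlir.compile(resnet, example, output_type="linalg-on-tensors")

# Run on the reference CPU backend.
backend = refbackend.RefBackendLinalgOnTensorsBackend()
loaded = backend.load(backend.compile(module))
logits = loaded.forward(example.numpy())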

Lazy Tensor Core

View examples here.

Eager Mode

Eager mode with torch-mlir is a very experimental eager-mode backend for PyTorch. Effectively, it works by compiling operator by operator as the network is eagerly executed by PyTorch. It includes a fallback to conventional PyTorch if anything in the torch-mlir compilation process fails (e.g., an unsupported operator). A simple example can be found at eager_mode.py, and a ResNet18 example at eager_mode_resnet18.py.
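
To make the fallback idea concrete, here is a purely conceptual sketch. compile_single_op and run_op_with_fallback are hypothetical names for illustration and are not the torch-mlir eager-mode API; the stub below always triggers the PyTorch fallback.

# Conceptual sketch of the per-operator compile-with-fallback pattern.
# compile_single_op is a hypothetical stand-in for torch-mlir's per-op
# compilation; here it is a stub, so the fallback path is what runs.
import torch

def compile_single_op(op, args):
    raise NotImplementedError("per-op torch-mlir compilation not wired up in this sketch")

def run_op_with_fallback(op, *args):
    try:
        compiled = compile_single_op(op, args)
        return compiled(*args)
    except Exception:
        # Fall back to conventional PyTorch (e.g., unsupported operator).
        return op(*args)

print(run_op_with_fallback(torch.add, torch.ones(2), torch.ones(2)))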

Repository Layout

The project follows the conventions of typical MLIR-based projects:

  • include/torch-mlir and lib: C++ MLIR compiler dialects and passes.
  • test: test code.
  • tools: torch-mlir-opt and similar tools.
  • python: top-level directory for Python code.

Developers

If you would like to develop and build torch-mlir from source, please see the Development Notes.

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

  • torch_mlir-20221206.71-cp310-cp310-win_amd64.whl (22.4 MB): CPython 3.10, Windows x86-64
  • torch_mlir-20221206.71-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (219.6 MB): CPython 3.10, manylinux (glibc 2.17+) x86-64
  • torch_mlir-20221206.71-cp310-cp310-macosx_11_0_universal2.whl (179.9 MB): CPython 3.10, macOS 11.0+ universal2 (ARM64, x86-64)
  • torch_mlir-20221206.71-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (219.7 MB): CPython 3.7m, manylinux (glibc 2.17+) x86-64

File details

torch_mlir-20221206.71-cp310-cp310-win_amd64.whl
  SHA256: 8fc37ef28bf5126d0e0321ae2f4b667e76c3b743271e2ff6833828273ed9b96a
  MD5: 99f4a0d4b3ad0ae35500f8b218eef597
  BLAKE2b-256: 6a62b55bbbbf1351ce95d5f7b7b68ef0d196c83e908a43e1c6b657217954f4c1

torch_mlir-20221206.71-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: b1c6de93fa4d613d0bba6cbb36bfeaa9448b4a5606fff10eb57c71862f8207d8
  MD5: ec7dedd82e804581e2da475696cabd93
  BLAKE2b-256: 6f3b3224cbf0bf72b6bbdd850c2f1c172de56400897f30acc1c68bc414e9ab58

torch_mlir-20221206.71-cp310-cp310-macosx_11_0_universal2.whl
  SHA256: 9061786921a9e7429bdea7264dcc51f4159a74eebd568f86339e34a7b3545027
  MD5: 3aba30dace29e0a73846134de5a1460c
  BLAKE2b-256: c97a528a339884ee5f7f31e5e0b97839784de51a73cd062fea27a6d9f3da3438

torch_mlir-20221206.71-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
  SHA256: 84c5c9a4ff4dcc589408492b44cbfb0a18f03c80f6712a4db9facbf666e778e1
  MD5: 8c5f49ea7265156920896353c93537ab
  BLAKE2b-256: 997b0e4481936460c189812c130d82298a22a27c4f2dab5ad95f7568d2492dc1
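
To check a downloaded wheel against the SHA256 digests above, a minimal verification in Python (the filename below is the Windows wheel from this release):

# Compute the SHA256 of a downloaded wheel and compare it to the digest listed above.
import hashlib

path = "torch_mlir-20221206.71-cp310-cp310-win_amd64.whl"
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest)
# Expected: 8fc37ef28bf5126d0e0321ae2f4b667e76c3b743271e2ff6833828273ed9b96a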
