
Tooling for ML in LLVM

Project description

Infrastructure for MLGO - a Machine Learning Guided Compiler Optimizations Framework.

MLGO is a framework for integrating ML techniques systematically in LLVM. It replaces human-crafted optimization heuristics in LLVM with machine learned models. The MLGO framework currently supports two optimizations:

  1. inlining-for-size (LLVM RFC);
  2. register-allocation-for-performance (LLVM RFC).

The compiler components are both available in the main LLVM repository. This repository contains the training infrastructure and related tools for MLGO.
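For reference, once a model is compiled into clang, the advisors can be selected per compilation via LLVM flags. A minimal sketch, assuming a clang built with the release-mode models embedded (the flag names come from the LLVM implementation; the source file is illustrative):

# Use the ML inlining advisor (inlining-for-size).
clang++ -O2 -mllvm -enable-ml-inliner=release -c foo.cpp

# Use the ML register-allocation eviction advisor.
clang++ -O2 -mllvm -regalloc-enable-advisor=release -c foo.cpp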

We currently use two ML algorithms to train policies: Policy Gradient and Evolution Strategies. At present, this repository only supports Policy Gradient training; the release of Evolution Strategies training is on our roadmap.

Check out this demo for an end-to-end demonstration of how to train your own inlining-for-size policy from scratch with Policy Gradient, or this demo for how to train your own regalloc-for-performance policy.

For more details about MLGO, please refer to our paper MLGO: a Machine Learning Guided Compiler Optimizations Framework.

For more details about how to contribute to the project, please refer to contributions.

Pretrained models

We occasionally release pretrained models that may be used as-is with LLVM. Models are released as GitHub releases and are named [task]-[major-version].[minor-version]. The versioning is semantic: the major version corresponds to breaking changes on the LLVM/compiler side, and the minor version corresponds to model updates that are independent of the compiler.

When building LLVM, there is a flag -DLLVM_INLINER_MODEL_PATH which you may set to the path to your inlining model. If the path is set to download, then cmake will download the most recent (compatible) model from GitHub. Example values for the flag:

# Model is in /tmp/model, i.e. there is a file /tmp/model/saved_model.pb along
# with the rest of the tensorflow saved_model files produced from training.
-DLLVM_INLINER_MODEL_PATH=/tmp/model

# Download the most recent compatible model
-DLLVM_INLINER_MODEL_PATH=download
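For context, here is a sketch of how the flag might fit into a full configure step, assuming a standard llvm-project checkout; the paths and remaining flags are illustrative, and TENSORFLOW_AOT_PATH points cmake at the TensorFlow pip package mentioned under Prerequisites below:

# Configure a clang build that embeds the downloaded inlining model.
cmake -G Ninja ../llvm-project/llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_ENABLE_PROJECTS=clang \
  -DTENSORFLOW_AOT_PATH="$(python3 -c 'import os, tensorflow; print(os.path.dirname(tensorflow.__file__))')" \
  -DLLVM_INLINER_MODEL_PATH=download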

Prerequisites

Currently, the assumptions for the system are:

  • Recent Ubuntu distro, e.g. 20.04
  • Python 3.8.x/3.9.x/3.10.x
  • for local training, which is currently the only supported mode, we recommend a high-performance workstation (e.g. 96 hardware threads).

Training assumes a clang build with ML 'development-mode'; please refer to the LLVM build documentation for how to enable it.

The model-training-specific prerequisites are:

Pipenv:

pip3 install pipenv

The actual dependencies:

pipenv sync --system

Note that the above command will only work from the root of the repository since it needs to have Pipfile.lock in the working directory at the time of execution.
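If you would rather not install into the system interpreter, pipenv can manage a dedicated virtualenv instead. A sketch using standard pipenv commands (this repository's instructions assume --system):

# Sync the locked dependencies into a pipenv-managed virtualenv, then enter it.
pipenv sync
pipenv shell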

If you plan on doing development work, make sure you grab the development and CI categories of packages as well:

pipenv sync --system --categories "dev-packages ci"

Optionally, to run tests (run_tests.sh), you also need:

sudo apt-get install virtualenv
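With virtualenv installed, the tests can then be run from the root of the repository, e.g.:

./run_tests.sh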

Note that the same TensorFlow package is also needed when building LLVM in 'release' mode.

Docs

An end-to-end demo using Fuchsia as a codebase from which we extract a corpus and train a model.

A how-to-add-a-feature guide, describing the extensibility model.


Download files

Download the file for your platform.
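Alternatively, the package can be installed directly from PyPI. A sketch pinning the exact version listed below (the project name is taken from the distribution file names):

pip3 install ml-compiler-opt==0.0.1.dev202311300007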

Source Distribution

ml-compiler-opt-0.0.1.dev202311300007.tar.gz (147.8 kB)

Built Distribution

ml_compiler_opt-0.0.1.dev202311300007-py3-none-any.whl

File details

Details for the file ml-compiler-opt-0.0.1.dev202311300007.tar.gz.

File hashes

Hashes for ml-compiler-opt-0.0.1.dev202311300007.tar.gz
Algorithm    Hash digest
SHA256       923dd60932895626f575cef369f8e50faabc9503d7e3e1a2b9f5fc2dd7771cda
MD5          934accaee33e7b5732f7574559a037a6
BLAKE2b-256  9e29d946c0f7a88894a500e3d6df1d1e644bcd8efccf53ee891241fd28e4bacf

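To verify a downloaded archive against the SHA256 digest above, standard coreutils suffice, e.g.:

# Compare the output against the SHA256 digest listed above.
sha256sum ml-compiler-opt-0.0.1.dev202311300007.tar.gz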

File details

Details for the file ml_compiler_opt-0.0.1.dev202311300007-py3-none-any.whl.

File hashes

Hashes for ml_compiler_opt-0.0.1.dev202311300007-py3-none-any.whl
Algorithm    Hash digest
SHA256       4fa8b2d1b3a56b7f8537bf26c5985b51655641f5f222ce8d785f88b779bd4b57
MD5          4b89cdd8fc7f9422e5b171d535b1e2b7
BLAKE2b-256  615e71051a884e073f69ca070282c916e3750d3523fcaaca1598a1ff0bd8986e

