ML SDK Model Converter

The ML SDK Model Converter is a command-line application that translates TOSA ML models into VGF files. A VGF file is a model file containing the SPIR-V™ modules and constants required to execute the model through the ML extensions for Vulkan®. The ML SDK Model Converter supports several different TOSA encodings as inputs:

  • TOSA FlatBuffers
  • TOSA MLIR bytecode
  • TOSA MLIR textual format

The ML SDK Model Converter can also produce TOSA FlatBuffers from its input, without performing any conversion.

You can also use the ML SDK Model Converter to check that all tensors specified in the input model are ranked and have fixed, non-dynamic shapes. If a dynamic tensor is detected, the program exits with an error.
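If the input contains a dynamic tensor, the conversion fails instead of producing a VGF file. A minimal sketch of observing this from a shell, assuming a local build and using a placeholder input file name:

```shell
# Attempt a conversion; the converter exits with a non-zero status
# if the input model contains an unranked or dynamically shaped tensor.
# "dynamic_model.tosa" is a placeholder file name.
./build/model-converter --input dynamic_model.tosa --output model.vgf \
    || echo "conversion rejected: input contains a dynamic tensor"
```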

The suggested workflow for this tool as part of the ML SDK for Vulkan® is:

  1. A TOSA MLIR file is converted to a VGF file using the ML SDK Model Converter (this project).
  2. The VGF Dump Tool from the VGF library is used with the generated VGF file to create a JSON scenario template file. The template file is then edited with the correct filenames and paths.
  3. Using the generated VGF file and scenario file, the ML SDK Scenario Runner then dispatches the contained SPIR-V™ modules to the ML extensions for Vulkan®.
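Assuming the tools are built and on the current path, and treating the file names and the exact VGF Dump Tool flags as placeholders, the workflow above can be sketched as:

```shell
# 1. Convert a TOSA MLIR file to a VGF file.
model-converter --input model.tosa.mlir --output model.vgf

# 2. Create a JSON scenario template from the VGF file, then edit the
#    template to reference the correct filenames and paths.
#    (The tool name and flags here are assumptions; see the VGF Dump
#    Tool's own help output for the exact invocation.)
vgf_dump --input model.vgf --output scenario.json --scenario-template

# 3. Dispatch the contained SPIR-V modules via the ML extensions for Vulkan.
scenario-runner --scenario scenario.json
```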

Cloning the repository

To clone the ML SDK Model Converter as a stand-alone repository, you can use regular git clone commands. However, for better management of dependencies and to ensure everything is placed in the appropriate directories, we recommend using the git-repo tool to clone the repository as part of the ML SDK for Vulkan® suite.

For a minimal build and to initialize only the ML SDK Model Converter and its dependencies, run:

repo init -u https://github.com/arm/ai-ml-sdk-manifest -g model-converter

Alternatively, to initialize the repo structure for the entire ML SDK for Vulkan®, including the ML SDK Model Converter, run:

repo init -u https://github.com/arm/ai-ml-sdk-manifest -g all

After the repo is initialized, you can fetch the contents with:

repo sync --no-clone-bundle

Cloning on Windows®

To ensure nested submodules do not exceed the maximum long path length, you must enable long paths on Windows®, and you must clone close to the root directory or use a symlink. Make sure to use Git for Windows.

Using PowerShell:

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1
git config --global core.longpaths true
git --version # Ensure you are using Git for Windows, for example 2.50.1.windows.1
git clone <git-repo-tool-url>
python <path-to-git-repo>\git-repo\repo init -u <manifest-url> -g all
python <path-to-git-repo>\git-repo\repo sync --no-clone-bundle

Using Git Bash:

cmd.exe "/c reg.exe add \"HKLM\System\CurrentControlSet\Control\FileSystem\" /v LongPathsEnabled /t REG_DWORD /d 1 /f"
git config --global core.longpaths true
git --version # Ensure you are using Git for Windows, for example 2.50.1.windows.1
git clone <git-repo-tool-url>
python <path-to-git-repo>/git-repo/repo init -u <manifest-url> -g all
python <path-to-git-repo>/git-repo/repo sync --no-clone-bundle

After the sync command completes successfully, you can find the ML SDK Model Converter in <repo_root>/sw/model-converter/. You can also find all the dependencies required by the ML SDK Model Converter in <repo_root>/dependencies/.

Building the ML SDK Model Converter from source

The build system must have:

  • CMake 3.25 or later.
  • C/C++17 compiler: GCC, or optionally Clang on Linux and MSVC on Windows®.
  • Python 3.10 or later. Required python libraries for building are listed in tooling-requirements.txt.
  • Ninja 1.10 or later.

Additional dependencies are also needed. For the preferred dependency versions, see the manifest file.

Building with the script

Arm® provides a Python build script to make build configuration options easily discoverable. When the script is run from a git-repo manifest checkout, it uses default paths and does not require any additional arguments. Otherwise, the paths to the dependencies must be specified.

To build on Linux, run:

SDK_PATH="path/to/sdk"
python3 ${SDK_PATH}/sw/model-converter/scripts/build.py -j $(nproc) \
    --vgf-lib-path ${SDK_PATH}/sw/vgf-lib \
    --flatbuffers-path ${SDK_PATH}/dependencies/flatbuffers \
    --argparse-path ${SDK_PATH}/dependencies/argparse \
    --tosa-tools-path ${SDK_PATH}/dependencies/tosa_tools \
    --external-llvm ${SDK_PATH}/dependencies/llvm-project

To build on Windows®, run:

$env:SDK_PATH="path\to\sdk"
$cores = [System.Environment]::ProcessorCount
python3 "$env:SDK_PATH\sw\model-converter\scripts\build.py" -j $cores `
    --vgf-lib-path "$env:SDK_PATH\sw\vgf-lib" `
    --flatbuffers-path "$env:SDK_PATH\dependencies\flatbuffers" `
    --argparse-path "$env:SDK_PATH\dependencies\argparse" `
    --tosa-tools-path "$env:SDK_PATH\dependencies\tosa_tools" `
    --external-llvm "$env:SDK_PATH\dependencies\llvm-project"

If the components are in their default locations, it is not necessary to specify the --vgf-lib-path, --flatbuffers-path, --argparse-path, --tosa-tools-path, and --external-llvm options.
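For example, from the root of a git-repo manifest checkout where all components are in their default locations, the Linux invocation above reduces to a single command (a sketch; <repo_root> is a placeholder for your checkout directory):

```shell
# Run from <repo_root>; all dependency paths fall back to their defaults.
python3 sw/model-converter/scripts/build.py -j $(nproc)
```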

Tests can be enabled and run with --test, and linting with --lint. To enable tests and documentation building, the required Python dependencies must be installed:

pip install -r requirements.txt
pip install -r tooling-requirements.txt

The documentation can be built with --doc. To build the documentation, Sphinx and Doxygen must be installed on the machine.

You can install the project build artifacts into a specified location by passing the --install option with the required path.

To create an archive with the build artifacts, add the --package option. The archive is stored in the provided location.

For more information, see the help output:

python3 scripts/build.py --help

Usage

To generate a VGF file, run:

./build/model-converter --input ${INPUT_TOSA} --output ${OUTPUT_VGF}

To generate a TOSA flatbuffer file, run:

./build/model-converter --tosa-flatbuffer --input ${INPUT_TOSA} --output ${OUTPUT_TOSA_FB}

For more information, see the help output:

./build/model-converter --help

PyPI

The ML SDK Model Converter is available on PyPI as the ai-ml-sdk-model-converter package.

Install:

pip install ai-ml-sdk-model-converter
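After installation, the converter should be available as a console command. The entry-point name below is an assumption based on the package name and may differ; `pip show -f ai-ml-sdk-model-converter` lists the actual installed scripts:

```shell
# Verify the installation.
# ("model-converter" as the entry-point name is an assumption.)
model-converter --help
```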

Known Limitations

  • Usage of the patches/llvm.patch file is temporary until the required changes can be upstreamed to the LLVM project.
  • The emit-debug-info CLI option does not produce debug symbols for the SPV_ARM_graph and SPIR-V™ extended instructions for TOSA operators in the generated SPIR-V™ module.

License

The ML SDK Model Converter is distributed under the software licenses in the LICENSES directory.

Trademark notice

Arm® is a registered trademark of Arm Limited (or its subsidiaries) in the US and/or elsewhere.

Khronos® and Vulkan® are registered trademarks, and SPIR-V™ is a trademark, of The Khronos Group.
