GPUStack Runner is a library for registering runnable accelerated backends and services in GPUStack.

# GPUStack Runner
This repository serves as the Docker image pack center for GPUStack Runner. It provides a collection of Dockerfiles to build images for various inference services across different accelerated backends.
## Agenda

- Onboard Services
- Directory Structure
- Dockerfile Convention
- Docker Image Naming Convention
- Integration Process
## Onboard Services

> [!TIP]
> - The list below shows the accelerated backends and inference services available in the latest release. For backends or services not shown here, please refer to previous release tags.
> - Deprecated inference service versions in the latest release are marked with ~~strikethrough~~ formatting. They may still be available in previous releases, but are not recommended for new deployments.
> - Polished inference service versions in the latest release are marked with **bold** formatting. If they are used in your deployment, it is recommended to pull the latest images and upgrade.

The following tables list the supported accelerated backends and their corresponding inference service versions.
### Ascend CANN

| CANN Version (Variant) | MindIE | vLLM | SGLang |
|---|---|---|---|
| 8.5 (A3/910C) | 2.3.0 | 0.16.0(rc), 0.15.0(rc), 0.14.1(rc), 0.13.0 | 0.5.9, 0.5.8.post1 |
| 8.5 (910B) | 2.3.0 | 0.16.0(rc), 0.15.0(rc), 0.14.1(rc), 0.13.0 | 0.5.9, 0.5.8.post1 |
| 8.5 (310P) | 2.3.0 | 0.16.0(rc), 0.15.0(rc), 0.14.1(rc) | |
| 8.3 (A3/910C) | 2.2.rc1 | 0.12.0(rc), 0.11.0 | 0.5.7, 0.5.6.post2 |
| 8.3 (910B) | 2.2.rc1 | 0.12.0(rc), 0.11.0 | 0.5.7, 0.5.6.post2 |
| 8.3 (310P) | 2.2.rc1 | | |
| 8.2 (A3/910C) | 2.1.rc2 | 0.10.2(rc) | 0.5.2, 0.5.1.post3 |
| 8.2 (910B) | 2.1.rc2 | 0.10.2(rc), 0.10.0(rc), 0.9.1 | 0.5.2, 0.5.1.post3 |
| 8.2 (310P) | 2.1.rc2 | 0.10.0(rc), 0.9.1 | |
### Iluvatar CoreX

| CoreX Version (Variant) | vLLM |
|---|---|
| 4.2 | 0.8.3 |
### NVIDIA CUDA

> [!NOTE]
> - CUDA 12.9 supports Compute Capabilities: `7.5 8.0+PTX 8.9 9.0 10.0 10.3 12.0 12.1+PTX`.
> - CUDA 12.8 supports Compute Capabilities: `7.5 8.0+PTX 8.9 9.0 10.0+PTX 12.0+PTX`.
> - CUDA 12.6/12.4 supports Compute Capabilities: `7.5 8.0+PTX 8.9 9.0+PTX`.

| CUDA Version (Variant) | vLLM | SGLang | VoxBox |
|---|---|---|---|
| 12.9 | 0.17.1, 0.16.0, 0.15.1, 0.14.1, 0.13.0, 0.12.0, 0.11.2 | 0.5.9, 0.5.8.post1, 0.5.7, 0.5.6.post2 | |
| 12.8 | 0.17.1, 0.16.0, 0.15.1, 0.14.1, 0.13.0, 0.12.0, 0.11.2, 0.10.2 | 0.5.9, 0.5.8.post1, 0.5.7, 0.5.6.post2, 0.5.5.post3 | 0.0.21 |
| 12.6 | 0.15.1, 0.14.1, 0.13.0, 0.12.0, 0.11.2, 0.10.2 | | 0.0.21 |
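The `+PTX` suffixes in the note above matter for hardware coverage: a plain entry ships native SASS for that exact compute capability only, while a `+PTX` entry also embeds PTX that the driver can JIT-compile for newer GPUs (forward compatibility). A minimal sketch of this check follows; it is an illustrative helper, not part of this repository, with the arch list taken from the CUDA 12.6 note above.

```python
# Illustrative helper (not part of this repository): check whether a GPU's
# compute capability is covered by a CUDA build's arch list. Plain entries
# match only that exact capability; "+PTX" entries also cover any newer
# capability via PTX JIT compilation.
def cc_supported(cc: str, archs: list[str]) -> bool:
    device = tuple(int(x) for x in cc.split("."))
    for arch in archs:
        ptx = arch.endswith("+PTX")
        base = tuple(int(x) for x in arch.removesuffix("+PTX").split("."))
        if base == device:
            return True  # native SASS for this exact capability
        if ptx and device >= base:
            return True  # PTX JIT covers newer capabilities
    return False

# Arch list from the CUDA 12.6/12.4 note above.
CUDA_126_ARCHS = ["7.5", "8.0+PTX", "8.9", "9.0+PTX"]
```

For example, an 8.6 GPU is covered by a CUDA 12.6 image through the `8.0+PTX` entry, while a 7.0 GPU is not covered at all.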
### Hygon DTK

| DTK Version (Variant) | vLLM |
|---|---|
| 25.04 | 0.11.0, 0.9.2, 0.8.5 |
### T-Head HGGC

| HGGC Version (Variant) | vLLM | SGLang |
|---|---|---|
| 12.3 | 0.12.0, 0.11.1 | 0.5.6, 0.5.5 |
### MetaX MACA

| MACA Version (Variant) | vLLM | SGLang |
|---|---|---|
| 3.5 | 0.14.0 | |
| 3.3 | 0.13.0, 0.12.0, 0.11.2 | 0.5.7, 0.5.6 |
| 3.2 | 0.10.2 | |
| 3.0 | 0.9.1 | |
### MThreads MUSA

| MUSA Version (Variant) | vLLM | SGLang |
|---|---|---|
| 4.3.2 | | 0.5.7 |
| 4.1.0 | 0.9.2 | |
### AMD ROCm

> [!NOTE]
> - ROCm 7.1/7.0 supports LLVM targets: `gfx908 gfx90a gfx942 gfx950 gfx1030 gfx1100 gfx1101 gfx1200 gfx1201 gfx1150 gfx1151`.
> - ROCm 6.4 supports LLVM targets: `gfx908 gfx90a gfx942 gfx1030 gfx1100`.

> [!WARNING]
> - ROCm 7.0 vLLM `0.11.2` reuses the official ROCm 6.4 PyTorch 2.9 wheel rather than a ROCm 7.0-specific PyTorch build. Although vLLM `0.11.2` supports ROCm 7.0, `gfx1150`/`gfx1151` are not supported yet.
> - ROCm 6.4 vLLM `0.13.0` supports `gfx908 gfx90a gfx942` only.
> - ROCm 6.4 SGLang supports `gfx942` only.
> - ROCm 7.0 SGLang supports `gfx950` only.

| ROCm Version (Variant) | vLLM | SGLang |
|---|---|---|
| 7.1 | 0.17.1 | |
| 7.0 | 0.16.0, 0.15.1, 0.14.1, 0.13.0, 0.12.0, 0.11.2 | 0.5.9, 0.5.8.post1, 0.5.7, 0.5.6.post2 |
| 6.4 | 0.16.0, 0.15.1, 0.14.1, 0.13.0, 0.12.0, 0.11.2, 0.10.2 | 0.5.8.post1, 0.5.7, 0.5.6.post2, 0.5.5.post3 |
## Directory Structure

The pack skeleton is organized by backend:

```
pack
├── {BACKEND 1}
│   └── Dockerfile
├── {BACKEND 2}
│   └── Dockerfile
├── {BACKEND 3}
│   └── Dockerfile
├── ...
│   └── Dockerfile
└── {BACKEND N}
    └── Dockerfile
```
## Dockerfile Convention

Each Dockerfile follows these conventions:

- Begin with comments describing the package logic in steps and the usage of build arguments (`ARG`s).
- Use `ARG` for all required and optional build arguments. If a required argument is unused, mark it as `(PLACEHOLDER)`.
- Use heredoc syntax for `RUN` commands to improve readability.
### Example Dockerfile Structure

```dockerfile
# Describe package logic and ARG usage.
#
ARG PYTHON_VERSION=...                  # REQUIRED
ARG CMAKE_MAX_JOBS=...                  # REQUIRED
ARG {OTHERS}                            # OPTIONAL

ARG {BACKEND}_VERSION=...               # REQUIRED
ARG {BACKEND}_VERSION_EXTRA=...         # OPTIONAL
ARG {BACKEND}_ARCHS=...                 # REQUIRED
ARG {BACKEND}_{OTHERS}=...              # OPTIONAL

ARG {SERVICE}_BASE_IMAGE=...            # REQUIRED
ARG {SERVICE}_VERSION=...               # REQUIRED
ARG {SERVICE}_{OTHERS}=...              # OPTIONAL
ARG {SERVICE}_{FRAMEWORK}_VERSION=...   # REQUIRED
ARG {SERVICE}_{FRAMEWORK}_{OTHERS}=...  # OPTIONAL

# Stage Bake Runtime
FROM {BACKEND DEVEL IMAGE} AS runtime
SHELL ["/bin/bash", "-eo", "pipefail", "-c"]
ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG ...
RUN <<EOF
# TODO: install runtime dependencies
EOF

# Stage Install Service
FROM {SERVICE}_BASE_IMAGE AS {service}
SHELL ["/bin/bash", "-eo", "pipefail", "-c"]
ARG TARGETPLATFORM
ARG TARGETOS
ARG TARGETARCH
ARG ...
RUN <<EOF
# TODO: install service and dependencies
EOF
WORKDIR /
ENTRYPOINT [ "tini", "--" ]
```
## Docker Image Naming Convention

The Docker image naming convention is as follows:

- Multi-architecture image names: `{NAMESPACE}/{REPOSITORY}:{TAG}`.
- Single-architecture image tags: `{BACKEND}{BACKEND_VERSION%.*}[-{BACKEND_VARIANT}]-{SERVICE}{SERVICE_VERSION}-{OS}-{ARCH}`.
- Multi-architecture image tags: `{BACKEND}{BACKEND_VERSION%.*}[-{BACKEND_VARIANT}]-{SERVICE}{SERVICE_VERSION}[-dev]`.
- All names and tags must be lowercase.
### Example

- NAMESPACE: `gpustack`
- REPOSITORY: `runner`

| Accelerated Backend | OS/ARCH | Inference Service | Single-Arch Image Name | Multi-Arch Image Name |
|---|---|---|---|---|
| Ascend CANN 910b | linux/amd64 | vLLM | gpustack/runner:cann8.1-910b-vllm0.9.2-linux-amd64 | gpustack/runner:cann8.1-910b-vllm0.9.2 |
| Ascend CANN 910b | linux/arm64 | vLLM | gpustack/runner:cann8.1-910b-vllm0.9.2-linux-arm64 | gpustack/runner:cann8.1-910b-vllm0.9.2 |
| NVIDIA CUDA 12.8 | linux/amd64 | vLLM | gpustack/runner:cuda12.8-vllm0.9.2-linux-amd64 | gpustack/runner:cuda12.8-vllm0.9.2 |
| NVIDIA CUDA 12.8 | linux/arm64 | vLLM | gpustack/runner:cuda12.8-vllm0.9.2-linux-arm64 | gpustack/runner:cuda12.8-vllm0.9.2 |
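The tag grammar above can be captured with a regular expression. The sketch below is a simplified illustration of that grammar, not the actual `_RE_DOCKER_IMAGE` pattern from runner.py, and it omits the `-dev` suffix for brevity:

```python
import re

# Simplified illustration of the tag grammar above -- NOT the actual
# _RE_DOCKER_IMAGE pattern from runner.py. Groups: backend name and version,
# optional variant, service name and version, optional os/arch suffix.
TAG_RE = re.compile(
    r"^(?P<backend>[a-z]+)(?P<backend_version>\d+(?:\.\d+)*)"
    r"(?:-(?P<variant>[0-9a-z]+))?"
    r"-(?P<service>[a-z]+)(?P<service_version>\d+(?:\.\d+)*(?:\.post\d+)?)"
    r"(?:-(?P<os>[a-z]+)-(?P<arch>[a-z0-9]+))?$"
)

# Single-arch tag with a backend variant.
m = TAG_RE.match("cann8.1-910b-vllm0.9.2-linux-amd64")
# Multi-arch tag without a variant.
m2 = TAG_RE.match("cuda12.8-vllm0.9.2")
```

Matching the example tags from the table above yields `backend="cann"`, `variant="910b"`, `service="vllm"`, `arch="amd64"` for the first tag, and no variant or os/arch groups for the second.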
## Build and Release Workflow

- Build single-architecture images per OS/ARCH, e.g. `gpustack/runner:cann8.1-910b-vllm0.9.2-linux-amd64`.
- Combine the single-architecture images into a multi-architecture image, e.g. `gpustack/runner:cann8.1-910b-vllm0.9.2-dev`.
- After testing, rename the multi-architecture image to the final tag, e.g. `gpustack/runner:cann8.1-910b-vllm0.9.2`.
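The three tag forms used by this workflow can be derived from the same components. The helper below is a hypothetical sketch (its name and signature are ours, not from runner.py) that composes the single-arch, `-dev`, and final tags per the naming convention:

```python
# Hypothetical helper (not part of runner.py): derive the three tag forms
# used by the build-and-release workflow from the naming-convention parts.
def runner_tags(backend, backend_version, variant, service, service_version,
                platforms=("linux-amd64", "linux-arm64")):
    variant_part = f"-{variant}" if variant else ""
    # All names and tags must be lowercase per the naming convention.
    base = f"{backend}{backend_version}{variant_part}-{service}{service_version}".lower()
    return {
        "single_arch": [f"{base}-{p}" for p in platforms],  # step 1: per-OS/ARCH builds
        "dev": f"{base}-dev",                               # step 2: combined, pre-test
        "final": base,                                      # step 3: released tag
    }

tags = runner_tags("cann", "8.1", "910b", "vllm", "0.9.2")
```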
## Integration Process

### Ingesting a New Accelerated Backend

To add support for a new accelerated backend:

- Create a new directory under `pack/` named after the new backend.
- Add a `Dockerfile` in the new directory following the Dockerfile Convention.
- Update pack.yml, discard.yml and prune.yml to include the new backend in the build matrix.
- Update matrix.yml to include the new backend and its variants.
- Update `_RE_DOCKER_IMAGE` in runner.py to recognize the new backend.
- [Optional] Update tests if necessary.
### Ingesting a New Inference Service

To add support for a new inference service:

- Modify the `Dockerfile` of the relevant backend in `pack/{BACKEND}/Dockerfile` to include the new service.
- Update pack.yml to include the new service in the build matrix.
- Update matrix.yml to include the new service.
- Update `_RE_DOCKER_IMAGE` in runner.py to recognize the new service.
- [Optional] Update tests if necessary.
## License

Copyright (c) 2025 The GPUStack authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. See the LICENSE file for details.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.