GPUStack



Overview

GPUStack is an open-source GPU cluster manager designed for efficient AI model deployment. It configures and orchestrates inference engines — vLLM, SGLang, TensorRT-LLM, or your own — to optimize performance across GPU clusters. Its core features include:

  • Multi-Cluster GPU Management. Manages GPU clusters across multiple environments, including on-premises servers, Kubernetes clusters, and cloud providers.
  • Pluggable Inference Engines. Automatically configures high-performance inference engines such as vLLM, SGLang, and TensorRT-LLM. You can also add custom inference engines as needed.
  • Day 0 Model Support. GPUStack's pluggable engine architecture enables you to deploy new models on the day they are released.
  • Performance-Optimized Configurations. Offers pre-tuned modes for low latency or high throughput. GPUStack supports extended KV cache systems such as LMCache and HiCache to reduce time to first token (TTFT), and includes built-in support for speculative decoding methods such as EAGLE3, MTP, and N-grams.
  • Enterprise-Grade Operations. Offers support for automated failure recovery, load balancing, monitoring, authentication, and access control.

Architecture

GPUStack enables development teams, IT organizations, and service providers to deliver Model-as-a-Service at scale. It supports industry-standard APIs for LLM, voice, image, and video models. The platform includes built-in user authentication and access control, real-time monitoring of GPU performance and utilization, and detailed metering of token usage and API request rates.

The figure below illustrates how a single GPUStack server can manage multiple GPU clusters across both on-premises and cloud environments. The GPUStack scheduler allocates GPUs to maximize resource utilization and selects the appropriate inference engines for optimal performance. Administrators also gain full visibility into system health and metrics through integrated Grafana and Prometheus dashboards.

(Figure: GPUStack v2 architecture)

Optimized Inference Performance

GPUStack's automated engine selection and parameter optimization deliver strong inference performance out of the box. The following figure shows throughput improvements over default vLLM configurations:

(Figure: A100 throughput comparison against default vLLM configurations)

For detailed benchmarking methods and results, visit our Inference Performance Lab.

Supported Accelerators

GPUStack supports a wide range of accelerators for AI inference:

  • NVIDIA GPU
  • AMD GPU
  • Ascend NPU
  • Hygon DCU
  • MThreads GPU
  • Iluvatar GPU
  • MetaX GPU
  • Cambricon MLU
  • T-Head PPU

For detailed requirements and setup instructions, see the Installation Requirements documentation.

Quick Start

Prerequisites

  1. A node with at least one NVIDIA GPU. For other GPU types, check the guidelines shown in the GPUStack UI when adding a worker, or refer to the Installation documentation for more details.
  2. Ensure the NVIDIA driver, Docker, and the NVIDIA Container Toolkit are installed on the worker node.
  3. (Optional) A CPU node for hosting the GPUStack server. The GPUStack server does not require a GPU and can run on a CPU-only machine with Docker installed; Docker Desktop (for Windows and macOS) is also supported. If no dedicated CPU node is available, the GPUStack server can be installed on the same machine as a GPU worker node.
  4. Only Linux is supported for GPUStack worker nodes. On Windows, use WSL2 rather than Docker Desktop. macOS is not supported for GPUStack worker nodes.
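
The tooling checks in the prerequisites can be scripted. Below is a minimal sketch (illustrative only, not part of GPUStack) that reports which of the required commands are missing from the PATH on a worker node:

```python
import shutil

def missing_tools(tools):
    """Return the subset of command names not found on the PATH."""
    return [t for t in tools if shutil.which(t) is None]

# Commands a GPU worker node is expected to have (see prerequisites above):
# docker, the NVIDIA driver's nvidia-smi, and the NVIDIA Container
# Toolkit CLI nvidia-ctk.
required = ["docker", "nvidia-smi", "nvidia-ctk"]

for tool in missing_tools(required):
    print(f"missing prerequisite: {tool}")
```

If the script prints nothing, all three commands are available and you can proceed with the installation below.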

Install GPUStack

Run the following command to install and start the GPUStack server using Docker:

sudo docker run -d --name gpustack \
    --restart unless-stopped \
    -p 80:80 \
    --volume gpustack-data:/var/lib/gpustack \
    gpustack/gpustack
Alternative: Use Quay Container Registry Mirror

If you cannot pull images from Docker Hub or the download is very slow, you can use our Quay.io mirror by pointing your registry to quay.io:

sudo docker run -d --name gpustack \
    --restart unless-stopped \
    -p 80:80 \
    --volume gpustack-data:/var/lib/gpustack \
    quay.io/gpustack/gpustack \
    --system-default-container-registry quay.io

Check the GPUStack startup logs:

sudo docker logs -f gpustack

After GPUStack starts, run the following command to get the default admin password:

sudo docker exec gpustack cat /var/lib/gpustack/initial_admin_password

Open your browser and navigate to http://your_host_ip to access the GPUStack UI. Use the default username admin and the password you retrieved above to log in.
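
If the UI does not load, it can help to first confirm the server answers HTTP at all. A minimal standard-library sketch (the host below is a placeholder for your actual server address):

```python
import urllib.error
import urllib.request

def is_reachable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with any HTTP response (even 4xx/5xx)."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded, just not with 2xx -- it is up.
        return True
    except (urllib.error.URLError, OSError):
        return False

# Usage (placeholder host -- substitute your server's address):
# is_reachable("http://your_host_ip")
```

A True result only means the HTTP port is open; log in through the browser to verify the UI itself.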

Set Up a GPU Cluster

  1. On the GPUStack UI, navigate to the Clusters page.

  2. Click the Add Cluster button.

  3. Select Docker as the cluster provider.

  4. Fill in the Name and Description fields for the new cluster, then click the Save button.

  5. Follow the instructions in the UI to configure the new worker node. You will need to run a Docker command on the worker node to connect it to the GPUStack server. The command will look similar to the following:

    sudo docker run -d --name gpustack-worker \
          --restart=unless-stopped \
          --privileged \
          --network=host \
          --volume /var/run/docker.sock:/var/run/docker.sock \
          --volume gpustack-data:/var/lib/gpustack \
          --runtime nvidia \
          gpustack/gpustack \
          --server-url http://your_gpustack_server_url \
          --token your_worker_token \
          --advertise-address 192.168.1.2
    
  6. Execute the command on the worker node to connect it to the GPUStack server.

  7. After the worker node connects successfully, it will appear on the Workers page in the GPUStack UI.

Deploy a Model

  1. Navigate to the Catalog page in the GPUStack UI.

  2. Select the Qwen3 0.6B model from the list of available models.

  3. After the deployment compatibility checks pass, click the Save button to deploy the model.

(Screenshot: deploying Qwen3 from the catalog)

  4. GPUStack will start downloading the model files and deploying the model. When the deployment status shows Running, the model has been deployed successfully.

(Screenshot: model in the Running state)

  5. Click Playground - Chat in the navigation menu and check that the model qwen3-0.6b is selected in the top-right Model dropdown. You can now chat with the model in the UI playground.

(Screenshot: chatting with the model in the playground)

Use the model via API

  1. Hover over the user avatar and navigate to the API Keys page, then click the New API Key button.

  2. Fill in the Name and click the Save button.

  3. Copy the generated API key and save it somewhere safe. Note that it is shown only once, at creation.

  4. You can now use the API key to access the OpenAI-compatible API endpoints provided by GPUStack. For example, using curl:

# Replace `your_api_key` and `your_gpustack_server_url`
# with your actual API key and GPUStack server URL.
export GPUSTACK_API_KEY=your_api_key
curl http://your_gpustack_server_url/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{
    "model": "qwen3-0.6b",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Tell me a joke."
      }
    ],
    "stream": true
  }'
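
The same call can be made from Python using only the standard library. The sketch below mirrors the curl example (non-streaming, for simplicity); the server URL, API key, and model name are placeholders to substitute with your own values:

```python
import json
import urllib.request

def build_chat_request(server_url: str, api_key: str, model: str, messages):
    """Build an OpenAI-compatible chat completion request (non-streaming)."""
    payload = {"model": model, "messages": messages, "stream": False}
    return urllib.request.Request(
        url=f"{server_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Usage (placeholders -- substitute your real server URL and API key):
# req = build_chat_request(
#     "http://your_gpustack_server_url", "your_api_key", "qwen3-0.6b",
#     [{"role": "system", "content": "You are a helpful assistant."},
#      {"role": "user", "content": "Tell me a joke."}])
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Because the API is OpenAI-compatible, OpenAI client libraries pointed at your GPUStack server's /v1 base URL should also work.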

Documentation

Please see the official docs site for complete documentation.

Build

  1. Install Python (version 3.10 to 3.12).

  2. Run make build.

You can find the built wheel package in the dist directory.

Contributing

Please read the Contributing Guide if you're interested in contributing to GPUStack.

Join Community

If you run into issues or have suggestions, feel free to join our Community for support.

License

Copyright (c) 2024-2025 The GPUStack authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0; see the LICENSE file for details.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

