GPUStack
GPUStack is an open-source GPU cluster manager for running large language models (LLMs).
Key Features
- Supports a Wide Variety of Hardware: Run with different brands of GPUs in Apple MacBooks, Windows PCs, and Linux servers.
- Scales with Your GPU Inventory: Easily add more GPUs or nodes to scale up your operations.
- Distributed Inference: Supports both single-node multi-GPU and multi-node inference and serving.
- Multiple Inference Backends: Supports llama-box (llama.cpp) and vLLM as the inference backends.
- Lightweight Python Package: Minimal dependencies and operational overhead.
- OpenAI-compatible APIs: Serve APIs that are compatible with OpenAI standards.
- User and API key management: Simplified management of users and API keys.
- GPU metrics monitoring: Monitor GPU performance and utilization in real-time.
- Token usage and rate metrics: Track token usage and manage rate limits effectively.
Installation
Linux or macOS
GPUStack provides a script that installs it as a service on systemd- or launchd-based systems. To install GPUStack this way, run:
curl -sfL https://get.gpustack.ai | sh -s -
Windows
Run PowerShell as administrator (avoid using PowerShell ISE), then run the following command to install GPUStack:
Invoke-Expression (Invoke-WebRequest -Uri "https://get.gpustack.ai" -UseBasicParsing).Content
Other Installation Methods
For manual installation, Docker installation, or detailed configuration options, please refer to the Installation Documentation.
Getting Started
- Run and chat with the llama3.2 model:
gpustack chat llama3.2 "tell me a joke."
- Open http://myserver in the browser to access the GPUStack UI. Log in to GPUStack with the username admin and the default password. You can run the following command to get the password for the default setup:
Linux or macOS
cat /var/lib/gpustack/initial_admin_password
Windows
Get-Content -Path "$env:APPDATA\gpustack\initial_admin_password" -Raw
- Click Playground in the navigation menu. Now you can chat with the LLM in the UI playground.
- Click API Keys in the navigation menu, then click the New API Key button.
- Fill in the Name and click the Save button.
- Copy the generated API key and save it somewhere safe. Please note that you can only see it once, on creation.
- Now you can use the API key to access the OpenAI-compatible API. For example, with curl:
export GPUSTACK_API_KEY=myapikey
curl http://myserver/v1-openai/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $GPUSTACK_API_KEY" \
-d '{
"model": "llama3.2",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
],
"stream": true
}'
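With "stream": true, the endpoint returns the response as server-sent events: one JSON chunk per data: line, terminated by a data: [DONE] sentinel. A minimal parsing sketch using only the standard library (field names follow the OpenAI streaming chunk format):

```python
import json

def parse_sse_chunks(lines):
    """Concatenate content deltas from OpenAI-style streaming lines."""
    deltas = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments and blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # sentinel marking the end of the stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            deltas.append(delta)
    return "".join(deltas)

# Example with two synthetic chunks:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(parse_sse_chunks(sample))  # -> Hello
```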
Supported Platforms
- macOS
- Linux
- Windows
Supported Accelerators
- Apple Metal
- NVIDIA CUDA
- Ascend CANN
We plan to support the following accelerators in future releases:
- AMD ROCm
- Intel oneAPI
- MTHREADS MUSA
- Qualcomm AI Engine
Supported Models
GPUStack uses llama.cpp and vLLM as the backends and supports a wide range of language and multimodal models. For the full list of supported models, please refer to the supported models section in the inference backends documentation.
OpenAI-Compatible APIs
GPUStack serves the following OpenAI-compatible APIs under the /v1-openai path:
- List Models
- Create Completions
- Create Chat Completions
- Create Embeddings
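Assuming these routes follow the standard OpenAI layout (the chat completions path is confirmed by the curl example above; the others are inferred), the full paths look like this; the server address is a placeholder:

```python
# Map each API to its HTTP method and path under the /v1-openai prefix.
BASE = "http://myserver/v1-openai"  # placeholder server address

ENDPOINTS = {
    "List Models": ("GET", BASE + "/models"),
    "Create Completions": ("POST", BASE + "/completions"),
    "Create Chat Completions": ("POST", BASE + "/chat/completions"),
    "Create Embeddings": ("POST", BASE + "/embeddings"),
}

for name, (method, url) in ENDPOINTS.items():
    print(f"{method:4} {url}  ({name})")
```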
For example, you can use the official OpenAI Python API library to consume the APIs:
from openai import OpenAI
client = OpenAI(base_url="http://myserver/v1-openai", api_key="myapikey")
completion = client.chat.completions.create(
model="llama3.2",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
]
)
print(completion.choices[0].message)
GPUStack users can generate their own API keys in the UI.
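For clients that prefer not to depend on the openai package, the same endpoints can be called with only the standard library. A sketch for the Create Embeddings endpoint; the server address, API key, and model name are placeholders:

```python
import json
import urllib.request

def build_request(base_url: str, path: str, api_key: str, payload: dict):
    # Assemble an authenticated JSON POST for an OpenAI-compatible
    # endpoint served under the /v1-openai prefix.
    return urllib.request.Request(
        base_url + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(
    "http://myserver/v1-openai",
    "/embeddings",
    "myapikey",
    {"model": "my-embedding-model", "input": "Hello!"},  # placeholder model name
)
# with urllib.request.urlopen(req) as resp:   # requires a running server
#     print(json.load(resp)["data"][0]["embedding"][:8])
```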
Documentation
Please see the official docs site for complete documentation.
Build
- Install Python 3.10+.
- Run make build.

You can find the built wheel package in the dist directory.
Contributing
Please read the Contributing Guide if you're interested in contributing to GPUStack.
License
Copyright (c) 2024 The GPUStack authors
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Download files
Source Distribution
Built Distributions
File details
Details for the file gpustack-0.3.2.tar.gz.
File metadata
- Download URL: gpustack-0.3.2.tar.gz
- Upload date:
- Size: 4.0 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 8babdbd8488d666346aa4c0a9988157e1a879e876b7b4dcc437e8c548bf32812
MD5 | 6b49b815171d0712f80e278a7ede8255
BLAKE2b-256 | b4fff82757ee1c3c813bbb327719c4181f1a5642eac141b507ebf63971fd5c3f
File details
Details for the file gpustack-0.3.2-py3-none-win_amd64.whl.
File metadata
- Download URL: gpustack-0.3.2-py3-none-win_amd64.whl
- Upload date:
- Size: 7.9 MB
- Tags: Python 3, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 62fe667bd845a52aa8bd88eb627361013b936081d219ebcc9eeebe591eaec3c7
MD5 | f4b321ebcc4903125d9b149a86b1ef66
BLAKE2b-256 | 2faab175d0a3299f3336880c06bcecb3c4b1ebf0173da252dd48039e7eb88f8b
File details
Details for the file gpustack-0.3.2-py3-none-manylinux2014_x86_64.whl.
File metadata
- Download URL: gpustack-0.3.2-py3-none-manylinux2014_x86_64.whl
- Upload date:
- Size: 4.0 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 308dcb2568d62294c350a1c9d141d009e5dadbc4e887d4626aaa0c21dd05f935
MD5 | 8ce8f111a4002b42609028770172f6a3
BLAKE2b-256 | cdcb7f012162cddb10fbb8b792fe1ce12b4b4e1d8630e7ecc6519db049c8e6de
File details
Details for the file gpustack-0.3.2-py3-none-manylinux2014_aarch64.whl.
File metadata
- Download URL: gpustack-0.3.2-py3-none-manylinux2014_aarch64.whl
- Upload date:
- Size: 4.0 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4009153677e3d67715f754aa06a6722b2cefe4254086595c3d45d0c66a69998b
MD5 | 8c30d8135bce315be1953678423459a0
BLAKE2b-256 | c788b85d18c09b1f494854239aa2cb18c5ee03754a2dc4858ce123bffa6b80e3
File details
Details for the file gpustack-0.3.2-py3-none-macosx_11_0_universal2.whl.
File metadata
- Download URL: gpustack-0.3.2-py3-none-macosx_11_0_universal2.whl
- Upload date:
- Size: 4.0 MB
- Tags: Python 3, macOS 11.0+ universal2 (ARM64, x86-64)
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.10
File hashes
Algorithm | Hash digest
---|---
SHA256 | c211b76b8e52feb48efe8ff486bb1c5e106c48bdac73c927f8eb2fe69b77f49e
MD5 | 477dead6d98166d8c44354de9c546995
BLAKE2b-256 | 871688060a55f81be35c931bf8ad0e9ccd13da57100da659d826f6be4b893649