GPUStack

GPUStack is an open-source GPU cluster manager for running large language models (LLMs).

Key Features

  • Supports a Wide Variety of Hardware: Run with different brands of GPUs in Apple MacBooks, Windows PCs, and Linux servers.
  • Scales with Your GPU Inventory: Easily add more GPUs or nodes to scale up your operations.
  • Distributed Inference: Supports both single-node multi-GPU and multi-node inference and serving.
  • Lightweight Python Package: Minimal dependencies and operational overhead.
  • OpenAI-compatible APIs: Serve APIs that are compatible with OpenAI standards.
  • User and API key management: Simplified management of users and API keys.
  • GPU metrics monitoring: Monitor GPU performance and utilization in real-time.
  • Token usage and rate metrics: Track token usage and manage rate limits effectively.

Installation

Linux or macOS

GPUStack provides a script to install it as a service on systemd- or launchd-based systems. To install GPUStack using this method, just run:

curl -sfL https://get.gpustack.ai | sh -s -

Optionally, you can add extra workers to form a GPUStack cluster by running the following command on other nodes (replace http://myserver and mytoken with your actual server URL and token):

curl -sfL https://get.gpustack.ai | sh -s - --server-url http://myserver --token mytoken

In the default setup, you can run the following to get the token used for adding workers:

cat /var/lib/gpustack/token
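
If the script completes successfully, the server runs as a system service. A quick sanity check on systemd-based distributions; this assumes the installer registered the service under the name gpustack:

# Check that the GPUStack service is active (service name assumed)
sudo systemctl status gpustack

# Follow the service logs if something looks off
sudo journalctl -u gpustack -f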

Windows

Run PowerShell as administrator (avoid using PowerShell ISE), then run the following command to install GPUStack:

Invoke-Expression (Invoke-WebRequest -Uri "https://get.gpustack.ai" -UseBasicParsing).Content

Optionally, you can add extra workers to form a GPUStack cluster by running the following command on other nodes (replace http://myserver and mytoken with your actual server URL and token):

Invoke-Expression "& { $((Invoke-WebRequest -Uri 'https://get.gpustack.ai' -UseBasicParsing).Content) } --server-url http://myserver --token mytoken"

In the default setup, you can run the following to get the token used for adding workers:

Get-Content -Path "$env:APPDATA\gpustack\token" -Raw

Manual Installation

For manual installation or detailed configurations, refer to the installation docs.
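
As a rough sketch, a manual installation comes down to installing the package from PyPI and starting the server; prerequisites and the full set of flags are covered in the installation docs:

# Install GPUStack from PyPI (requires Python 3.10+)
pip install gpustack

# Start the server; run with --help to list the available options
gpustack start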

Getting Started

  1. Run and chat with the llama3 model:

gpustack chat llama3 "tell me a joke."

  2. Open http://myserver in the browser to access the GPUStack UI. Log in to GPUStack with username admin and the default password. You can run the following command to get the password for the default setup:

Linux or macOS

cat /var/lib/gpustack/initial_admin_password

Windows

Get-Content -Path "$env:APPDATA\gpustack\initial_admin_password" -Raw

  3. Click Playground in the navigation menu. Now you can chat with the LLM in the UI playground.

  4. Click API Keys in the navigation menu, then click the New API Key button.

  5. Fill in the Name and click the Save button.

  6. Copy the generated API key and save it somewhere safe. Please note that you can only see it once on creation.

  7. Now you can use the API key to access the OpenAI-compatible API. For example, use curl as follows:

export GPUSTACK_API_KEY=myapikey
curl http://myserver/v1-openai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{
    "model": "llama3",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "stream": true
  }'
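
For scripting, it can be simpler to drop "stream": true and parse the complete JSON response instead; a small sketch using jq (assuming jq is installed):

# Non-streaming request; extract just the assistant's reply with jq
curl -s http://myserver/v1-openai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}' \
  | jq -r '.choices[0].message.content'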

Supported Platforms

  • macOS
  • Linux
  • Windows

Supported Accelerators

  • Apple Metal
  • NVIDIA CUDA

We plan to support the following accelerators in future releases.

  • AMD ROCm
  • Intel oneAPI
  • MTHREADS MUSA
  • Qualcomm AI Engine

Supported Models

GPUStack uses llama.cpp as the backend and supports large language models in GGUF format. Models from the following sources are supported:

  1. Hugging Face

  2. Ollama Library

OpenAI-Compatible APIs

GPUStack serves the following OpenAI-compatible APIs under the /v1-openai path:

  • List Models
  • Create Completions
  • Create Chat Completions
  • Create Embeddings
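
Since these endpoints follow the standard OpenAI paths, List Models is a plain GET against /v1-openai/models; myapikey below stands in for a key generated in the UI:

# List the models currently served by GPUStack
curl http://myserver/v1-openai/models \
  -H "Authorization: Bearer myapikey"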

For example, you can use the official OpenAI Python API library to consume the APIs:

from openai import OpenAI
client = OpenAI(base_url="http://myserver/v1-openai", api_key="myapikey")

completion = client.chat.completions.create(
  model="llama3",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)
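
The Create Embeddings endpoint works the same way; a minimal sketch, assuming an embedding-capable GGUF model has been deployed under the placeholder name my-embedding-model:

# Request embeddings for a piece of text (model name is a placeholder)
curl http://myserver/v1-openai/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer myapikey" \
  -d '{
    "model": "my-embedding-model",
    "input": "Hello!"
  }'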

GPUStack users can generate their own API keys in the UI.

Documentation

Please see the official docs site for complete documentation.

Build

  1. Install Python 3.10+.

  2. Run make build.

You can find the built wheel package in the dist directory.
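
The wheel can then be installed directly with pip; the exact filename depends on the version and platform:

# Build the wheel and install it from the dist directory
make build
pip install dist/gpustack-*.whl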

Contributing

Please read the Contributing Guide if you're interested in contributing to GPUStack.

License

Copyright (c) 2024 The GPUStack authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
