
GPUStack

Project description


GPUStack is an open-source GPU cluster manager for running large language models (LLMs).

Key Features:

  • Supports a Wide Variety of Hardware: Runs on different brands of GPUs in Apple MacBooks, Windows PCs, and Linux servers.
  • Scales with Your GPU Inventory: Easily add more GPUs or nodes to scale up your operations.
  • Lightweight Python Package: Minimal dependencies and operational overhead.
  • OpenAI-compatible APIs: Serve APIs that are compatible with OpenAI standards.
  • User and API key management: Simplified management of users and API keys.
  • GPU metrics monitoring: Monitor GPU performance and utilization in real-time.
  • Token usage and rate metrics: Track token usage and manage rate limits effectively.

Installation

Linux or macOS

GPUStack provides a script that installs it as a service on systemd- or launchd-based systems. To install GPUStack using this method, run:

curl -sfL https://get.gpustack.ai | sh -s -

Optionally, you can add extra workers to form a GPUStack cluster by running the following command on other nodes (replace http://myserver and mytoken with your actual server URL and token):

curl -sfL https://get.gpustack.ai | sh -s - --server-url http://myserver --token mytoken

In the default setup, you can run the following to get the token used for adding workers:

cat /var/lib/gpustack/token

Windows

Run PowerShell as administrator, then run the following command to install GPUStack:

Invoke-Expression (Invoke-WebRequest -Uri "https://get.gpustack.ai" -UseBasicParsing).Content

Optionally, you can add extra workers to form a GPUStack cluster by running the following command on other nodes (replace http://myserver and mytoken with your actual server URL and token):

Invoke-Expression "& { $((Invoke-WebRequest -Uri 'https://get.gpustack.ai' -UseBasicParsing).Content) } -server-url http://myserver -token mytoken"

In the default setup, you can run the following to get the token used for adding workers:

Get-Content -Path (Join-Path -Path $env:APPDATA -ChildPath "gpustack\token") -Raw

Manual Installation

For manual installation or detailed configurations, refer to the installation docs.

Getting Started

  1. Run and chat with the llama3 model:

gpustack chat llama3 "tell me a joke."

  2. Open http://myserver in the browser to access the GPUStack UI. Log in to GPUStack with username admin and the default password. You can run the following command to get the password for the default setup:

cat /var/lib/gpustack/initial_admin_password

  3. Click Playground in the navigation menu. Now you can chat with the LLM in the UI playground.

(Playground screenshot)

  4. Click API Keys in the navigation menu, then click the New API Key button.

  5. Fill in the Name and click the Save button.

  6. Copy the generated API key and save it somewhere safe. Please note that you can only see it once on creation.

  7. Now you can use the API key to access the OpenAI-compatible API. For example, using curl:

export GPUSTACK_API_KEY=myapikey
curl http://myserver/v1-openai/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GPUSTACK_API_KEY" \
  -d '{
    "model": "llama3",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ],
    "stream": true
  }'
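
If you prefer Python, here is a minimal sketch of the same streaming request using the official OpenAI Python client, assuming the same placeholder server URL and API key as above:

from openai import OpenAI

# Placeholder server URL and API key from the steps above.
client = OpenAI(base_url="http://myserver/v1-openai", api_key="myapikey")

# stream=True mirrors the "stream": true field in the curl example;
# the response arrives as a sequence of incremental chunks.
stream = client.chat.completions.create(
    model="llama3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a delta holding the next piece of generated text.
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
print()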

Supported Platforms

  • macOS
  • Linux
  • Windows

Supported Accelerators

  • Apple Metal
  • NVIDIA CUDA

We plan to support the following accelerators in future releases:

  • AMD ROCm
  • Intel oneAPI
  • Qualcomm AI Engine

Supported Models

GPUStack uses llama.cpp as the backend and supports large language models in GGUF format. Models from the following sources are supported:

  1. Hugging Face

  2. Ollama Library

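As an illustration of where these GGUF files live (a sketch using the huggingface_hub library directly, not a GPUStack API), you can list the GGUF files published in a Hugging Face repository:

from huggingface_hub import list_repo_files

# Example repository; any Hugging Face repo that publishes GGUF files works.
repo_id = "TheBloke/Llama-2-7B-GGUF"

# GGUF is the single-file model format consumed by the llama.cpp backend.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
for name in gguf_files:
    print(name)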

OpenAI-Compatible APIs

GPUStack serves the following OpenAI-compatible APIs under the /v1-openai path:

  1. List models
  2. Chat completions

For example, you can use the official OpenAI Python API library to consume the APIs:

from openai import OpenAI
client = OpenAI(base_url="http://myserver/v1-openai", api_key="myapikey")

completion = client.chat.completions.create(
  model="llama3",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

print(completion.choices[0].message)
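
The List models endpoint is available through the same client; a minimal sketch, reusing the placeholder server URL and API key:

from openai import OpenAI

client = OpenAI(base_url="http://myserver/v1-openai", api_key="myapikey")

# Enumerate the models currently served by GPUStack.
for model in client.models.list().data:
    print(model.id)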

GPUStack users can generate their own API keys in the UI.

Build

  1. Install Python 3.10+.

  2. Run make build.

You can find the built wheel package in the dist directory.
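
After installing the built wheel (for example, by pip-installing the file from the dist directory), you can sanity-check the installed version; a quick sketch, assuming the package is installed in the current environment:

# Query installed package metadata via the standard library.
from importlib.metadata import version

print(version("gpustack"))  # e.g. "0.1.0"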

Contributing

Please read the Contributing Guide if you're interested in contributing to GPUStack.

License

Copyright (c) 2024 The GPUStack authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License in the LICENSE file.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gpustack-0.1.0.tar.gz (72.7 MB)

Uploaded Source

Built Distributions

gpustack-0.1.0-py3-none-win_amd64.whl (69.0 MB)

Uploaded Python 3 Windows x86-64

gpustack-0.1.0-py3-none-manylinux2014_x86_64.whl (74.0 MB)

Uploaded Python 3

gpustack-0.1.0-py3-none-manylinux2014_aarch64.whl (74.0 MB)

Uploaded Python 3

gpustack-0.1.0-py3-none-macosx_11_0_universal2.whl (14.2 MB)

Uploaded Python 3 macOS 11.0+ universal2 (ARM64, x86-64)

File details

Details for the file gpustack-0.1.0.tar.gz.

File metadata

  • Download URL: gpustack-0.1.0.tar.gz
  • Upload date:
  • Size: 72.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for gpustack-0.1.0.tar.gz
Algorithm Hash digest
SHA256 6da3c211c2ab1459ede04bfb39be4f30ef0385b7096eb9fea802ab539db05631
MD5 cc880503bbb951bda235df74ddcec268
BLAKE2b-256 204d0d0ced5bde4b828b5107890d4dd2161b7ca33ca955b5497c68a89e2489ba


File details

Details for the file gpustack-0.1.0-py3-none-win_amd64.whl.

File metadata

  • Download URL: gpustack-0.1.0-py3-none-win_amd64.whl
  • Upload date:
  • Size: 69.0 MB
  • Tags: Python 3, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.11.9

File hashes

Hashes for gpustack-0.1.0-py3-none-win_amd64.whl
Algorithm Hash digest
SHA256 b10539d113fc03de1fd7d46454df5d6bd6c10375d7ceefadaecac772f7158c5b
MD5 2e70b06a8d0d1b1ee5be55915f74c639
BLAKE2b-256 6ae9b2adbc40f4228745d2417a842a29080d5b9cb89c64459ad94be86caa9029


File details

Details for the file gpustack-0.1.0-py3-none-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for gpustack-0.1.0-py3-none-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 38a6b2d1c02762a2217abbb1647348befe8c7669a7066f693763070319a1046b
MD5 61196aa168bc433427c55709542b6cd3
BLAKE2b-256 147d515fbb5d26d2b0b305e0c6673244af562426d85d909ec9700e5520eb68c0


File details

Details for the file gpustack-0.1.0-py3-none-manylinux2014_aarch64.whl.

File metadata

File hashes

Hashes for gpustack-0.1.0-py3-none-manylinux2014_aarch64.whl
Algorithm Hash digest
SHA256 a7877370d47cb72d32c13816fd188a33a47cbd60b9ea7eeb793da45d81708228
MD5 bdf1f7923fe351beb0fca6e3ea87f5ed
BLAKE2b-256 dcca4e6f00a31afd45dc12263f75b9de9e5dc38caaf674ed563f011a483c393e


File details

Details for the file gpustack-0.1.0-py3-none-macosx_11_0_universal2.whl.

File metadata

File hashes

Hashes for gpustack-0.1.0-py3-none-macosx_11_0_universal2.whl
Algorithm Hash digest
SHA256 765033fb72829a68070689ca976d98fb1e2b1628f248c8357019627fbcee5c5f
MD5 4b720272be3d2dfb904443ff5a3a3987
BLAKE2b-256 d6e5fb044894e7fa95c844b8a9597c6310d5b916d6430bbe9cf278f2d19a8665

