# vserve

A CLI for managing vLLM inference on GPU workstations.

Download models. Auto-tune limits. Serve with one command. Tool calling built in.
## Install

```bash
uv tool install vserve
```

Or with pip:

```bash
pip install vserve
```
## Quick Start

```bash
vserve init                   # scan GPU, vLLM, CUDA, systemd — write config
vserve download               # search HuggingFace, pick variant, download
vserve start <model>          # auto-tune + interactive config + serve
vserve start <model> --tools  # enable tool calling (parser auto-detected)
```
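Once the health check passes, the model is reachable through vLLM's OpenAI-compatible API. A minimal client sketch, assuming the default port 8888 from the Configuration section below (the model name is a placeholder for whatever you served):

```python
# Talk to the served model over vLLM's OpenAI-compatible API.
# Assumes vserve's default port 8888; vLLM accepts any API key
# unless the server was started with one.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8888/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",  # placeholder: use the model you served
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```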
## What It Does

vserve manages the full lifecycle of serving LLMs with vLLM on a GPU workstation:
- Download — search HuggingFace, see available weight variants (FP8, NVFP4, BF16, GGUF) with sizes, download only what you need
- Auto-tune — calculate exactly what context lengths and concurrency your GPU can handle, based on model architecture and available VRAM. Runs automatically on first start (see the sizing sketch after this list).
- Tool calling — auto-detects the correct `--tool-call-parser` and `--reasoning-parser` from the model's chat template. Supports Qwen, Llama, Mistral, DeepSeek, Gemma 4, GPT-OSS, and more.
- Start/Stop — interactive config wizard, systemd service management, health check with timeout
- Fan control — temperature-based curve daemon with quiet hours, or hold a fixed speed
- Multi-user — session-based GPU ownership prevents other users from disrupting your running model. File-based locking with terminal notifications (see the locking sketch after this list).
- Doctor — diagnose GPU, CUDA, vLLM, systemd issues with actionable fix suggestions
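The Auto-tune step is, at its core, KV-cache arithmetic. vserve derives the real numbers from the model's architecture; this back-of-the-envelope sketch, with example dimensions that are assumptions, shows the kind of calculation behind the reported limits:

```python
# Back-of-the-envelope KV-cache sizing: illustrative only. The model
# dimensions below are example values, not read from any real config.
def max_concurrency(
    free_vram_gb: float,
    context_len: int,
    num_layers: int = 32,   # example: a 7B-class model
    num_kv_heads: int = 8,  # grouped-query attention
    head_dim: int = 128,
    kv_bytes: int = 2,      # FP16/BF16 cache
) -> int:
    # Each token stores one K and one V vector per layer.
    bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * kv_bytes
    total_tokens = free_vram_gb * 1024**3 / bytes_per_token
    return int(total_tokens // context_len)

# e.g. 20 GB free after weights, 32k context -> 5 concurrent requests
print(max_concurrency(20, 32_768))
```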
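And the Multi-user guard is conceptually a lock file with an owner check. A minimal sketch using `fcntl` (vserve's actual mechanism may differ):

```python
# Conceptual sketch of session-based GPU ownership via a lock file.
# The lock path and protocol here are hypothetical, not vserve's own.
import fcntl
import os

LOCK_PATH = "/tmp/vserve.gpu0.lock"  # hypothetical lock location

def acquire_gpu_lock() -> int:
    fd = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # fail fast if held
    except BlockingIOError:
        os.close(fd)
        raise RuntimeError("GPU is owned by another vserve session")
    os.write(fd, str(os.getpid()).encode())  # record the owner
    return fd  # keep open: the lock lives as long as this fd
```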
## Commands

| Command | Description |
|---|---|
| `vserve` | Dashboard — GPU, models, status |
| `vserve init` | Auto-discover vLLM and write config |
| `vserve download [model]` | Search and download from HuggingFace with variant picker |
| `vserve models [name]` | List models or show detail (fuzzy match) |
| `vserve tune [model]` | Calculate context/concurrency limits and detect tool capabilities |
| `vserve start [model]` | Configure and start serving (auto-tunes if needed) |
| `vserve start <model> --tools` | Start with tool calling enabled (parser auto-detected) |
| `vserve start <model> --tool-parser <p>` | Override tool-call parser manually |
| `vserve stop` | Stop the vLLM service |
| `vserve status` | Show current serving config |
| `vserve fan [auto\|off\|30-100]` | GPU fan control with temp-based curve |
| `vserve doctor` | Check system readiness |

All commands support fuzzy matching — `vserve start qwen fp8` finds the right model.
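One way to picture the matching: treat the query as tokens that must all appear in a model name. This is an illustrative sketch, not vserve's actual matcher:

```python
# Illustrative token-subset matcher: not vserve's actual algorithm.
def fuzzy_match(query: str, model_names: list[str]) -> str | None:
    tokens = query.lower().split()
    # A model matches if every query token appears in its name.
    hits = [m for m in model_names if all(t in m.lower() for t in tokens)]
    return hits[0] if len(hits) == 1 else None  # require a unique match

models = ["Qwen2.5-72B-Instruct-FP8", "Llama-3.3-70B-Instruct-BF16"]
print(fuzzy_match("qwen fp8", models))  # -> Qwen2.5-72B-Instruct-FP8
```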
## Tool Calling

vserve auto-detects the correct vLLM parser by reading the model's chat template:

| Model Family | Tool Parser | Reasoning Parser |
|---|---|---|
| Qwen 2.5 | `hermes` | — |
| Qwen 3 | `hermes` | `qwen3` |
| Qwen 3.5 | `qwen3_coder` | `qwen3` |
| Llama 3.1 / 3.2 / 3.3 | `llama3_json` | — |
| Llama 4 | `llama4_pythonic` | — |
| Mistral / Mixtral | `mistral` | `mistral` |
| DeepSeek V3 / R1 | `deepseek_v3` | `deepseek_r1` |
| Gemma 4 | `gemma4` | `gemma4` |
| GPT-OSS | `openai` | `openai_gptoss` |

Detection is template-based (not model-name regex), so it works for fine-tunes and community uploads. Use `--tool-parser` to override when auto-detection can't determine the parser.
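Once started with `--tools`, the server accepts standard OpenAI-style tool definitions in chat requests. A sketch, reusing the assumptions from Quick Start (port 8888, placeholder model name):

```python
# Tool-calling request against a vserve-managed vLLM server.
# Port 8888 and the model name are assumptions, as in Quick Start.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8888/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",  # placeholder: use the model you served
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```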
## Prerequisites

| Requirement | Check | Install |
|---|---|---|
| NVIDIA GPU + drivers | `nvidia-smi` | nvidia.com/drivers |
| CUDA toolkit | `nvcc --version` | `sudo apt install nvidia-cuda-toolkit` |
| vLLM 0.18+ | `vllm --version` | docs.vllm.ai |
| systemd | (most Linux servers) | See troubleshooting |
| sudo access | for systemctl, fan control | — |
## Configuration

Auto-discovered on first run. Override at `~/.config/vserve/config.yaml`:

```yaml
vllm_root: /opt/vllm
cuda_home: /usr/local/cuda
service_name: vllm
service_user: vllm
port: 8888
```
## Fan Control

```bash
vserve fan        # show status, interactive menu
vserve fan auto   # temp-based curve with quiet hours
vserve fan 80     # hold at 80% (persistent daemon)
vserve fan off    # stop daemon, restore NVIDIA auto
```

The auto curve ramps with temperature and caps fan speed during quiet hours (configurable). An emergency override at 88 °C ignores quiet hours.
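The daemon's decision each tick reduces to: map temperature to a duty cycle, clamp during quiet hours, and bypass the clamp above the emergency threshold. An illustrative sketch; the curve points and quiet-hours window are invented, and only the 88 °C override comes from the paragraph above:

```python
# Illustrative fan-curve logic: the curve points and quiet window are
# made up; only the 88 °C emergency override is documented above.
from datetime import time

def fan_speed(temp_c: float, now: time,
              quiet: tuple[time, time] = (time(22, 0), time(7, 0)),
              quiet_cap: int = 40) -> int:
    if temp_c >= 88:  # emergency: ignore quiet hours entirely
        return 100
    # Piecewise-linear curve over (temp_c, speed_pct) example points.
    points = [(40, 30), (60, 50), (75, 80), (85, 100)]
    speed = points[0][1]
    for (t0, s0), (t1, s1) in zip(points, points[1:]):
        if temp_c >= t1:
            speed = s1
        elif temp_c > t0:
            speed = int(s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0))
    in_quiet = now >= quiet[0] or now < quiet[1]  # window spans midnight
    return min(speed, quiet_cap) if in_quiet else speed
```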
## Development

```bash
git clone https://github.com/Gavin-Qiao/vserve.git
cd vserve
uv sync --dev
uv run pytest tests/           # 205 tests
uv run ruff check src/ tests/  # lint
uv run mypy src/vserve/        # type check
```
## License