
vserve

A CLI for managing vLLM inference on GPU workstations.

Download models. Auto-tune limits. Serve with one command. Tool calling built in.

Requires Python 3.12+ and vLLM 0.18+.


Install

uv tool install vserve

Or with pip:

pip install vserve

Quick Start

vserve init                        # scan GPU, vLLM, CUDA, systemd — write config
vserve download                    # search HuggingFace, pick variant, download
vserve start <model>               # auto-tune + interactive config + serve
vserve start <model> --tools       # enable tool calling (parser auto-detected)

What It Does

vserve manages the full lifecycle of serving LLMs with vLLM on a GPU workstation:

  • Download — search HuggingFace, see available weight variants (FP8, NVFP4, BF16, GGUF) with sizes, download only what you need
  • Auto-tune — calculate exactly what context lengths and concurrency your GPU can handle, based on model architecture and available VRAM. Runs automatically on first start.
  • Tool calling — auto-detects the correct --tool-call-parser and --reasoning-parser from the model's chat template. Supports Qwen, Llama, Mistral, DeepSeek, Gemma 4, GPT-OSS, and more.
  • Start/Stop — interactive config wizard, systemd service management, health check with timeout
  • Fan control — temperature-based curve daemon with quiet hours, or hold a fixed speed
  • Multi-user — session-based GPU ownership prevents other users from disrupting your running model. File-based locking with terminal notifications.
  • Doctor — diagnose GPU, CUDA, vLLM, systemd issues with actionable fix suggestions
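
Once a model is up, vLLM exposes an OpenAI-compatible HTTP API on the configured port (8888 in the default config shown below). A quick smoke test, assuming that port and a placeholder model name:

# vLLM's built-in health endpoint
curl http://localhost:8888/health

# list the served model name(s)
curl http://localhost:8888/v1/models

# OpenAI-compatible chat completion (model name is a placeholder;
# use the name reported by /v1/models)
curl http://localhost:8888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen3-8B", "messages": [{"role": "user", "content": "Hello!"}]}'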

Commands

Command                                   Description
vserve                                    Dashboard: GPU, models, status
vserve init                               Auto-discover vLLM and write config
vserve download [model]                   Search and download from HuggingFace with variant picker
vserve models [name]                      List models or show detail (fuzzy match)
vserve tune [model]                       Calculate context/concurrency limits and detect tool capabilities
vserve start [model]                      Configure and start serving (auto-tunes if needed)
vserve start <model> --tools              Start with tool calling enabled (parser auto-detected)
vserve start <model> --tool-parser <p>    Override the tool-call parser manually
vserve stop                               Stop the vLLM service
vserve status                             Show current serving config
vserve fan [auto|off|30-100]              GPU fan control with temperature-based curve
vserve doctor                             Check system readiness

All commands support fuzzy matching: vserve start qwen fp8 finds the right model.


Tool Calling

vserve auto-detects the correct vLLM parser by reading the model's chat template:

Model Family             Tool Parser        Reasoning Parser
Qwen 2.5                 hermes             -
Qwen 3                   hermes             qwen3
Qwen 3.5                 qwen3_coder        qwen3
Llama 3.1 / 3.2 / 3.3    llama3_json        -
Llama 4                  llama4_pythonic    -
Mistral / Mixtral        mistral            mistral
DeepSeek V3 / R1         deepseek_v3        deepseek_r1
Gemma 4                  gemma4             gemma4
GPT-OSS                  openai             openai_gptoss

Detection is template-based (not model-name regex), so it works for fine-tunes and community uploads. Use --tool-parser to override when auto-detection can't determine the parser.
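
With tool calling enabled (vserve start <model> --tools), the server accepts OpenAI-style tool definitions in requests. A minimal sketch, assuming the default port 8888 from the config below; the model name and the get_weather function are placeholders:

# hypothetical get_weather tool; model name is a placeholder
curl http://localhost:8888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen/Qwen3-8B",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'

When the model decides to call the function, the response message carries a tool_calls array instead of plain content.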


Prerequisites

Requirement             Check                                   Install
NVIDIA GPU + drivers    nvidia-smi                              nvidia.com/drivers
CUDA toolkit            nvcc --version                          sudo apt install nvidia-cuda-toolkit
vLLM 0.18+              vllm --version                          docs.vllm.ai
systemd                 present on most Linux servers           see troubleshooting
sudo access             needed for systemctl and fan control    -
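
vserve doctor automates these checks; to run the first three by hand, using the check commands from the table above:

# verify driver, CUDA toolkit, and vLLM in one line
nvidia-smi && nvcc --version && vllm --version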

Configuration

Auto-discovered on first run. Override at ~/.config/vserve/config.yaml:

vllm_root: /opt/vllm
cuda_home: /usr/local/cuda
service_name: vllm
service_user: vllm
port: 8888

Fan Control

vserve fan              # show status, interactive menu
vserve fan auto         # temp-based curve with quiet hours
vserve fan 80           # hold at 80% (persistent daemon)
vserve fan off          # stop daemon, restore NVIDIA auto

The auto curve ramps with temperature and caps fan speed during quiet hours (configurable). An emergency override at 88 °C ignores quiet hours.
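
To confirm the daemon is in control, watch temperature and fan speed with plain nvidia-smi (standard NVIDIA tooling, not part of vserve):

# poll GPU temperature and fan speed every 5 seconds
watch -n 5 'nvidia-smi --query-gpu=temperature.gpu,fan.speed --format=csv'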


Development

git clone https://github.com/Gavin-Qiao/vserve.git
cd vserve
uv sync --dev
uv run pytest tests/              # 205 tests
uv run ruff check src/ tests/     # lint
uv run mypy src/vserve/           # type check

License

MIT
