vLLM Playground
A modern web interface for managing and interacting with vLLM servers (github.com/vllm-project/vllm). Supports GPU and CPU modes, with special optimizations for macOS Apple Silicon and enterprise deployment on OpenShift/Kubernetes.
Agentic-Ready with MCP Support
MCP (Model Context Protocol) integration enables models to use external tools with human-in-the-loop approval.
Tool Calling Support
Structured Outputs Support
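As a concrete illustration of structured outputs, vLLM's OpenAI-compatible server accepts a `guided_json` field on `/v1/chat/completions` that constrains the response to a JSON Schema. The sketch below only builds the request body; the model name and endpoint URL are placeholders, not playground defaults.

```python
import json

# Hedged sketch: a JSON Schema the server would force the reply to match.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "temperature_c": {"type": "number"},
    },
    "required": ["city", "temperature_c"],
}

payload = {
    "model": "meta-llama/Llama-3.2-3B-Instruct",  # placeholder model
    "messages": [{"role": "user", "content": "Weather in Paris, as JSON."}],
    "guided_json": schema,  # vLLM extension to the OpenAI chat API
}

# With a vLLM server running, something like
#   requests.post("http://localhost:8000/v1/chat/completions", json=payload)
# would return a completion guaranteed to parse against `schema`.
print(json.dumps(payload["guided_json"]["required"]))
```

The same request shape works for `guided_regex` and `guided_grammar`, which the playground exposes as the Regex and Grammar modes.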
What's New in v0.1.1
- MCP Integration - Connect to MCP servers for agentic capabilities
- Runtime Detection - Auto-detect Podman, Docker, and vLLM
- Human-in-the-Loop - Safe tool execution with manual approval
See Changelog for full details.
Quick Start
# Install from PyPI
pip install vllm-playground
# Pre-download container image (~10GB for GPU)
vllm-playground pull
# Start the playground
vllm-playground
Open http://localhost:7860 and click "Start Server" - that's it!
CLI Options
vllm-playground pull # Pre-download GPU image
vllm-playground pull --cpu # Pre-download CPU image
vllm-playground --port 8080 # Custom port
vllm-playground stop # Stop running instance
vllm-playground status # Check status
Key Features
| Feature | Description |
|---|---|
| Modern Chat UI | Streamlined ChatGPT-style interface with streaming responses |
| Tool Calling | Function calling with Llama, Mistral, Qwen, and more |
| MCP Integration | Connect to MCP servers for agentic capabilities |
| Structured Outputs | Constrain responses to JSON Schema, Regex, or Grammar |
| Container Mode | Zero-setup vLLM via automatic container management |
| OpenShift/K8s | Enterprise deployment with dynamic pod creation |
| Benchmarking | GuideLLM integration for load testing |
| Recipes | One-click configs from vLLM community recipes |
Installation Options
| Method | Command | Best For |
|---|---|---|
| PyPI | `pip install vllm-playground` | Most users |
| With Benchmarking | `pip install vllm-playground[benchmark]` | Load testing |
| From Source | `git clone` + `python run.py` | Development |
| OpenShift/K8s | `./openshift/deploy.sh` | Enterprise |
See Installation Guide for detailed instructions.
Configuration
Tool Calling
Enable in Server Configuration before starting:
- Check "Enable Tool Calling"
- Select parser (or "Auto-detect")
- Start server
- Define tools in the toolbar panel
Supported Models:
- Llama 3.x (`llama3_json`)
- Mistral (`mistral`)
- Qwen (`hermes`)
- Hermes (`hermes`)
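The tools you define in the toolbar panel end up as OpenAI-style function definitions in the chat request, since vLLM's chat endpoint accepts the standard `tools` format. A hedged sketch, with a made-up tool name and placeholder model:

```python
import json

# Illustrative tool definition (name and parameters are invented here);
# the structure matches the OpenAI function-calling schema vLLM accepts.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

request_body = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    "messages": [{"role": "user", "content": "Weather in Tokyo?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call a tool
}

# With a server started with "Enable Tool Calling" and a matching parser,
# this body would be POSTed to the vLLM /v1/chat/completions endpoint,
# and the reply would carry `tool_calls` instead of plain content.
print(json.dumps(request_body["tools"][0]["function"]["name"]))
```

The parser selected at startup (e.g. `llama3_json` or `hermes`) only affects how vLLM decodes the model's tool-call output; the request format above is the same for all of them.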
MCP Servers
Connect to external tools via Model Context Protocol:
- Go to MCP Servers in the sidebar
- Add a server (presets available: Filesystem, Git, Fetch, Time)
- Connect and enable in chat panel
⚠️ MCP requires Python 3.10+
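Under the hood, an MCP server is just a process speaking the Model Context Protocol, usually over stdio. The Filesystem preset, for instance, corresponds to the official `@modelcontextprotocol/server-filesystem` package; in the JSON configuration style many MCP clients use, such a server entry looks roughly like this (the exact format the playground stores may differ):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```

The directory argument limits which paths the server will expose to the model, which pairs naturally with the playground's manual tool approval.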
CPU Mode (macOS)
Edit config/vllm_cpu.env:
export VLLM_CPU_KVCACHE_SPACE=40
export VLLM_CPU_OMP_THREADS_BIND=auto
Documentation
Getting Started
- Installation Guide - All installation methods
- Quick Start - Get running in minutes
- macOS CPU Guide - Apple Silicon setup
Features
- Features Overview - Complete feature list
- Gated Models Guide - Access Llama, Gemma, etc.
Deployment
- OpenShift/K8s Deployment - Enterprise deployment
- Architecture Overview - System design
- Container Variants - Container options
Reference
- Troubleshooting - Common issues
- Performance Metrics - Benchmarking
- Command Reference - CLI cheat sheet
Releases
- Changelog - Version history and changes
- v0.1.1 - MCP integration, runtime detection
- v0.1.0 - First release, modern UI, tool calling
Architecture
┌──────────────────┐
│   User Browser   │
└─────────┬────────┘
          │ http://localhost:7860
          ▼
┌──────────────────┐
│  Web UI (Host)   │ ← FastAPI + JavaScript
└─────────┬────────┘
          │
     ┌────┴────┐
     │         │
┌────┴────┐ ┌──┴──────┐
│  vLLM   │ │  MCP    │ ← Containers / External Servers
│Container│ │ Servers │
└─────────┘ └─────────┘
See Architecture Overview for details.
Quick Troubleshooting
| Issue | Solution |
|---|---|
| Port in use | `vllm-playground stop` |
| Container won't start | `podman logs vllm-service` |
| Tool calling fails | Restart with "Enable Tool Calling" checked |
| Image pull errors | `vllm-playground pull --all` |
See Troubleshooting Guide for more.
Related Projects
- vLLM - High-throughput LLM serving
- LLMCompressor Playground - Model compression & quantization
- GuideLLM - Performance benchmarking
- MCP Servers - Official MCP servers
License
Apache 2.0 License - See LICENSE file for details.
Contributing
Contributions welcome! Please feel free to submit issues and pull requests.
Made with ❤️ for the vLLM community
File details
Details for the file vllm_playground-0.1.1.tar.gz.
File metadata
- Download URL: vllm_playground-0.1.1.tar.gz
- Upload date:
- Size: 4.6 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.17
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `cd2f4b09f51b5200fd9d0dfffb97b6f3c450592a40a5312c8956b9576b58d4eb` |
| MD5 | `53038f1acf1744cf7c5af58a44e090ce` |
| BLAKE2b-256 | `6df05ed2bb314f21b67bd2e4580944922e908f1c97849a9f6fe37dd7ead8e0a3` |
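To check a downloaded file against a published digest, you can stream it through SHA-256; a minimal sketch using only the standard library (the filename in the comment is the sdist above):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large downloads aren't loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published on this page, e.g.:
#   sha256_of("vllm_playground-0.1.1.tar.gz") ==
#   "cd2f4b09f51b5200fd9d0dfffb97b6f3c450592a40a5312c8956b9576b58d4eb"
```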
File details
Details for the file vllm_playground-0.1.1-py3-none-any.whl.
File metadata
- Download URL: vllm_playground-0.1.1-py3-none-any.whl
- Upload date:
- Size: 4.6 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.17
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `374135d663bcb2a6d9a63b1616dfb7e4b9021d6b4cc11937ac946a40429f9bda` |
| MD5 | `97cc49b04668628098304c9a6bb7f975` |
| BLAKE2b-256 | `741db4807c5d42aec92892674de1505f30595672cd61a6ef22e334a41d0129aa` |