vllm-autoconfig

Automatic configuration planner for vLLM with PyTorch-based GPU probing and intelligent memory management. It eliminates the guesswork of configuring vLLM by automatically determining optimal parameters based on your GPU hardware and model requirements.
Features
- Zero-configuration vLLM setup: Automatically calculates optimal `max_model_len`, `gpu_memory_utilization`, and other vLLM parameters
- Hardware-aware planning: Probes GPU memory and capabilities using PyTorch to ensure configurations fit your hardware
- Model-specific optimizations: Applies model-family-specific settings (Mistral, Llama, Qwen, etc.)
- KV cache sizing: Intelligently calculates memory requirements for attention key-value caches
- Configuration caching: Saves computed plans to avoid redundant calculations
- Performance modes: Choose between `throughput` and `latency` optimization strategies
- FP8 KV cache support: Automatically enables FP8 quantization for KV caches when beneficial
- Simple API: Just specify your model name and desired context length - everything else is handled automatically
Installation

```bash
pip install vllm-autoconfig
```
Requirements:
- Python >= 3.10
- PyTorch with CUDA support
- vLLM
- Access to CUDA-capable GPU(s)
Quick Start
Python API
```python
from vllm_autoconfig import AutoVLLMClient, SamplingConfig

# Initialize with your model and desired context length
client = AutoVLLMClient(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    context_len=1024,  # The ONLY parameter you need to set!
)

# Prepare your prompts
prompts = [
    {
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
        "metadata": {"id": 1},
    }
]

# Run inference
results = client.run_batch(
    prompts,
    SamplingConfig(max_tokens=100, temperature=0.7)
)
print(results)

client.close()
```
Advanced Usage
```python
from vllm_autoconfig import AutoVLLMClient, SamplingConfig

# Fine-tune the configuration
client = AutoVLLMClient(
    model_name="mistralai/Mistral-7B-Instruct-v0.3",
    context_len=2048,
    perf_mode="latency",        # or "throughput" (default)
    prefer_fp8_kv_cache=True,   # Enable FP8 KV cache if supported
    trust_remote_code=False,    # Set to True for models that require custom code
    debug=True,                 # Enable detailed logging
)

# Check the computed plan
print(f"Plan cache key: {client.plan.cache_key}")
print(f"vLLM kwargs: {client.plan.vllm_kwargs}")
print(f"Notes: {client.plan.notes}")

# Run inference with custom sampling (prompts as in the Quick Start example)
sampling = SamplingConfig(
    temperature=0.8,
    top_p=0.95,
    max_tokens=256,
    stop=["###", "\n\n"]
)
results = client.run_batch(prompts, sampling)

client.close()
```
How It Works
- GPU Probing: Detects available GPU memory and capabilities (BF16 support, compute capability)
- Model Analysis: Downloads model configuration from HuggingFace Hub and analyzes architecture
- Weight Calculation: Computes actual model weight size from checkpoint files
- Memory Planning: Calculates KV cache memory requirements based on context length and batch size
- Configuration Generation: Produces optimal vLLM initialization parameters within hardware constraints
- Caching: Saves the computed plan for reuse with the same configuration
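To make the planning steps above concrete, here is a minimal sketch of how free GPU memory can be probed with PyTorch and how per-token KV-cache memory can be estimated from a model's geometry. This is illustrative only, not the package's actual `gpu_probe.py`/`kv_math.py` code; the helper names, the example model geometry, and the rough 16 GB weight footprint are assumptions for the example.

```python
import torch

def probe_free_gpu_memory(device_index: int = 0) -> int:
    """Return free GPU memory in bytes via PyTorch's CUDA runtime bindings."""
    free_bytes, _total_bytes = torch.cuda.mem_get_info(device_index)
    return free_bytes

def kv_cache_bytes_per_token(num_layers: int, num_kv_heads: int,
                             head_dim: int, dtype_bytes: int = 2) -> int:
    """Estimate KV-cache bytes per token: one key and one value vector per layer."""
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

# Example geometry similar to Llama-3.1-8B: 32 layers, 8 KV heads, head_dim 128, bf16
per_token = kv_cache_bytes_per_token(num_layers=32, num_kv_heads=8,
                                     head_dim=128, dtype_bytes=2)

# Rough budget left for the KV cache after ~16 GB of bf16 weights (illustrative)
budget = probe_free_gpu_memory(0) - 16e9
max_cacheable_tokens = max(0, int(budget // per_token))
print(f"{per_token} bytes/token -> ~{max_cacheable_tokens} cacheable tokens")
```

Fitting `max_model_len`, batch size, and `gpu_memory_utilization` into such a token budget is the core of the memory-planning step.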
Configuration Parameters

The `AutoVLLMClient` automatically configures:

- `model`: Model name/path
- `max_model_len`: Maximum sequence length
- `gpu_memory_utilization`: GPU memory usage fraction
- `dtype`: Weight precision (bfloat16 or float16)
- `kv_cache_dtype`: KV cache precision (including FP8 when beneficial)
- `enforce_eager`: Whether to use eager mode (affects compilation)
- `trust_remote_code`: Whether to trust remote code execution
- Model-specific parameters (e.g., `tokenizer_mode`, `load_format` for Mistral)
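For a concrete sense of the result, `client.plan.vllm_kwargs` is a plain dict of vLLM engine arguments. The values below are purely illustrative of what a plan might contain for an 8B model on a single consumer GPU; they are not output captured from the tool and will differ on your hardware:

```python
# Illustrative only: actual keys and values depend on your GPU and model.
example_vllm_kwargs = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "max_model_len": 1024,
    "gpu_memory_utilization": 0.90,
    "dtype": "bfloat16",
    "kv_cache_dtype": "fp8",
    "enforce_eager": False,
    "trust_remote_code": False,
}
```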
API Reference
AutoVLLMClient
```python
AutoVLLMClient(
    model_name: str,                  # HuggingFace model name or local path
    context_len: int,                 # Desired context length
    device_index: int = 0,            # GPU device index
    perf_mode: str = "throughput",    # "throughput" or "latency"
    trust_remote_code: bool = False,
    prefer_fp8_kv_cache: bool = False,
    enforce_eager: bool = False,
    local_files_only: bool = False,
    cache_plan: bool = True,          # Cache computed plans
    debug: bool = False,              # Enable debug logging
    vllm_logging_level: str = None,   # vLLM logging level
)
```
SamplingConfig
```python
SamplingConfig(
    temperature: float = 0.0,    # Sampling temperature
    top_p: float = 1.0,          # Nucleus sampling threshold
    max_tokens: int = 32,        # Maximum tokens to generate
    stop: List[str] = None,      # Stop sequences
)
```
Methods
- `run_batch(prompts, sampling, output_field="output")`: Run inference on a batch of prompts
- `close()`: Clean up resources and free GPU memory
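As a usage note, `output_field` names the key under which generated text is attached to each result record. Assuming `run_batch` returns one record per input prompt with the generation stored under that key (an assumption inferred from the prompt/metadata structure in the Quick Start example, not documented behavior), reading results could look like this:

```python
# Continuing from the Quick Start example: client and prompts already defined.
# Assumes each returned record carries the generated text under output_field.
results = client.run_batch(prompts, SamplingConfig(max_tokens=64), output_field="answer")
for record in results:
    print(record["metadata"]["id"], record["answer"])
```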
Project Structure

```
vllm-autoconfig/
├── src/vllm_autoconfig/
│   ├── __init__.py        # Package exports
│   ├── client.py          # AutoVLLMClient implementation
│   ├── planner.py         # Configuration planning logic
│   ├── gpu_probe.py       # GPU detection and probing
│   ├── model_probe.py     # Model analysis utilities
│   ├── kv_math.py         # KV cache memory calculations
│   └── cache.py           # Plan caching utilities
├── examples/
│   └── simple_run.py      # Usage examples
└── pyproject.toml
```
Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Built on top of vLLM - the high-performance LLM inference engine
- Uses HuggingFace Transformers for model configuration
Citation
If you use vllm-autoconfig in your research or production systems, please cite:
```bibtex
@software{vllm_autoconfig,
  title  = {vllm-autoconfig: Automatic Configuration Planning for vLLM},
  author = {Your Name},
  year   = {2024},
  url    = {https://github.com/yourusername/vllm-autoconfig}
}
```
Issues and Support
For issues, questions, or feature requests, please open an issue on GitHub Issues.