A high-throughput and memory-efficient inference and serving engine for LLMs
Project description
Easy, fast, and cheap LLM serving for everyone
| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |
🔥 We have built a vLLM website to help you get started with vLLM. Please visit vllm.ai to learn more, and vllm.ai/events for upcoming events.
About
vLLM is a fast and easy-to-use library for LLM inference and serving.
Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graph
- Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
- Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
- Speculative decoding
- Chunked prefill
vLLM is flexible and easy to use with:
- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
- Tensor, pipeline, data and expert parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server
- Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, Arm CPUs, and TPU. Additionally, support for diverse hardware plugins such as Intel Gaudi, IBM Spyre and Huawei Ascend.
- Prefix caching support
- Multi-LoRA support
vLLM seamlessly supports most popular open-source models on HuggingFace, including:
- Transformer-like LLMs (e.g., Llama)
- Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and V3)
- Embedding Models (e.g., E5-Mistral)
- Multi-modal LLMs (e.g., LLaVA)
Find the full list of supported models here.
Getting Started
Install vLLM with pip or from source:
pip install vllm
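For a quick sanity check after installation, a minimal offline-inference sketch looks like this (the model name is only an example):
from vllm import LLM, SamplingParams

# Load a small model and generate a short completion offline (no server needed).
llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)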
Workspace Ascend Plugin (vllm-ascend)
For this multi-root workspace, you can install the local vllm-ascend checkout as a platform plugin for vllm-hust with:
cd /home/shuhao/vllm-ascend-hust
bash scripts/install_local_ascend_plugin.sh
If your vllm-ascend repo is in a different location:
bash /home/shuhao/vllm-ascend-hust/scripts/install_local_ascend_plugin.sh /path/to/vllm-ascend
This script installs vllm-ascend in editable mode and verifies that the entry points under vllm.platform_plugins are discoverable.
It defaults to lightweight mode (COMPILE_CUSTOM_KERNELS=0, --no-deps), so you can wire the plugin into the workspace even when the Ascend custom-op toolchain is not fully configured.
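To double-check that the plugin entry point is visible from the active Python environment (a quick sketch, separate from the install script):
from importlib.metadata import entry_points

# List all plugins registered under the vllm.platform_plugins entry-point group.
plugins = entry_points(group="vllm.platform_plugins")
print([ep.name for ep in plugins])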
Avoid Mixed Ascend Runtime (Recommended)
To avoid mixing multiple CANN/Ascend toolkit trees in one shell session, always source a single runtime first:
cd /home/shuhao/vllm-ascend-hust
source scripts/use_single_ascend_env.sh /usr/local/Ascend/ascend-toolkit.bak.8.1/latest
The script also sources /usr/local/Ascend/nnal/atb/set_env.sh to ensure the ATB operator runtime variables are configured. If that file is missing, install the NNAL/ATB package first.
Then run the benchmark through the wrapper (it sources the same environment script internally):
bash /home/shuhao/vllm-ascend-hust/scripts/run_ascend_latency_bench.sh /usr/local/Ascend/ascend-toolkit.bak.8.1/latest
If you omit the path, the scripts fall back to a default toolkit root suitable for this workspace.
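As a quick sanity check that only one toolkit tree is active after sourcing (a hypothetical helper, not part of the repo scripts), you can inspect the relevant environment variables:
import os

# Print the Ascend home and count distinct toolkit roots on the library path.
print("ASCEND_HOME_PATH =", os.environ.get("ASCEND_HOME_PATH", "<unset>"))
ld_entries = os.environ.get("LD_LIBRARY_PATH", "").split(":")
toolkit_roots = {p for p in ld_entries if "ascend-toolkit" in p}
print("distinct ascend-toolkit entries on LD_LIBRARY_PATH:", len(toolkit_roots))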
One-Click Ascend Bootstrap
To make local Ascend deployment closer to a one-command flow, use:
cd /home/shuhao/vllm-ascend-hust
bash scripts/bootstrap_ascend.sh Qwen/Qwen2.5-1.5B-Instruct
Separate Local OpenAI Server Command
If you only want to start the local vllm-hust OpenAI-compatible server on Ascend, use the native vllm-hust serve command directly instead of going through the workspace bootstrap:
By default, vllm-hust auto-injects minimal Ascend runtime paths at import time (ASCEND_HOME_PATH, LD_LIBRARY_PATH, PATH), so pip-installed users can start directly without manually sourcing set_env.sh in common single-toolkit setups.
If you prefer strict manual environment control, disable this behavior with:
export VLLM_ASCEND_AUTO_ENV=0
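If you drive the process from Python instead of a shell, the same opt-out can be set before the library is imported (a sketch; the variable name is taken from above):
import os

# Disable automatic Ascend runtime path injection; set this before importing vllm-hust.
os.environ["VLLM_ASCEND_AUTO_ENV"] = "0"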
For multi-toolkit or customized runtime setups, manual sourcing is still recommended:
cd /home/shuhao/vllm-ascend-hust
source scripts/use_single_ascend_env.sh /usr/local/Ascend/ascend-toolkit.bak.8.1/latest
export PYTHONPATH="/usr/local/Ascend/ascend-toolkit.bak.8.1/latest/python/site-packages:${PYTHONPATH:-}"
vllm-hust serve Qwen/Qwen2.5-1.5B-Instruct \
--host 0.0.0.0 \
--port 8080 \
--enforce-eager \
-cc.cudagraph_mode=0 \
--enable-auto-tool-choice \
--tool-call-parser pythonic \
--no-enable-prefix-caching \
--no-enable-chunked-prefill
If you are serving a local snapshot, replace the model argument directly:
vllm-hust serve /path/to/local/model \
--host 0.0.0.0 \
--port 8080 \
--enforce-eager \
-cc.cudagraph_mode=0 \
--enable-auto-tool-choice \
--tool-call-parser pythonic \
--no-enable-prefix-caching \
--no-enable-chunked-prefill
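Once the server is up, any OpenAI-compatible client can talk to it; a minimal sketch with the openai Python package, assuming the host/port and model from the command above:
from openai import OpenAI

# Talk to the local vllm-hust OpenAI-compatible server started above.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # use the model name or path you passed to serve
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)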
Manager integration defaults:
- manager repo path: /home/shuhao/vllm-hust-dev-hub/ascend-runtime-manager
- manager PyPI package: hust-ascend-manager
- disable manager: HUST_DISABLE_ASCEND_MANAGER=1
- manager strict mode: HUST_MANAGER_STRICT=1
- manager system install steps: HUST_MANAGER_APPLY_SYSTEM=1
- manager PyPI override: HUST_ASCEND_MANAGER_PYPI_SPEC='hust-ascend-manager==0.1.0'
If the active Python environment cannot import vllm-hust cleanly, use
ascend-runtime-manager as the single runtime repair entrypoint instead of
manually guessing a sequence of pip install commands:
cd /home/shuhao/vllm-hust-dev-hub/ascend-runtime-manager
PYTHONPATH=src python -m hust_ascend_manager.cli runtime check --repo /home/shuhao/vllm-hust
PYTHONPATH=src python -m hust_ascend_manager.cli runtime repair --repo /home/shuhao/vllm-hust
runtime repair is responsible for reconciling the Python-side runtime only:
- torch, transformers, tokenizers, huggingface_hub
- requirements/common.txt and requirements/build.txt
- local editable rebuild against the currently selected Python / torch env
It does not replace host-level fixes such as drivers, CANN / NNAL / ATB system packages, model downloads, or public ingress setup.
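After a repair, a small import check over the Python-side packages listed above (a sketch, not part of the manager CLI) confirms the environment is importable:
import importlib

# Import each reconciled package and print its version.
for name in ("torch", "transformers", "tokenizers", "huggingface_hub"):
    module = importlib.import_module(name)
    print(name, getattr(module, "__version__", "unknown"))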
If you need strict npugraph_ex validation, set HUST_REQUIRE_NPUGRAPH=1
before running the script.
Visit our documentation to learn more.
Contributing
We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.
Citation
If you use vLLM for your research, please cite our paper:
@inproceedings{kwon2023efficient,
title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
year={2023}
}
Contact Us
- For technical questions and feature requests, please use GitHub Issues
- For discussing with fellow users, please use the vLLM Forum
- For coordinating contributions and development, please use Slack
- For security disclosures, please use GitHub's Security Advisories feature
- For collaborations and partnerships, please contact us at collaboration@vllm.ai
Media Kit
- If you wish to use vLLM's logo, please refer to our media kit repo
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
File details
Details for the file vllm_hust-0.17.2.post1.tar.gz.
File metadata
- Download URL: vllm_hust-0.17.2.post1.tar.gz
- Upload date:
- Size: 31.3 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4d5de496f5109a312ff217cb267e0a1fe244fdba58e9a1603e317f9d9969fef3 |
| MD5 | fe84b10b260ead76fca2cd620a2399ac |
| BLAKE2b-256 | 2bf38aa223abdef5351278916589b2096cf01359dc93e09ffbf3ad340926af13 |