
A high-throughput and memory-efficient inference and serving engine for LLMs

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |

🔥 We have built a website to help you get started with vLLM. Please visit vllm.ai to learn more, and vllm.ai/events for upcoming events.


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has grown into one of the most active open-source AI projects, built and maintained by a diverse community spanning dozens of academic institutions and companies, with over 2,000 contributors.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests, chunked prefill, prefix caching
  • Fast and flexible model execution with piecewise and full CUDA/HIP graphs
  • Quantization: FP8, MXFP8/MXFP4, NVFP4, INT8, INT4, GPTQ/AWQ, GGUF, compressed-tensors, ModelOpt, TorchAO, and more (see the sketch after this list)
  • Optimized attention kernels including FlashAttention, FlashInfer, TRTLLM-GEN, FlashMLA, and Triton
  • Optimized GEMM/MoE kernels for various precisions using CUTLASS, TRTLLM-GEN, CuTeDSL
  • Speculative decoding including n-gram, suffix, EAGLE, DFlash
  • Automatic kernel generation and graph-level transformations using torch.compile
  • Disaggregated prefill, decode, and encode
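
As a quick illustration of the quantization support, here is a minimal sketch of loading a pre-quantized checkpoint through the Python API. The model name is only an example; for most pre-quantized formats vLLM also auto-detects the quantization method from the checkpoint config, so the explicit argument is optional.

from vllm import LLM

# Sketch: serve an AWQ-quantized checkpoint (example model name).
# The `quantization` argument is usually optional, since vLLM detects
# the method from the model config for most pre-quantized checkpoints.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-AWQ", quantization="awq")
print(llm.generate("Hello, my name is")[0].outputs[0].text)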

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data, expert, and context parallelism for distributed inference
  • Streaming outputs
  • Generation of structured outputs using xgrammar or guidance
  • Tool calling and reasoning parsers
  • OpenAI-compatible API server, plus Anthropic Messages API and gRPC support (see the example after this list)
  • Efficient multi-LoRA support for dense and MoE layers
  • Support for NVIDIA GPUs, AMD GPUs, and x86/ARM/PowerPC CPUs, plus hardware plugins for Google TPUs, Intel Gaudi, IBM Spyre, Huawei Ascend, Rebellions NPU, Apple Silicon, MetaX GPU, and more
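
For example, the OpenAI-compatible server can be started from the command line and queried with the official openai client. The model name here is just an example, and the server listens on port 8000 by default.

# Start the server in a shell first, e.g.:
#   vllm serve Qwen/Qwen2.5-1.5B-Instruct
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)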

vLLM seamlessly supports 200+ model architectures on Hugging Face, including:

  • Decoder-only LLMs (e.g., Llama, Qwen, Gemma)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V3, Qwen-MoE, GPT-OSS)
  • Hybrid attention and state-space models (e.g., Mamba, Qwen3.5)
  • Multi-modal models (e.g., LLaVA, Qwen-VL, Pixtral)
  • Embedding and retrieval models (e.g., E5-Mistral, GTE, ColBERT); see the sketch below
  • Reward and classification models (e.g., Qwen-Math)

Find the full list of supported models here.
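
As a sketch of the embedding-model support: in recent vLLM versions, task="embed" selects the pooling runner instead of text generation. The model name is an example; any supported pooling model works.

from vllm import LLM

# Example embedding model; task="embed" enables pooling output.
llm = LLM(model="intfloat/e5-mistral-7b-instruct", task="embed")
(out,) = llm.embed(["vLLM supports embedding models too."])
print(len(out.outputs.embedding))  # embedding dimensionality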

Getting Started

Install vLLM with uv (recommended) or pip:

uv pip install vllm

or, with pip:

pip install vllm

Or build from source for development.
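
As a quick sanity check after installing, here is a minimal offline-inference sketch. The model name is just an example; any model from the supported list above works.

from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")  # example model
params = SamplingParams(temperature=0.8, max_tokens=64)

for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)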

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub Issues
  • For discussing with fellow users, please use the vLLM Forum
  • For coordinating contributions and development, please use Slack
  • For security disclosures, please use GitHub's Security Advisories feature
  • For collaborations and partnerships, please contact us at collaboration@vllm.ai

Media Kit


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vllm-0.21.0.tar.gz (34.5 MB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

vllm-0.21.0-cp38-abi3-manylinux_2_24_x86_64.whl (248.2 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.24+), x86-64

vllm-0.21.0-cp38-abi3-manylinux_2_24_aarch64.whl (239.8 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.24+), ARM64

vllm-0.21.0-1-cp38-abi3-manylinux_2_24_x86_64.whl (248.2 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.24+), x86-64

vllm-0.21.0-1-cp38-abi3-manylinux_2_24_aarch64.whl (239.8 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.24+), ARM64

File details

Details for the file vllm-0.21.0.tar.gz.

File metadata

  • Download URL: vllm-0.21.0.tar.gz
  • Size: 34.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for vllm-0.21.0.tar.gz
  • SHA256: 05ff89c3e926b88b77d7878e317a659ffba678afc21c1d48952037aa5457f058
  • MD5: 0d53053b94a161bc939a0c39a8887149
  • BLAKE2b-256: 97bb8dbba4136f6851470f4324ac665affe55c0b618341ccc42f35a53c5e708e

See more details on using hashes here.
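
To check a downloaded file against the digests listed here, one option is Python's hashlib (hashlib.file_digest requires Python 3.11+):

import hashlib

# Compare against the SHA256 digest published above for the sdist.
with open("vllm-0.21.0.tar.gz", "rb") as f:
    digest = hashlib.file_digest(f, "sha256").hexdigest()
print(digest == "05ff89c3e926b88b77d7878e317a659ffba678afc21c1d48952037aa5457f058")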

File details

Details for the file vllm-0.21.0-cp38-abi3-manylinux_2_24_x86_64.whl.


File hashes

Hashes for vllm-0.21.0-cp38-abi3-manylinux_2_24_x86_64.whl
  • SHA256: b241b085742cf04a68c82c089d12afe4d9ee729e0c7f81b2b2b9961d36105ee5
  • MD5: 2cb35ddaf1feb8999ff9771bade68ed0
  • BLAKE2b-256: a8628cbf7c943b0aca0538d0f5324848a3f256b8284dd4d881cd65ae106c83d7


File details

Details for the file vllm-0.21.0-cp38-abi3-manylinux_2_24_aarch64.whl.


File hashes

Hashes for vllm-0.21.0-cp38-abi3-manylinux_2_24_aarch64.whl
  • SHA256: d6e63955b595bd2aa364e90f85c0a2e99573e701146db58394da569ddc6f4eea
  • MD5: 0ef9f63582a030f0b456243aa5226dcd
  • BLAKE2b-256: 59aed78ef0ed561974ea61c6e0786771d3a2a575e22592bd58f2ed52417b9aa2


File details

Details for the file vllm-0.21.0-1-cp38-abi3-manylinux_2_24_x86_64.whl.


File hashes

Hashes for vllm-0.21.0-1-cp38-abi3-manylinux_2_24_x86_64.whl
  • SHA256: f4a75b1391f44c67dc1ca268f5ffed9f6b7fdbc657c93db64e6892c5d1bc320b
  • MD5: 4a63abfba8144aec3906daec07fda75d
  • BLAKE2b-256: 736d9b78990c9fabc70c7731de6af246a420156dc019f66b48da7c86f509c132


File details

Details for the file vllm-0.21.0-1-cp38-abi3-manylinux_2_24_aarch64.whl.


File hashes

Hashes for vllm-0.21.0-1-cp38-abi3-manylinux_2_24_aarch64.whl
  • SHA256: dc62135a50dc4b412b4f79549208e782f1665e49e8c13c2d29d2c3d94ff8ac97
  • MD5: 184dae3357ec595709e5400c0926fb52
  • BLAKE2b-256: ac58564b64d17dde6dc31faae836f98313538c152edf88e2a4fb43b9d551a635

