
A high-throughput and memory-efficient inference and serving engine for LLMs

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |

🔥 We have built a vLLM website to help you get started. Please visit vllm.ai to learn more, and vllm.ai/events for upcoming events.


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has grown into one of the most active open-source AI projects, built and maintained by a diverse community spanning dozens of academic institutions and companies, with over 2,000 contributors.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests, chunked prefill, prefix caching
  • Fast and flexible model execution with piecewise and full CUDA/HIP graphs
  • Quantization: FP8, MXFP8/MXFP4, NVFP4, INT8, INT4, GPTQ/AWQ, GGUF, compressed-tensors, ModelOpt, TorchAO, and more
  • Optimized attention kernels including FlashAttention, FlashInfer, TRTLLM-GEN, FlashMLA, and Triton
  • Optimized GEMM/MoE kernels for various precisions using CUTLASS, TRTLLM-GEN, CuTeDSL
  • Speculative decoding including n-gram, suffix, EAGLE, DFlash
  • Automatic kernel generation and graph-level transformations using torch.compile
  • Disaggregated prefill, decode, and encode
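
Several of the features above surface directly as engine arguments. As a rough sketch of offline use (the model name is illustrative, and the exact set of flags can vary between vLLM versions), prefix caching, chunked prefill, and FP8 quantization can be enabled when constructing the engine:

from vllm import LLM, SamplingParams

# Illustrative configuration; check the documentation for the flags available in your version.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # any supported Hugging Face model
    quantization="fp8",                        # one of the supported quantization schemes
    enable_prefix_caching=True,                # reuse KV-cache blocks across shared prompt prefixes
    enable_chunked_prefill=True,               # split long prefills into chunks batched with decode steps
)
print(llm.generate(["Hello!"], SamplingParams(max_tokens=32))[0].outputs[0].text)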

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data, expert, and context parallelism for distributed inference
  • Streaming outputs
  • Generation of structured outputs using xgrammar or guidance
  • Tool calling and reasoning parsers
  • OpenAI-compatible API server, plus Anthropic Messages API and gRPC support
  • Efficient multi-LoRA support for dense and MoE layers
  • Support for NVIDIA GPUs, AMD GPUs, and x86/ARM/PowerPC CPUs, plus diverse hardware plugins such as Google TPUs, Intel Gaudi, IBM Spyre, Huawei Ascend, Rebellions NPU, Apple Silicon, MetaX GPU, and more
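
As a concrete example of the OpenAI-compatible server mentioned above, a minimal sketch: start the server with "vllm serve <model>" (it listens on http://localhost:8000/v1 by default) and query it with the official openai Python client. The model name and prompt below are illustrative.

# Assumes a running server, e.g.:  vllm serve Qwen/Qwen2.5-1.5B-Instruct
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key by default

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    messages=[{"role": "user", "content": "What is PagedAttention?"}],
)
print(response.choices[0].message.content)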

vLLM seamlessly supports 200+ model architectures on Hugging Face, including:

  • Decoder-only LLMs (e.g., Llama, Qwen, Gemma)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V3, Qwen-MoE, GPT-OSS)
  • Hybrid attention and state-space models (e.g., Mamba, Qwen3.5)
  • Multi-modal models (e.g., LLaVA, Qwen-VL, Pixtral)
  • Embedding and retrieval models (e.g., E5-Mistral, GTE, ColBERT)
  • Reward and classification models (e.g., Qwen-Math)

Find the full list of supported models here.
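
Beyond text generation, pooling-style models from the list above (embeddings, rerankers, classifiers) run through the same LLM entry point. A minimal embedding sketch, assuming a recent vLLM version (the model name is illustrative, and the task/runner argument has changed across releases, so treat this as an outline rather than an exact recipe):

from vllm import LLM

# Load an embedding model in pooling mode (the argument name may differ by version).
llm = LLM(model="intfloat/e5-mistral-7b-instruct", task="embed")

outputs = llm.embed(["vLLM is a fast inference engine."])
print(len(outputs[0].outputs.embedding))  # dimensionality of the returned vector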

Getting Started

Install vLLM with uv (recommended) or pip:

uv pip install vllm

Or build from source for development.
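
Then generation can be run offline in a few lines. A minimal sketch (the model name is just a small example and is downloaded from Hugging Face on first use):

from vllm import LLM, SamplingParams

prompts = ["Hello, my name is", "The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")

for output in llm.generate(prompts, sampling_params):
    print(output.prompt, "->", output.outputs[0].text)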

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub Issues
  • For discussing with fellow users, please use the vLLM Forum
  • For coordinating contributions and development, please use Slack
  • For security disclosures, please use GitHub's Security Advisories feature
  • For collaborations and partnerships, please contact us at collaboration@vllm.ai

Media Kit


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vllm-0.20.1.tar.gz (33.5 MB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

vllm-0.20.1-cp38-abi3-manylinux_2_35_x86_64.whl (244.4 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.35+), x86-64

vllm-0.20.1-cp38-abi3-manylinux_2_35_aarch64.whl (235.8 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.35+), ARM64

vllm-0.20.1-1-cp38-abi3-manylinux_2_35_x86_64.whl (244.4 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.35+), x86-64

vllm-0.20.1-1-cp38-abi3-manylinux_2_35_aarch64.whl (235.8 MB)

Uploaded: CPython 3.8+, manylinux (glibc 2.35+), ARM64

File details

Details for the file vllm-0.20.1.tar.gz.

File metadata

  • Download URL: vllm-0.20.1.tar.gz
  • Upload date:
  • Size: 33.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for vllm-0.20.1.tar.gz

  • SHA256: 778df065237704d51e8b4cc95bcc6f6a0c2317f13feb38295de3ce76146bf140
  • MD5: cc6851497d57f47eb2023f98d64694b7
  • BLAKE2b-256: 34b432429ebde396149e5570c9a6b2e3b52dc47261d8b20465cc6ae36c4659fb

See more details on using hashes here.
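
To check a downloaded file against the digests above, one simple approach is to recompute the SHA256 locally, for example with Python's hashlib (the file path below assumes the sdist was saved to the current directory):

import hashlib

# Expected SHA256 for vllm-0.20.1.tar.gz, copied from the list above.
EXPECTED = "778df065237704d51e8b4cc95bcc6f6a0c2317f13feb38295de3ce76146bf140"

sha256 = hashlib.sha256()
with open("vllm-0.20.1.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        sha256.update(chunk)

print("OK" if sha256.hexdigest() == EXPECTED else "MISMATCH")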

File details

Details for the file vllm-0.20.1-cp38-abi3-manylinux_2_35_x86_64.whl.

File metadata

File hashes

Hashes for vllm-0.20.1-cp38-abi3-manylinux_2_35_x86_64.whl

  • SHA256: da89341f2a07f49a2571edecbec612071839b04b6d7ea1e3bddb5347d47859df
  • MD5: 87e719f234a0c0aec11af6e74ce9c6a6
  • BLAKE2b-256: 768016ee5d248a795abefd1476d4884394e3d917c68096f8dd696a8216347a74

See more details on using hashes here.

File details

Details for the file vllm-0.20.1-cp38-abi3-manylinux_2_35_aarch64.whl.

File metadata

File hashes

Hashes for vllm-0.20.1-cp38-abi3-manylinux_2_35_aarch64.whl

  • SHA256: 1cfa5f5654e4c7a8a9f949ef7a9597a63b9736129202c96c006434b57f511223
  • MD5: 66ecaa14721a4c497da6ab0457ab9153
  • BLAKE2b-256: a7238b48eeee87d9e5589f11bf5139f768053b90c4f85a33cfb72655a30cd14a

See more details on using hashes here.

File details

Details for the file vllm-0.20.1-1-cp38-abi3-manylinux_2_35_x86_64.whl.

File metadata

File hashes

Hashes for vllm-0.20.1-1-cp38-abi3-manylinux_2_35_x86_64.whl

  • SHA256: 11907857c94c226caf82ada92ab09b1e0dbf538b5b80a93aed74be29e861020c
  • MD5: 2c27282186d3b1e91ffc438c0605cdce
  • BLAKE2b-256: f1ae55bb24db43e02a21fe98d64293a309d7cf67054785ac8e8ff2e60e59789e

See more details on using hashes here.

File details

Details for the file vllm-0.20.1-1-cp38-abi3-manylinux_2_35_aarch64.whl.

File metadata

File hashes

Hashes for vllm-0.20.1-1-cp38-abi3-manylinux_2_35_aarch64.whl

  • SHA256: 4a5c24a94be8413ce682e72d6c22762ac7be64635cf191f4807d30910f928268
  • MD5: a20576d0ec2ee0cdaef5d55479be117a
  • BLAKE2b-256: 6e2a4bee1ddd867895e63bcca4f4ad78048f6a5d96d0ea241cdb70657cce966e

See more details on using hashes here.
