
A high-throughput and memory-efficient inference and serving engine for LLMs

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone

| Documentation | Blog | Paper | Twitter/X | User Forum | Developer Slack |

🔥 We have built a vLLM website to help you get started with vLLM: visit vllm.ai to learn more, and vllm.ai/events for upcoming events.


About

vLLM is a fast and easy-to-use library for LLM inference and serving.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has grown into one of the most active open-source AI projects, built and maintained by a diverse community spanning many dozens of academic institutions and companies, with over 2,000 contributors.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests, chunked prefill, prefix caching (see the configuration sketch after this list)
  • Fast and flexible model execution with piecewise and full CUDA/HIP graphs
  • Quantization: FP8, MXFP8/MXFP4, NVFP4, INT8, INT4, GPTQ/AWQ, GGUF, compressed-tensors, ModelOpt, TorchAO, and more
  • Optimized attention kernels including FlashAttention, FlashInfer, TRTLLM-GEN, FlashMLA, and Triton
  • Optimized GEMM/MoE kernels for various precisions using CUTLASS, TRTLLM-GEN, and CuTeDSL
  • Speculative decoding, including n-gram, suffix, EAGLE, and DFlash
  • Automatic kernel generation and graph-level transformations using torch.compile
  • Disaggregated prefill, decode, and encode
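
A few of the features above, such as chunked prefill, prefix caching, and quantization, are exposed as engine arguments on the offline LLM entry point. Below is a minimal, illustrative sketch; the model name and the specific values are assumptions for demonstration, not recommendations:

from vllm import LLM

# Illustrative engine configuration; the model and values are placeholders.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    gpu_memory_utilization=0.90,   # fraction of GPU memory for weights and KV cache
    enable_chunked_prefill=True,   # split long prefills into chunks interleaved with decodes
    enable_prefix_caching=True,    # reuse KV cache for prompts that share a prefix
    quantization="fp8",            # assumes an FP8-capable GPU; omit for full precision
)

The same engine arguments are generally mirrored as command-line flags when launching a server.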

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data, expert, and context parallelism for distributed inference
  • Streaming outputs
  • Generation of structured outputs using xgrammar or guidance
  • Tool calling and reasoning parsers
  • OpenAI-compatible API server, plus Anthropic Messages API and gRPC support (see the client sketch after this list)
  • Efficient multi-LoRA support for dense and MoE layers
  • Support for NVIDIA GPUs, AMD GPUs, and x86/ARM/PowerPC CPUs, with hardware plugins adding support for Google TPUs, Intel Gaudi, IBM Spyre, Huawei Ascend, Rebellions NPU, Apple Silicon, MetaX GPU, and more
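
For online serving, the OpenAI-compatible server is typically started with vllm serve <model> and can then be queried with any OpenAI client. A minimal sketch using the openai Python package against a locally running server; the model name and the default port 8000 are assumptions for illustration:

# Assumes a server started with, for example:
#   vllm serve Qwen/Qwen2.5-1.5B-Instruct
from openai import OpenAI

# The server exposes an OpenAI-compatible API under /v1 on port 8000 by default;
# the API key is only checked if the server was started with --api-key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    messages=[{"role": "user", "content": "Summarize vLLM in one sentence."}],
)
print(response.choices[0].message.content)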

vLLM seamlessly supports 200+ model architectures on Hugging Face, including:

  • Decoder-only LLMs (e.g., Llama, Qwen, Gemma)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V3, Qwen-MoE, GPT-OSS)
  • Hybrid attention and state-space models (e.g., Mamba, Qwen3.5)
  • Multi-modal models (e.g., LLaVA, Qwen-VL, Pixtral)
  • Embedding and retrieval models (e.g., E5-Mistral, GTE, ColBERT)
  • Reward and classification models (e.g., Qwen-Math)

Find the full list of supported models here.

Getting Started

Install vLLM with uv (recommended) or pip:

uv pip install vllm

Or build from source for development.
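
After installation, a quick way to verify the setup is offline batch inference with the LLM API. A minimal sketch; the small model below is just an example, and any supported Hugging Face model can be substituted:

from vllm import LLM, SamplingParams

# Small model for a quick smoke test; any supported model name works here.
llm = LLM(model="facebook/opt-125m")

# How completions are sampled for each prompt.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["Hello, my name is", "The future of AI is"]
outputs = llm.generate(prompts, params)

for out in outputs:
    # Each result keeps the original prompt and its generated continuation(s).
    print(out.prompt, "->", out.outputs[0].text)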

Visit our documentation to learn more.

Contributing

We welcome and value any contributions and collaborations. Please check out Contributing to vLLM for how to get involved.

Citation

If you use vLLM for your research, please cite our paper:

@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}

Contact Us

  • For technical questions and feature requests, please use GitHub Issues
  • For discussions with fellow users, please use the vLLM Forum
  • For coordinating contributions and development, please use Slack
  • For security disclosures, please use GitHub's Security Advisories feature
  • For collaborations and partnerships, please contact us at collaboration@vllm.ai

Media Kit


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vllm-0.20.0.tar.gz (33.5 MB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

vllm-0.20.0-cp38-abi3-manylinux_2_35_x86_64.whl (244.4 MB)

Uploaded: CPython 3.8+, manylinux: glibc 2.35+, x86-64

vllm-0.20.0-cp38-abi3-manylinux_2_35_aarch64.whl (235.8 MB)

Uploaded: CPython 3.8+, manylinux: glibc 2.35+, ARM64

File details

Details for the file vllm-0.20.0.tar.gz.

File metadata

  • Download URL: vllm-0.20.0.tar.gz
  • Upload date:
  • Size: 33.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for vllm-0.20.0.tar.gz
  • SHA256: a6d50152936ee292455af3ffbe359f7a284ac43bf3b68caccf29f368e196cc72
  • MD5: 5949148f93f0a4eaa6265f60b6ca4ba7
  • BLAKE2b-256: e7809798ce5e16af5754183ef33a63dc27017e2b51c87f51cc741832ce47a2d5

See more details on using hashes here.
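
If you download a distribution manually, you can recompute its SHA256 digest and compare it against the value listed above before installing. A minimal sketch using only the Python standard library, with the sdist from this release as the example file:

import hashlib

# Expected SHA256 for vllm-0.20.0.tar.gz, as listed above.
EXPECTED = "a6d50152936ee292455af3ffbe359f7a284ac43bf3b68caccf29f368e196cc72"

sha256 = hashlib.sha256()
with open("vllm-0.20.0.tar.gz", "rb") as f:
    # Hash in 1 MiB chunks so the whole file never has to sit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

print("OK" if sha256.hexdigest() == EXPECTED else "HASH MISMATCH")

pip can also enforce these digests automatically when installing from a requirements file with --require-hashes.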

File details

Details for the file vllm-0.20.0-cp38-abi3-manylinux_2_35_x86_64.whl.

File metadata

File hashes

Hashes for vllm-0.20.0-cp38-abi3-manylinux_2_35_x86_64.whl
  • SHA256: 24d28892e210200f6e1bd13f699c42a74cd2bb7364c11248e2348f677c7f6dfb
  • MD5: ef44dc1846fa5be3e86d8d1ee3e2c256
  • BLAKE2b-256: 47bbcb02d1e9679fce892a674f86caee25acc9ddd64d7dafa4cfe29e899993a8

See more details on using hashes here.

File details

Details for the file vllm-0.20.0-cp38-abi3-manylinux_2_35_aarch64.whl.

File metadata

File hashes

Hashes for vllm-0.20.0-cp38-abi3-manylinux_2_35_aarch64.whl
  • SHA256: 29a135ca0d70650f057f15c7c0b560d24659524c771f70fbddc24597c861c118
  • MD5: 4dbfeb23175a76fe2490b019c66a946b
  • BLAKE2b-256: 635b26379d3c522379373e50b9f77adf55eb94f4a0f62a6c8e3e7fe3f0bf0d39

See more details on using hashes here.
