
vLLM CPU inference engine (AVX512 + VNNI optimized)

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone

About

vLLM is a fast and easy-to-use library for LLM inference and serving. This PyPI package ships with VNNI (AVX512 + VNNI) optimized inference built in for supported CPUs.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with VNNI on supported CPUs. Use this package ONLY IF your CPU supports the avx512vnni instruction set (or newer)
  • Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
  • Optimized CPU kernels
  • Speculative decoding
  • Chunked prefill
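PagedAttention manages the KV cache in fixed-size blocks rather than one contiguous buffer per sequence, so memory can be handed out and reclaimed on demand. The toy sketch below illustrates the idea of a per-sequence block table backed by a shared free list; it is a simplified illustration, not vLLM's actual implementation:

```python
# Toy sketch of PagedAttention-style KV-cache paging (NOT vLLM's real code):
# the cache is split into fixed-size blocks, and each sequence owns a block
# table mapping its logical token positions to physical blocks from a pool.

BLOCK_SIZE = 16  # tokens per cache block (illustrative)

class BlockPool:
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # free physical block ids
        self.tables = {}                     # seq_id -> list of physical block ids
        self.lens = {}                       # seq_id -> tokens cached so far

    def append_token(self, seq_id):
        """Reserve cache space for one more token; returns the block used."""
        n = self.lens.get(seq_id, 0)
        table = self.tables.setdefault(seq_id, [])
        if n % BLOCK_SIZE == 0:              # current block full (or first token)
            if not self.free:
                raise MemoryError("KV cache exhausted")
            table.append(self.free.pop())
        self.lens[seq_id] = n + 1
        return table[-1]

    def release(self, seq_id):
        """Return a finished sequence's blocks to the shared pool."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lens.pop(seq_id, None)

pool = BlockPool(num_blocks=4)
for _ in range(17):                          # 17 tokens span two 16-token blocks
    pool.append_token("seq-0")
print(pool.tables["seq-0"])                  # -> [3, 2] (two physical blocks)
```

Because blocks are only allocated as tokens arrive and are returned on completion, many sequences can share one cache pool with little fragmentation, which is what enables continuous batching.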

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data and expert parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server
  • Support for x86_64, PowerPC, and Arm CPUs, plus Apple Silicon (CPU inference only). This package does not support GPU inference; for GPU support, use the official vLLM package on PyPI
  • Prefix caching support
  • Multi-LoRA support
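The OpenAI-compatible server can be started with `vllm serve <model>`. The sketch below shows how a chat-completions request is built with only the standard library; the URL, port, and model name are illustrative placeholders, not fixed by this package, and an actual call requires a running server:

```python
# Sketch of a request to vLLM's OpenAI-compatible server. Assumes a server is
# already running locally (e.g. `vllm serve <model>`); base_url and model
# below are placeholders.
import json
import urllib.request

def chat_request(base_url, model, prompt):
    """Build an OpenAI-style POST request to /v1/chat/completions."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("http://localhost:8000", "meta-llama/Llama-3.2-1B-Instruct",
                   "Hello!")
# urllib.request.urlopen(req) would send it once a server is listening.
print(req.full_url)
```

Any OpenAI-compatible client (including the official `openai` Python package pointed at the local base URL) speaks the same protocol.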

vLLM seamlessly supports most popular open-source models on HuggingFace, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and V3)
  • Embedding Models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models in the vLLM documentation.

Important Notes

Getting Started

Install vLLM with pip or uv

mkdir -p /path/to/vllm
cd /path/to/vllm
uv venv
uv pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu
uv pip install vllm-cpu-avx512vnni

Install uv on Linux using curl:

curl -LsSf https://astral.sh/uv/install.sh | sh

or using wget:

wget -qO- https://astral.sh/uv/install.sh | sh

To install a specific version of uv:

curl -LsSf https://astral.sh/uv/0.9.11/install.sh | sh

vllm-cpu

This CPU-specific vLLM distribution provides five optimized wheel packages built from the upstream vLLM source code:

| Package             | Optimizations              | Target CPUs                                    |
|---------------------|----------------------------|------------------------------------------------|
| vllm-cpu            | Baseline (no AVX512)       | All x86_64 and ARM64 CPUs                      |
| vllm-cpu-avx512     | AVX512                     | Intel Skylake-X and newer                      |
| vllm-cpu-avx512vnni | AVX512 + VNNI              | Intel Cascade Lake and newer                   |
| vllm-cpu-avx512bf16 | AVX512 + VNNI + BF16       | Intel Cooper Lake and newer                    |
| vllm-cpu-amxbf16    | AVX512 + VNNI + BF16 + AMX | Intel Sapphire Rapids (4th Gen Xeon) and newer |

Each package is compiled with specific CPU instruction set flags for optimal inference performance.
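As a rough guide, the table above can be turned into a small selection helper. The sketch below is illustrative, not part of this package; it assumes the flag spellings reported in /proc/cpuinfo on Linux (avx512f, avx512_vnni, avx512_bf16, amx_bf16):

```python
# Sketch: map CPU feature flags (as listed by `lscpu` or /proc/cpuinfo on
# Linux) to the most optimized wheel from the table above.

def pick_wheel(flags):
    """Return the best-matching package name for a space-separated flag list."""
    have = set(flags.split())
    if {"avx512f", "avx512_vnni", "avx512_bf16", "amx_bf16"} <= have:
        return "vllm-cpu-amxbf16"
    if {"avx512f", "avx512_vnni", "avx512_bf16"} <= have:
        return "vllm-cpu-avx512bf16"
    if {"avx512f", "avx512_vnni"} <= have:
        return "vllm-cpu-avx512vnni"
    if "avx512f" in have:
        return "vllm-cpu-avx512"
    return "vllm-cpu"

# Example with a sample flag list (on Linux, pass the real flags line from
# /proc/cpuinfo instead):
print(pick_wheel("fpu sse avx2 avx512f avx512_vnni"))  # -> vllm-cpu-avx512vnni
```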

Check available CPU instruction sets

lscpu | grep -i flags

Example list of CPUs with their supported instruction sets

| CPU Architecture (Intel/AMD)                                         | AVX2 | AVX-512F | VNNI (INT8) | BF16 (via AVX-512) | AMX-BF16 (via Tile Unit) |
|----------------------------------------------------------------------|------|----------|-------------|--------------------|--------------------------|
| Intel 4th Gen Core / AMD Zen 2 and newer                             | Yes  | No       | No          | No                 | No                       |
| Intel Skylake-SP / Skylake-X                                         | Yes  | Yes      | No          | No                 | No                       |
| Intel Cooper Lake (3rd Gen Xeon) / AMD Zen 4 (EPYC), Zen 5 and newer | Yes  | Yes      | Yes         | Yes                | No                       |
| Intel Sapphire Rapids (4th Gen Xeon) and newer                       | Yes  | Yes      | Yes         | Yes                | Yes                      |

Note: currently no AMD CPU supports AMX-BF16; AMD is expected to add AMX-BF16 support starting with Zen 7.
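To check the individual instruction sets from the table on a Linux machine, a small shell sketch (flag names as they appear in /proc/cpuinfo; on non-x86 systems every check simply reports "no"):

```shell
# Sketch: report which of the instruction sets above the current CPU exposes.
# Reads the first "flags" line from /proc/cpuinfo (empty on non-x86/non-Linux).
flags="$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | cut -d: -f2)"

# has FLAG -> prints "yes" if FLAG appears in $flags, else "no"
has() { case " $flags " in *" $1 "*) echo yes ;; *) echo no ;; esac; }

echo "AVX2:     $(has avx2)"
echo "AVX-512F: $(has avx512f)"
echo "VNNI:     $(has avx512_vnni)"
echo "BF16:     $(has avx512_bf16)"
echo "AMX-BF16: $(has amx_bf16)"
```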

Project details


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


vllm_cpu_avx512vnni-0.11.1-cp313-cp313-manylinux_2_17_x86_64.whl (15.3 MB)

Uploaded: CPython 3.13, manylinux: glibc 2.17+, x86-64

vllm_cpu_avx512vnni-0.11.1-cp312-cp312-manylinux_2_17_x86_64.whl (15.3 MB)

Uploaded: CPython 3.12, manylinux: glibc 2.17+, x86-64

vllm_cpu_avx512vnni-0.11.1-cp311-cp311-manylinux_2_17_x86_64.whl (15.3 MB)

Uploaded: CPython 3.11, manylinux: glibc 2.17+, x86-64

vllm_cpu_avx512vnni-0.11.1-cp310-cp310-manylinux_2_17_x86_64.whl (15.3 MB)

Uploaded: CPython 3.10, manylinux: glibc 2.17+, x86-64

File details

Details for the file vllm_cpu_avx512vnni-0.11.1-cp313-cp313-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.11.1-cp313-cp313-manylinux_2_17_x86_64.whl
SHA256:      66878163c4b046b7b0d87ba102032245bc6d7a945ba671cbfd0b85be89ad52aa
MD5:         1aaefc5d50c7418be5d61550c1d0c3e4
BLAKE2b-256: 13ef174fb7deda3707c8674d3b30b260e8acb5762b3bb1b76a8d43db12e2b078


File details

Details for the file vllm_cpu_avx512vnni-0.11.1-cp312-cp312-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.11.1-cp312-cp312-manylinux_2_17_x86_64.whl
SHA256:      62dc6fd7d44b1fee6f64ee631ed396d03d2d162a53b0fdeb8477ca176d66a7f2
MD5:         7b54dd1f33cc1ede31f6718ef35b3102
BLAKE2b-256: a14548540b449b38150167568297574cf647c3073e5ab4e21d64aa4e73d7fc26


File details

Details for the file vllm_cpu_avx512vnni-0.11.1-cp311-cp311-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.11.1-cp311-cp311-manylinux_2_17_x86_64.whl
SHA256:      1cc4cee255d80e15b39e60227802f475f4c81135694d5483c8f070aad4f6204c
MD5:         b8045cda382072c66bdd68a78d6f6b8b
BLAKE2b-256: fdd7d841901e3b22786ec9d26e07a19e59e6c707244ec67b6be7411df3733a74


File details

Details for the file vllm_cpu_avx512vnni-0.11.1-cp310-cp310-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.11.1-cp310-cp310-manylinux_2_17_x86_64.whl
SHA256:      7c977034696b8d3497059eeefc2b824c84f2425f19eb1c8c8e8c84496631c33a
MD5:         6a3f950f769691aa69a6e34fe3c6462a
BLAKE2b-256: 46d3c5353b3d4cffa94179fa8f97de8052827b39d1fa26d5952e68c964f350b4

