
vLLM CPU inference engine (AVX512 + VNNI + BF16 optimized)

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone

About

vLLM is a fast and easy-to-use library for LLM inference and serving. This PyPI package supports only AVX512 + VNNI + AVX512BF16; AMXBF16 is not supported in this package. CPU inference uses the instruction-set accelerations listed above.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with AVX512 + VNNI + AVX512BF16 on supported CPUs. Use this package only if your CPU has avx512bf16 or a newer instruction set.
  • Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8 (see the sketch after this list)
  • Optimized CPU kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill
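
As a sketch of how a quantized checkpoint is selected through the Python API (the model id below is a hypothetical example, and quantization coverage on CPU builds may vary):

# minimal sketch, assuming an AWQ-quantized Hugging Face checkpoint is available;
# the model id is an example, not part of this package
from vllm import LLM

llm = LLM(model="TheBloke/Llama-2-7B-Chat-AWQ", quantization="awq")
print(llm.generate(["Hello"])[0].outputs[0].text)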

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data and expert parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server (see the example after this list)
  • Support for x86_64, PowerPC, Arm, and Apple Silicon CPUs (CPU inference). This package does not support GPU inference; for GPU support, use the official vLLM package on PyPI
  • Prefix caching support
  • Multi-LoRA support
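
For example, the OpenAI-compatible server can be started with the standard vllm CLI and queried with any OpenAI-style client; the model name below is only an example:

# start the server (model name is an example)
vllm serve Qwen/Qwen2.5-0.5B-Instruct --port 8000

# query it from another shell
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-0.5B-Instruct", "prompt": "Hello", "max_tokens": 32}'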

vLLM seamlessly supports most popular open-source models on Hugging Face, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and V3)
  • Embedding Models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models here.

Important Notes

Getting Started

Install vLLM with pip or uv

# create a working directory and a fresh virtual environment
mkdir -p /path/to/vllm
cd /path/to/vllm
uv venv
# install the CPU build of PyTorch first, then this package
uv pip install torch==2.8.0 torchvision --index-url https://download.pytorch.org/whl/cpu
uv pip install vllm-cpu-avx512bf16
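
Once installed, a minimal offline-inference sketch can verify the setup (facebook/opt-125m is just a small example model; any supported Hugging Face model id works):

from vllm import LLM, SamplingParams

# small example model, convenient for a smoke test
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=32)
for out in llm.generate(["Hello, my name is"], params):
    print(out.outputs[0].text)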

Install uv on Linux using curl:

curl -LsSf https://astral.sh/uv/install.sh | sh

or using wget:

wget -qO- https://astral.sh/uv/install.sh | sh

To install a specific version of uv:

curl -LsSf https://astral.sh/uv/0.9.11/install.sh | sh

vllm-cpu

This CPU-specific vLLM project provides five optimized wheel packages built from the upstream vLLM source code:

Package               Optimizations                  Target CPUs
vllm-cpu              Baseline (no AVX512)           All x86_64 and ARM64 CPUs
vllm-cpu-avx512       AVX512                         Intel Skylake-X and newer
vllm-cpu-avx512vnni   AVX512 + VNNI                  Intel Cascade Lake and newer
vllm-cpu-avx512bf16   AVX512 + VNNI + BF16           Intel Cooper Lake and newer
vllm-cpu-amxbf16      AVX512 + VNNI + BF16 + AMX     Intel Sapphire Rapids (4th Gen Xeon) and newer

Each package is compiled with specific CPU instruction set flags for optimal inference performance.

Check available CPU instruction sets

lscpu | grep -i flags
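
To check directly for the flags this package needs (flag names as they appear in lscpu and /proc/cpuinfo):

lscpu | grep -o 'avx512f\|avx512_vnni\|avx512_bf16\|amx_bf16' | sort -u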

Example list of CPUs with their supported instruction sets

CPU Architecture (Intel/AMD)                                   AVX2   AVX-512F   VNNI (INT8)   BF16 (AVX-512)   AMX-BF16
Intel 4th Gen Core / AMD Zen 2 and Zen 3                       Yes    No         No            No               No
Intel Skylake-SP / Skylake-X                                   Yes    Yes        No            No               No
Intel Cooper Lake (3rd Gen Xeon) / AMD Zen 4 (EPYC) / Zen 5    Yes    Yes        Yes           Yes              No
Intel Sapphire Rapids (4th Gen Xeon) and newer                 Yes    Yes        Yes           Yes              Yes

*** Currently, no AMD CPU supports AMXBF16; AMD is expected to add AMXBF16 support starting with Zen 7 CPUs.
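
The table above can also be encoded in a short script. The following is a hypothetical helper, not part of any package, that reads /proc/cpuinfo (Linux only) and suggests the best-matching wheel:

# hypothetical helper: suggest the best-matching vllm-cpu package
# by inspecting CPU flags from /proc/cpuinfo (Linux only)
def pick_package() -> str:
    flags: set[str] = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break
    if "amx_bf16" in flags:
        return "vllm-cpu-amxbf16"
    if "avx512_bf16" in flags:
        return "vllm-cpu-avx512bf16"
    if "avx512_vnni" in flags:
        return "vllm-cpu-avx512vnni"
    if "avx512f" in flags:
        return "vllm-cpu-avx512"
    return "vllm-cpu"

print(pick_package())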

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


vllm_cpu_avx512bf16-0.11.1-cp313-cp313-manylinux_2_17_x86_64.whl (15.3 MB)

Uploaded: CPython 3.13, manylinux (glibc 2.17+), x86-64

vllm_cpu_avx512bf16-0.11.1-cp312-cp312-manylinux_2_17_x86_64.whl (15.3 MB)

Uploaded: CPython 3.12, manylinux (glibc 2.17+), x86-64

vllm_cpu_avx512bf16-0.11.1-cp311-cp311-manylinux_2_17_x86_64.whl (15.3 MB)

Uploaded: CPython 3.11, manylinux (glibc 2.17+), x86-64

vllm_cpu_avx512bf16-0.11.1-cp310-cp310-manylinux_2_17_x86_64.whl (15.3 MB)

Uploaded: CPython 3.10, manylinux (glibc 2.17+), x86-64

File details

Details for the file vllm_cpu_avx512bf16-0.11.1-cp313-cp313-manylinux_2_17_x86_64.whl.


File hashes

Hashes for vllm_cpu_avx512bf16-0.11.1-cp313-cp313-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 e35c276bb70b2119b0701b8e87b46673e8fd7402c89a8c6d257053a3241e1f81
MD5 7b2ff38cd1d8012f580746d8b7b4049c
BLAKE2b-256 a7e1182ece67d98054c276d5ae52e62966428f2044ae6d8e1b54c0052da8ba13


File details

Details for the file vllm_cpu_avx512bf16-0.11.1-cp312-cp312-manylinux_2_17_x86_64.whl.


File hashes

Hashes for vllm_cpu_avx512bf16-0.11.1-cp312-cp312-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 d2ca141d523a350eb3e86c0eb7bc2419dc6ebf855246d58c43d86bde15377188
MD5 83e73bdb5ed7300dd8a90b1275ca63cd
BLAKE2b-256 27937f65ad7cd919f9827f5e0e4d8a446daceaa28d788892e489d3e4d0827b51


File details

Details for the file vllm_cpu_avx512bf16-0.11.1-cp311-cp311-manylinux_2_17_x86_64.whl.


File hashes

Hashes for vllm_cpu_avx512bf16-0.11.1-cp311-cp311-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 78632877946fb4f60d77339aaa9883e04a9575319e91f084b82ab49f03a86158
MD5 6ce25002054f5772e863fb2aee5c6755
BLAKE2b-256 0eddb6c9ecb27fce0b2475ef88c2cd84fea1a7e414a986bf787f6f001e59e8ad


File details

Details for the file vllm_cpu_avx512bf16-0.11.1-cp310-cp310-manylinux_2_17_x86_64.whl.


File hashes

Hashes for vllm_cpu_avx512bf16-0.11.1-cp310-cp310-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 fcf20c686c87496b4ccd9b8822899dbf501dd6d8f14b430a6df1f7e59339fffc
MD5 1188d9995a3399dda1159bc217444c35
BLAKE2b-256 9fecf7ac5100d5923a9a3f3885a8cabcadb86c42a9868fbee5abb03753a4dc31

