vLLM CPU inference engine (AVX512 + VNNI optimized)

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone



Buy Me a Coffee

Your support encourages me to keep creating/supporting my open-source projects. If you found value in this project, you can buy me a coffee to keep me up all the sleepless nights.

Buy Me A Coffee

About

vLLM is a fast and easy-to-use library for LLM inference and serving. This PyPI package has VNNI (AVX512 + VNNI) inference built in on supported CPUs.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with VNNI on supported CPUs. Use this package ONLY IF your CPU has the avx512vnni instruction set or newer
  • Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
  • Optimized CPU kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data and expert parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server (see the client sketch after this list)
  • Support for x86_64, PowerPC, and Arm CPUs, as well as Apple Silicon (CPU inference only). This package does not support any GPU inference; for GPU support, use the official vLLM package on PyPI
  • Prefix caching support
  • Multi-LoRA support
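
As an illustration of the OpenAI-compatible API server mentioned above, here is a minimal client-side sketch. It assumes you have already started the server locally (for example with vllm serve Qwen/Qwen2.5-0.5B-Instruct) on the default port 8000 and have the openai Python client installed; the model name and port are illustrative assumptions, not requirements of this package.

# Minimal sketch: query a locally running vLLM OpenAI-compatible server.
# Assumes the server was started separately, e.g.: vllm serve Qwen/Qwen2.5-0.5B-Instruct
# The model name and port below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=32,
)
print(response.choices[0].message.content)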

vLLM seamlessly supports most popular open-source models on HuggingFace, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Expert LLMs (e.g., Mixtral, Deepseek-V2 and V3)
  • Embedding Models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models here.

Important Notes

Platform Detection Fix (versions 0.8.5 - 0.12.0)

If you encounter RuntimeError: Failed to infer device type or see UnspecifiedPlatform warnings with versions 0.8.5 to 0.12.0, run this one-time fix after installation:

import os, sys, importlib.metadata as m

# Find the version of the installed vllm-cpu* wheel
v = next((d.metadata['Version'] for d in m.distributions()
          if (d.metadata['Name'] or '').startswith('vllm-cpu')), None)
if v:
    # Locate the active site-packages directory
    p = next((p for p in sys.path if 'site-packages' in p and os.path.isdir(p)), None)
    if p:
        # Write a minimal dist-info so the package also resolves under the name "vllm"
        d = os.path.join(p, 'vllm-0.0.0.dist-info')
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, 'METADATA'), 'w') as f:
            f.write(f'Metadata-Version: 2.1\nName: vllm\nVersion: {v}+cpu\n')
        print(f'Fixed: vllm version set to {v}+cpu')

This creates a package alias so vLLM detects the CPU platform correctly. Only needed once per environment. Versions 0.8.5.post2+ and 0.12.0+ include this fix automatically.
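
To confirm the alias is in place, a quick check (the exact version string depends on what you installed):

# Quick check that the "vllm" alias is now visible to importlib.metadata
from importlib.metadata import version
print(version("vllm"))  # expected to end in "+cpu", e.g. 0.10.2.post2+cpu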

Getting Started

Install vLLM with a single command:

pip install vllm-cpu-avx512vnni --index-url https://download.pytorch.org/whl/cpu --extra-index-url https://pypi.org/simple

This installs vllm-cpu-avx512vnni with CPU-optimized PyTorch (no CUDA dependencies).
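
For a quick smoke test after installation, here is a minimal offline-inference sketch using the standard vLLM Python API. The model name below is only an illustrative small model, not a requirement of this package.

# Minimal CPU offline-inference sketch; facebook/opt-125m is an illustrative small model
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # downloaded from Hugging Face on first run
params = SamplingParams(temperature=0.8, max_tokens=32)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)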

Alternative: Using uv (faster)

uv pip install vllm-cpu-avx512vnni --index-url https://download.pytorch.org/whl/cpu --extra-index-url https://pypi.org/simple

Install uv on Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

vllm-cpu

This CPU-specific vLLM project provides five optimized wheel packages built from the upstream vLLM source code:

Package | Optimizations | Target CPUs
vllm-cpu | Baseline (no AVX512) | All x86_64 and ARM64 CPUs
vllm-cpu-avx512 | AVX512 | Intel Skylake-X and newer
vllm-cpu-avx512vnni | AVX512 + VNNI | Intel Cascade Lake and newer
vllm-cpu-avx512bf16 | AVX512 + VNNI + BF16 | Intel Cooper Lake and newer
vllm-cpu-amxbf16 | AVX512 + VNNI + BF16 + AMX | Intel Sapphire Rapids (4th gen Xeon) and newer

Each package is compiled with specific CPU instruction set flags for optimal inference performance.

Check available CPU instruction sets

lscpu | grep -i flags

Example list of CPUs with their supported instruction sets

CPU Architecture (Intel/AMD) | AVX2 | AVX-512F (base) | VNNI (INT8) | BF16 (via AVX-512) | AMX-BF16 (via Tile Unit)
Intel 4th Gen / AMD Ryzen Zen 2 & newer | Yes | No | No | No | No
Intel Skylake-SP / Skylake-X / AMD Zen 4 & newer | Yes | Yes | No | No | No
Intel Cooper Lake (3rd Gen Xeon) / AMD Zen 4 (EPYC) / Ryzen Zen 5 & newer | Yes | Yes | Yes | Yes | No
Intel Sapphire Rapids (4th Gen Xeon) & newer | Yes | Yes | Yes | Yes | Yes

*Currently no AMD CPUs support AMX-BF16; AMD is expected to add AMX-BF16 support starting with Zen 7 CPUs.
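
If you prefer to check the flags programmatically, the following Linux-only sketch reads /proc/cpuinfo and maps the flags to the package table above. The flag names (avx512f, avx512_vnni, avx512_bf16, amx_bf16) are the usual Linux spellings; verify them against your own lscpu output.

# Linux-only sketch: suggest a vllm-cpu package variant from /proc/cpuinfo flags.
# Flag names are the usual Linux spellings; verify against your own `lscpu` output.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

if "amx_bf16" in flags:
    pick = "vllm-cpu-amxbf16"
elif "avx512_bf16" in flags:
    pick = "vllm-cpu-avx512bf16"
elif "avx512_vnni" in flags:
    pick = "vllm-cpu-avx512vnni"
elif "avx512f" in flags:
    pick = "vllm-cpu-avx512"
else:
    pick = "vllm-cpu"
print(f"Suggested package: {pick}")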





Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

vllm_cpu_avx512vnni-0.10.2.post2-cp313-cp313-manylinux_2_17_x86_64.whl (10.6 MB)

Uploaded: CPython 3.13, manylinux: glibc 2.17+, x86-64

vllm_cpu_avx512vnni-0.10.2.post2-cp312-cp312-manylinux_2_17_x86_64.whl (10.6 MB)

Uploaded: CPython 3.12, manylinux: glibc 2.17+, x86-64

vllm_cpu_avx512vnni-0.10.2.post2-cp311-cp311-manylinux_2_17_x86_64.whl (10.6 MB)

Uploaded: CPython 3.11, manylinux: glibc 2.17+, x86-64

vllm_cpu_avx512vnni-0.10.2.post2-cp310-cp310-manylinux_2_17_x86_64.whl (10.6 MB)

Uploaded: CPython 3.10, manylinux: glibc 2.17+, x86-64

vllm_cpu_avx512vnni-0.10.2.post2-cp39-cp39-manylinux_2_17_x86_64.whl (10.6 MB)

Uploaded: CPython 3.9, manylinux: glibc 2.17+, x86-64

File details

Details for the file vllm_cpu_avx512vnni-0.10.2.post2-cp313-cp313-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.10.2.post2-cp313-cp313-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 74b8d273a36b2c926bf0091c88539fab19f41e9557f0cc51e69a9e04681e6288
MD5 34f0137a5c53bda686318c18bbd64dc9
BLAKE2b-256 0a7e2d887abc1358722a349cacaa147424359f43bc4d83881e76983d478cff2c

See more details on using hashes here.

File details

Details for the file vllm_cpu_avx512vnni-0.10.2.post2-cp312-cp312-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.10.2.post2-cp312-cp312-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 7ad85232d1173bfc84297f4f412299311278d4e94ea0bbaac25a8f56f35b6c5a
MD5 e1730109926d39f77db45d36ece37eaf
BLAKE2b-256 b7ece0e12f3f60a227386b84c29f1c15924f1ebf0f4aa2315d9a7264beb9f6ae

See more details on using hashes here.

File details

Details for the file vllm_cpu_avx512vnni-0.10.2.post2-cp311-cp311-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.10.2.post2-cp311-cp311-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 eb7ce23d3cfbc2750506f2249f585374a13286862ecf89bcdb27f769465783c5
MD5 ca6de2b243f97d7219b8ff573c7ccfa0
BLAKE2b-256 4267446ab13c2f377b6678e3b5321dd7ba22bd4b9169b92c0a766447d6534e28

See more details on using hashes here.

File details

Details for the file vllm_cpu_avx512vnni-0.10.2.post2-cp310-cp310-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.10.2.post2-cp310-cp310-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 96c610d91db192e52cf58981d2bd7af054262d5d00671b368f714fcaf8168738
MD5 6a1d9196687405a85bcade0b7c84d33f
BLAKE2b-256 b4e3b648d1b757d0ed76d7bc83e7fd80c8e8be3c3af13fac1f2bcb54f5834977

See more details on using hashes here.

File details

Details for the file vllm_cpu_avx512vnni-0.10.2.post2-cp39-cp39-manylinux_2_17_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512vnni-0.10.2.post2-cp39-cp39-manylinux_2_17_x86_64.whl
Algorithm Hash digest
SHA256 f02a5c3267ce61ff7d0bc435c36310f8d762cfd8ddef8643b5080575c02f6877
MD5 5239819e262ac61b25690ccbf2f4b038
BLAKE2b-256 1e6a2e07bb64baf3f6b3eed46a5bf0c3075cabf22e88a65eb01f678c7783bacb

See more details on using hashes here.
