
vLLM CPU inference engine (AVX512 + VNNI + BF16 optimized)

Project description

vLLM

Easy, fast, and cheap LLM serving for everyone



Buy Me a Coffee

Your support encourages me to keep creating/supporting my open-source projects. If you found value in this project, you can buy me a coffee to keep me up all the sleepless nights.

Buy Me A Coffee

About

vLLM is a fast and easy-to-use library for LLM inference and serving. This PyPI package supports only AVX512 + VNNI + AVX512-BF16; AMX-BF16 is not supported in this package. CPU inference uses the instruction-set accelerations listed above.

Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

vLLM is fast with:

  • State-of-the-art serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Continuous batching of incoming requests
  • Fast model execution with AVX512+VNNI+AVX512BF16 on supported CPUs. Use this package ONLY if your CPU has the avx512_bf16 instruction set or newer.
  • Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
  • Optimized CPU kernels, including integration with FlashAttention and FlashInfer
  • Speculative decoding
  • Chunked prefill

vLLM is flexible and easy to use with:

  • Seamless integration with popular Hugging Face models
  • High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
  • Tensor, pipeline, data and expert parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server
  • Support for x86_64, PowerPC CPUs, Arm CPUs and Apple Silicon (CPU inference). This package does not support any GPU inference; for GPU inference support, use the official vLLM PyPI package
  • Prefix caching support
  • Multi-LoRA support

vLLM seamlessly supports most popular open-source models on HuggingFace, including:

  • Transformer-like LLMs (e.g., Llama)
  • Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and V3)
  • Embedding Models (e.g., E5-Mistral)
  • Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models here.
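The standard vLLM offline-inference API works unchanged with this CPU build. A minimal sketch (facebook/opt-125m is just a small example model; the import is guarded so the snippet is a no-op where vllm is not installed):

```python
# Prompts to complete; any list of strings works.
prompts = ["Hello, my name is", "The capital of France is"]

try:
    from vllm import LLM, SamplingParams
except ImportError:  # vllm not installed in this environment
    LLM = None

if LLM is not None:
    # Load the model and generate completions on CPU.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=32)
    for out in llm.generate(prompts, params):
        print(out.outputs[0].text)
```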

Important Notes

Platform Detection Fix (versions 0.8.5 - 0.12.0)

If you encounter RuntimeError: Failed to infer device type or see UnspecifiedPlatform warnings with versions 0.8.5 to 0.12.0, run this one-time fix after installation:

import importlib.metadata as m
import os
import sys

# Find the installed vllm-cpu* package and read its version.
v = next((d.metadata['Version'] for d in m.distributions()
          if (d.metadata['Name'] or '').startswith('vllm-cpu')), None)
if v:
    # Locate the active site-packages directory.
    p = next((p for p in sys.path if 'site-packages' in p and os.path.isdir(p)), None)
    if p:
        # Write a minimal dist-info alias so 'vllm' resolves to the CPU build.
        d = os.path.join(p, 'vllm-0.0.0.dist-info')
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, 'METADATA'), 'w') as f:
            f.write(f'Metadata-Version: 2.1\nName: vllm\nVersion: {v}+cpu\n')
        print(f'Fixed: vllm version set to {v}+cpu')

This creates a package alias so vLLM detects the CPU platform correctly. Only needed once per environment. Versions 0.8.5.post2+ and 0.12.0+ include this fix automatically.
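If you manage several environments, the same fix can be wrapped in a small helper (the function name is illustrative, not part of vLLM) that writes the alias into any given site-packages directory:

```python
import os

def write_vllm_alias(site_packages: str, version: str) -> str:
    """Create a minimal dist-info so 'vllm' resolves to the CPU build.

    Mirrors the one-time fix above; returns the path of the METADATA
    file it writes.
    """
    dist_info = os.path.join(site_packages, 'vllm-0.0.0.dist-info')
    os.makedirs(dist_info, exist_ok=True)
    meta = os.path.join(dist_info, 'METADATA')
    with open(meta, 'w') as f:
        f.write(f'Metadata-Version: 2.1\nName: vllm\nVersion: {version}+cpu\n')
    return meta
```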

Getting Started

Install vLLM with a single command:

pip install vllm-cpu-avx512bf16 --index-url https://download.pytorch.org/whl/cpu --extra-index-url https://pypi.org/simple

This installs vllm-cpu-avx512bf16 with CPU-optimized PyTorch (no CUDA dependencies).

Alternative: Using uv (faster)

uv pip install vllm-cpu-avx512bf16 --index-url https://download.pytorch.org/whl/cpu --extra-index-url https://pypi.org/simple

Install uv on Linux:

curl -LsSf https://astral.sh/uv/install.sh | sh

Docker Images

Pre-built Docker images are available on Docker Hub and GitHub Container Registry.

# Pull from Docker Hub
docker pull mekayelanik/vllm-cpu:avx512bf16-latest

# Or from GitHub Container Registry
docker pull ghcr.io/mekayelanik/vllm-cpu:avx512bf16-latest

# Run OpenAI-compatible API server
docker run -p 8000:8000 \
  -v $HOME/.cache/huggingface:/root/.cache/huggingface \
  mekayelanik/vllm-cpu:avx512bf16-latest \
  --model facebook/opt-125m

Available tags: avx512bf16-latest, avx512bf16-<version> (e.g., avx512bf16-0.12.0)

Platforms: linux/amd64
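Once the container is up, the server speaks the OpenAI chat-completions protocol. A stdlib-only client sketch (the endpoint and model name assume the docker run command above):

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str, max_tokens: int = 64) -> dict:
    # Build an OpenAI-style /v1/chat/completions request body.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def query_server(prompt: str, model: str = "facebook/opt-125m",
                 base_url: str = "http://localhost:8000") -> dict:
    # POST the payload to the OpenAI-compatible endpoint and parse the reply.
    req = urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
# reply = query_server("What is PagedAttention?")
# print(reply["choices"][0]["message"]["content"])
```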

vllm-cpu

This CPU-specific vLLM build ships 5 optimized wheel packages built from the upstream vLLM source code:

| Package | Optimizations | Target CPUs |
|---|---|---|
| vllm-cpu | Baseline (no AVX512) | All x86_64 and ARM64 CPUs |
| vllm-cpu-avx512 | AVX512 | Intel Skylake-X and newer |
| vllm-cpu-avx512vnni | AVX512 + VNNI | Intel Cascade Lake and newer |
| vllm-cpu-avx512bf16 | AVX512 + VNNI + BF16 | Intel Cooper Lake and newer |
| vllm-cpu-amxbf16 | AVX512 + VNNI + BF16 + AMX | Intel Sapphire Rapids (4th Gen Xeon) and newer |

Each package is compiled with specific CPU instruction set flags for optimal inference performance.

Check Your CPU & Get Install Command

# Pick the most optimized package your CPU supports (Linux; reads /proc/cpuinfo)
pkg=vllm-cpu
grep -q avx512f /proc/cpuinfo && pkg=vllm-cpu-avx512
grep -q avx512_vnni /proc/cpuinfo && pkg=vllm-cpu-avx512vnni
grep -q avx512_bf16 /proc/cpuinfo && pkg=vllm-cpu-avx512bf16
grep -q amx_bf16 /proc/cpuinfo && pkg=vllm-cpu-amxbf16
printf "\n\tRUN:\n\t\tuv pip install $pkg\n"
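The same selection logic can be expressed as a pure Python helper (the function names are illustrative): the most specific instruction set wins, matching the order of the shell checks above.

```python
def select_package(flags: set) -> str:
    # Most specific instruction set wins, mirroring the shell script.
    if 'amx_bf16' in flags:
        return 'vllm-cpu-amxbf16'
    if 'avx512_bf16' in flags:
        return 'vllm-cpu-avx512bf16'
    if 'avx512_vnni' in flags:
        return 'vllm-cpu-avx512vnni'
    if 'avx512f' in flags:
        return 'vllm-cpu-avx512'
    return 'vllm-cpu'

def cpu_flags(path: str = '/proc/cpuinfo') -> set:
    # Parse the 'flags' line from /proc/cpuinfo (Linux only);
    # returns an empty set elsewhere, which selects the baseline wheel.
    try:
        with open(path) as f:
            for line in f:
                if line.startswith('flags'):
                    return set(line.split(':', 1)[1].split())
    except OSError:
        pass
    return set()

print(select_package(cpu_flags()))
```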

Example list of CPUs with their supported instruction sets

| CPU Architecture (Intel/AMD) | AVX2 | AVX-512F (Base) | VNNI (INT8) | BF16 (via AVX-512) | AMX-BF16 (via Tile Unit) |
|---|---|---|---|---|---|
| Intel 4th Gen / AMD Ryzen Zen 2 & newer | Yes | No | No | No | No |
| Intel Skylake-SP / Skylake-X / AMD Zen 4 & newer | Yes | Yes | No | No | No |
| Intel Cooper Lake (3rd Gen Xeon) / AMD Zen 4 (EPYC) / Ryzen Zen 5 & newer | Yes | Yes | Yes | Yes | No |
| Intel Sapphire Rapids (4th Gen Xeon) & newer | Yes | Yes | Yes | Yes | Yes |

*** Currently no AMD CPU supports AMX-BF16. AMD is expected to add AMX-BF16 support starting with Zen 7 CPUs.





Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


  • vllm_cpu_avx512bf16-0.14.0-cp313-cp313-manylinux_2_28_x86_64.whl (34.9 MB): CPython 3.13, manylinux glibc 2.28+, x86-64
  • vllm_cpu_avx512bf16-0.14.0-cp312-cp312-manylinux_2_28_x86_64.whl (34.9 MB): CPython 3.12, manylinux glibc 2.28+, x86-64
  • vllm_cpu_avx512bf16-0.14.0-cp311-cp311-manylinux_2_28_x86_64.whl (34.9 MB): CPython 3.11, manylinux glibc 2.28+, x86-64
  • vllm_cpu_avx512bf16-0.14.0-cp310-cp310-manylinux_2_28_x86_64.whl (34.9 MB): CPython 3.10, manylinux glibc 2.28+, x86-64
  • vllm_cpu_avx512bf16-0.14.0-cp38-abi3-manylinux_2_28_x86_64.whl (34.9 MB): CPython 3.8+, manylinux glibc 2.28+, x86-64

File details

Details for the file vllm_cpu_avx512bf16-0.14.0-cp313-cp313-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512bf16-0.14.0-cp313-cp313-manylinux_2_28_x86_64.whl
SHA256: 69f96497408dd086fec7c40a028fc6765d2ca5357351d1166cf6317b55133d79
MD5: c5b874c8c885e333e8009d95a79b4856
BLAKE2b-256: 95cd9318958d2893f6ccd137d5b3e74577af1203bb7aa249751e2620de18065c

File details

Details for the file vllm_cpu_avx512bf16-0.14.0-cp312-cp312-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512bf16-0.14.0-cp312-cp312-manylinux_2_28_x86_64.whl
SHA256: 48640cd7bd1abd665a6d2ec699bfc046a5ac94faa689c26437d84626bfcc8cf2
MD5: 2f5143046ec97862331fc24114c8aff7
BLAKE2b-256: 6f3494b38994e09e393eddc860907816cc80a0c5ed6fcb083d6298a37b9b8bcb

File details

Details for the file vllm_cpu_avx512bf16-0.14.0-cp311-cp311-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512bf16-0.14.0-cp311-cp311-manylinux_2_28_x86_64.whl
SHA256: 6ec094a61d8ad5f20a3c51848aa21ea3617585752f0998cda8866c0a9fcb083f
MD5: 2a8eb54f856d29c3cc78bd2768714eee
BLAKE2b-256: 8d1d8e61a1c8e6248c718397522abb18ac9635b6f1464faa89f354bcb6a2b4c5

File details

Details for the file vllm_cpu_avx512bf16-0.14.0-cp310-cp310-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512bf16-0.14.0-cp310-cp310-manylinux_2_28_x86_64.whl
SHA256: f0e368f470e5fb86524135697445804a6a2042cc9e5639788c8af244588f1d86
MD5: 635a22584bbd4bac3e0ed82cb7826cb2
BLAKE2b-256: e25333ce63d0bd4a354af41021acee961d276b40ecc3f5c8972c7d288a80f084

File details

Details for the file vllm_cpu_avx512bf16-0.14.0-cp38-abi3-manylinux_2_28_x86_64.whl.

File metadata

File hashes

Hashes for vllm_cpu_avx512bf16-0.14.0-cp38-abi3-manylinux_2_28_x86_64.whl
SHA256: 1a2a6e914a8daaaebe861912723b882ec10f70eb3dc993786cc3be9a4d96f468
MD5: 3f3bd633c828cf029ae69a937133cdd1
BLAKE2b-256: 7504cec09c7d1203ebdbdd47a2702be61492e9974e3fa3a6af26cf7087f9833e
