vLLM CPU inference engine (no AVX512 required; runs on both x86_64 and ARM64 processors)
Project description
Easy, fast, and cheap LLM serving for everyone
Buy Me a Coffee
Your support encourages me to keep creating and supporting my open-source projects. If you found value in this project, you can buy me a coffee to fuel the sleepless nights.
About
vLLM is a fast and easy-to-use library for LLM inference and serving. This PyPI package has NO support for AVX512, VNNI, AVX512BF16, or AMX-BF16: CPU inference runs without any instruction-set acceleration, using raw CPU power only. Use this package for inference on ARM64 CPUs, or on x86_64 CPUs that lack AVX512.
Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests
- Fast model execution (note: this package is built without VNNI acceleration; use it ONLY if your CPU lacks the AVX512 instruction set)
- Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
- Optimized CPU kernels, including integration with FlashAttention and FlashInfer
- Speculative decoding
- Chunked prefill
vLLM is flexible and easy to use with:
- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
- Tensor, pipeline, data and expert parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server
- Support for x86_64, PowerPC, and Arm CPUs, as well as Apple Silicon (CPU inference). This package does not support any GPU inference; for GPU support, use the official vLLM package on PyPI
- Prefix caching support
- Multi-LoRA support
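Since the server speaks the OpenAI API, any OpenAI client can talk to it. A minimal sketch of a completion request using only the standard library (the `facebook/opt-125m` model name and the `localhost:8000` endpoint are assumptions matching the Docker example on this page):

```python
import json
import urllib.request

# Build an OpenAI-style completion request for a local vLLM server.
payload = {
    "model": "facebook/opt-125m",  # must match the model the server loaded
    "prompt": "Hello, my name is",
    "max_tokens": 32,
    "temperature": 0.8,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
print(json.dumps(payload, indent=2))
```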
vLLM seamlessly supports most popular open-source models on HuggingFace, including:
- Transformer-like LLMs (e.g., Llama)
- Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and V3)
- Embedding Models (e.g., E5-Mistral)
- Multi-modal LLMs (e.g., LLaVA)
Find the full list of supported models here.
Important Notes
- Install this package on Linux environments only. On Windows, use WSL2 or later
- This package has an OCI-compatible container image (Docker, Podman, etc.) on Docker Hub
- The main vLLM project is licensed under the Apache License
- This CPU-specific vLLM package is licensed under the GPL
- For versions 0.8.5–0.12.0, use the .post2 releases (e.g., pip install vllm-cpu==0.12.0.post2), which include a critical CPU platform-detection fix
Platform Detection Fix (versions 0.8.5 - 0.12.0)
If you encounter RuntimeError: Failed to infer device type or see UnspecifiedPlatform warnings with versions 0.8.5 to 0.12.0, run this one-time fix after installation:
```python
import importlib.metadata as m
import os
import sys

# Find the installed vllm-cpu* package version
v = next((d.metadata['Version'] for d in m.distributions()
          if (d.metadata['Name'] or '').startswith('vllm-cpu')), None)
if v:
    # Locate the site-packages directory on sys.path
    p = next((p for p in sys.path if 'site-packages' in p and os.path.isdir(p)), None)
    if p:
        # Create a minimal 'vllm' dist-info alias so platform detection succeeds
        d = os.path.join(p, 'vllm-0.0.0.dist-info')
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, 'METADATA'), 'w') as f:
            f.write(f'Metadata-Version: 2.1\nName: vllm\nVersion: {v}+cpu\n')
        print(f'Fixed: vllm version set to {v}+cpu')
```
This creates a package alias so vLLM detects the CPU platform correctly. Only needed once per environment. Versions 0.8.5.post2+ and 0.12.0+ include this fix automatically.
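To confirm the alias took effect, the version reported for `vllm` can be checked with `importlib.metadata`; a minimal sketch (the helper name is illustrative, not part of the package):

```python
import importlib.metadata


def vllm_platform_fix_applied():
    """Return the version the 'vllm' alias reports, or None if absent."""
    try:
        return importlib.metadata.version("vllm")
    except importlib.metadata.PackageNotFoundError:
        return None


# After the fix this should print something like '0.12.0+cpu'
print(vllm_platform_fix_applied())
```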
Getting Started
Install vLLM with a single command:
pip install vllm-cpu --index-url https://download.pytorch.org/whl/cpu --extra-index-url https://pypi.org/simple
This installs vllm-cpu with CPU-optimized PyTorch (no CUDA dependencies).
Alternative: Using uv (faster)
uv pip install vllm-cpu --index-url https://download.pytorch.org/whl/cpu --extra-index-url https://pypi.org/simple
Install uv on Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
Docker Images
Pre-built Docker images are available on Docker Hub and GitHub Container Registry.
# Pull from Docker Hub
docker pull mekayelanik/vllm-cpu:noavx512-latest
# Or from GitHub Container Registry
docker pull ghcr.io/mekayelanik/vllm-cpu:noavx512-latest
# Run OpenAI-compatible API server
docker run -p 8000:8000 \
-v $HOME/.cache/huggingface:/root/.cache/huggingface \
mekayelanik/vllm-cpu:noavx512-latest \
--model facebook/opt-125m
Available tags: noavx512-latest, noavx512-<version> (e.g., noavx512-0.12.0)
Platforms: linux/amd64, linux/arm64
vllm-cpu
This CPU-specific vLLM project publishes 5 optimized wheel packages built from the upstream vLLM source code:

| Package | Optimizations | Target CPUs |
|---|---|---|
| vllm-cpu | Baseline (no AVX512) | All x86_64 and ARM64 CPUs |
| vllm-cpu-avx512 | AVX512 | Intel Skylake-X and newer |
| vllm-cpu-avx512vnni | AVX512 + VNNI | Intel Cascade Lake and newer |
| vllm-cpu-avx512bf16 | AVX512 + VNNI + BF16 | Intel Cooper Lake and newer |
| vllm-cpu-amxbf16 | AVX512 + VNNI + BF16 + AMX | Intel Sapphire Rapids (4th Gen Xeon) and newer |
Each package is compiled with specific CPU instruction set flags for optimal inference performance.
Check Your CPU & Get Install Command
pkg=vllm-cpu
grep -q avx512f /proc/cpuinfo && pkg=vllm-cpu-avx512
grep -q avx512_vnni /proc/cpuinfo && pkg=vllm-cpu-avx512vnni
grep -q avx512_bf16 /proc/cpuinfo && pkg=vllm-cpu-avx512bf16
grep -q amx_bf16 /proc/cpuinfo && pkg=vllm-cpu-amxbf16
printf "\n\tRUN:\n\t\tuv pip install $pkg\n"
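The same selection logic can be expressed in Python; the functions below are hypothetical helpers mirroring the shell script above, parsing the flags field from /proc/cpuinfo:

```python
def pick_vllm_package(flags):
    """Pick the most optimized vllm-cpu wheel for a set of CPU flags.

    Later checks override earlier ones, mirroring the shell script.
    """
    pkg = "vllm-cpu"  # baseline: no AVX512 (x86_64 and ARM64)
    if "avx512f" in flags:
        pkg = "vllm-cpu-avx512"
    if "avx512_vnni" in flags:
        pkg = "vllm-cpu-avx512vnni"
    if "avx512_bf16" in flags:
        pkg = "vllm-cpu-avx512bf16"
    if "amx_bf16" in flags:
        pkg = "vllm-cpu-amxbf16"
    return pkg


def cpu_flags(path="/proc/cpuinfo"):
    """Collect the CPU flags listed in /proc/cpuinfo (Linux only)."""
    flags = set()
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
    except OSError:
        pass  # not Linux: fall back to the baseline package
    return flags


print("RUN: uv pip install", pick_vllm_package(cpu_flags()))
```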
Example list of CPUs with their supported instruction sets
| CPU Architecture (Intel/AMD) | AVX2 | AVX-512 F (Base) | VNNI (INT8) | BF16 (BFloat16) (via AVX-512) | AMX-BF16 (via Tile Unit) |
|---|---|---|---|---|---|
| Intel 4th Gen / AMD Ryzen Zen2 & Newer | Yes | No | No | No | No |
| Intel Skylake-SP / Skylake-X / AMD Zen 4 & Newer | Yes | Yes | No | No | No |
| Intel Cooper Lake (3rd Gen Xeon) / AMD Zen 4 (EPYC) / Ryzen Zen5 & Newer | Yes | Yes | Yes | Yes | No |
| Intel Sapphire Rapids (4th Gen Xeon) & Newer | Yes | Yes | Yes | Yes | Yes |
***Currently no AMD CPUs support AMX-BF16; AMD is expected to add AMX-BF16 support starting with Zen 7 CPUs
Project details
Download files
Download the file for your platform.
Source Distributions
Built Distributions
File details
Details for the file vllm_cpu-0.15.0-cp313-cp313-manylinux_2_28_x86_64.whl.
File metadata
- Download URL: vllm_cpu-0.15.0-cp313-cp313-manylinux_2_28_x86_64.whl
- Upload date:
- Size: 17.3 MB
- Tags: CPython 3.13, manylinux: glibc 2.28+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 564a808c4206f7c39e7aed1b608970812fa19879c201e7fbe42886ab80037cf1 |
| MD5 | 47b10a77b9fd4a6b5ba87f0f953c5dce |
| BLAKE2b-256 | 6fabec86ac3ae63499ab73c69ddb3dd04200f23144cb08e1dbcea902934d3f0c |
File details
Details for the file vllm_cpu-0.15.0-cp312-cp312-manylinux_2_28_x86_64.whl.
File metadata
- Download URL: vllm_cpu-0.15.0-cp312-cp312-manylinux_2_28_x86_64.whl
- Upload date:
- Size: 17.3 MB
- Tags: CPython 3.12, manylinux: glibc 2.28+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | b78bdece9efa2a494dc5b27290b875cc2fd0df2dce49ba6ab77914045e38004d |
| MD5 | 5f19b5b3fbe6c827a86a4609e15c1a55 |
| BLAKE2b-256 | 3976a23bfb48eb7b57bc752dc7c0748323b9bd69426f495319694335fe57fc37 |
File details
Details for the file vllm_cpu-0.15.0-cp312-cp312-manylinux_2_28_aarch64.whl.
File metadata
- Download URL: vllm_cpu-0.15.0-cp312-cp312-manylinux_2_28_aarch64.whl
- Upload date:
- Size: 31.8 MB
- Tags: CPython 3.12, manylinux: glibc 2.28+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 93a94ddf61eb0621876330c266d80912bff68eab0199793fcc5015bc7c22aa19 |
| MD5 | 62c1fe41fbeeb4ff72d3d4273514a8c4 |
| BLAKE2b-256 | 29a002869f4d421342375c608ddf6967356f30fa08ba0e2150bedbb80bcd480e |
File details
Details for the file vllm_cpu-0.15.0-cp311-cp311-manylinux_2_28_x86_64.whl.
File metadata
- Download URL: vllm_cpu-0.15.0-cp311-cp311-manylinux_2_28_x86_64.whl
- Upload date:
- Size: 17.3 MB
- Tags: CPython 3.11, manylinux: glibc 2.28+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 310ef997acedcbe25ed3405e924098f0a8f846793be7b76b4ca94fe1c4f51d2e |
| MD5 | 26807b0274cdabba9fc86166d9a1cef8 |
| BLAKE2b-256 | 796f9375af9498e2bf0bf2abc8eedacd5638f3a7b9f33bcd2845ac768b94af24 |
File details
Details for the file vllm_cpu-0.15.0-cp311-cp311-manylinux_2_28_aarch64.whl.
File metadata
- Download URL: vllm_cpu-0.15.0-cp311-cp311-manylinux_2_28_aarch64.whl
- Upload date:
- Size: 31.8 MB
- Tags: CPython 3.11, manylinux: glibc 2.28+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7e309b8bfbabe8c083fecc5360ee68b22f3cfc1cd33cb393d83b0a372a427c83 |
| MD5 | ba7166e220c8a13a3d0aaf2b04930c30 |
| BLAKE2b-256 | ce7a9aab228f019b1f6766285d3ef828fd06a9e82a771cee5babbefc32c516e6 |
File details
Details for the file vllm_cpu-0.15.0-cp310-cp310-manylinux_2_28_x86_64.whl.
File metadata
- Download URL: vllm_cpu-0.15.0-cp310-cp310-manylinux_2_28_x86_64.whl
- Upload date:
- Size: 17.3 MB
- Tags: CPython 3.10, manylinux: glibc 2.28+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 32362200d026c4c1cfc7fbe41a8444ee1f0deab66e838aa717f751be4b11ee63 |
| MD5 | 728de46eddc851cc972655ccb29fef29 |
| BLAKE2b-256 | 265a5d6538c6d59a83a9412ad54810b7f48f3ab1c41c4863e23758ece68c08f5 |
File details
Details for the file vllm_cpu-0.15.0-cp310-cp310-manylinux_2_28_aarch64.whl.
File metadata
- Download URL: vllm_cpu-0.15.0-cp310-cp310-manylinux_2_28_aarch64.whl
- Upload date:
- Size: 31.8 MB
- Tags: CPython 3.10, manylinux: glibc 2.28+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | cc95e54eeb4cfef6994abca7d5c30d0813501f94e84eeb5d6c7db5e88ac6b0b3 |
| MD5 | 1173cebd69a4f1bb7f436e7cf57e04d3 |
| BLAKE2b-256 | 869fa343bad27d2a775a81ec35cba51850e49fcb0e4a06f13dc6753fab278a7f |