Serving LLMs at Scale
Breathing Life into Language
Aphrodite is an inference engine that optimizes the serving of HuggingFace-compatible models at scale. Built on vLLM's PagedAttention technology, it delivers high-performance model inference for multiple concurrent users. Aphrodite serves as the backend engine powering PygmalionAI's chat platforms and API infrastructure.
Aphrodite builds upon and integrates the exceptional work from various projects, primarily vLLM.
Features
- Continuous Batching
- Efficient K/V management with PagedAttention from vLLM
- Optimized CUDA kernels for improved inference
- Quantization support via AQLM, AutoRound, AWQ, BitNet, Bitsandbytes, ExLlamaV3, GGUF, GPTQ, QuIP#, SqueezeLLM, Marlin, NVIDIA ModelOpt, TorchAO, VPTQ, compressed_tensors, MXFP4, and more.
- Distributed inference
- Quantized KV cache using scaled and scale-less FP8, and TurboQuant
- Support for modern samplers such as DRY, XTC, Mirostat, and more
- Disaggregated inference
- Speculative decoding, including EAGLE, DFlash, ngram, MTP, and more
- Multimodal support
- Multi-LoRA support
Quickstart
Install the engine:
pip install -U aphrodite-engine
Then launch a model:
aphrodite run Qwen/Qwen3.5-0.8B
This will create an OpenAI-compatible API server, accessible at port 2242 on localhost. You can plug the API into any UI that supports the OpenAI API, such as SillyTavern.
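Once the server is running, you can query it with any OpenAI-compatible client. The sketch below builds a standard chat-completion request payload by hand using only the standard library; the model name and port follow the quickstart above, and the `/v1/chat/completions` path is the usual OpenAI-compatible endpoint (an assumption here, not confirmed by this page). The actual network call is left commented out, since it requires a running server.

```python
import json

# Endpoint of the local Aphrodite server (default port 2242, per the quickstart).
# The /v1/chat/completions path is the conventional OpenAI-compatible route.
API_URL = "http://localhost:2242/v1/chat/completions"

# Build a standard OpenAI-style chat-completion request body.
payload = {
    "model": "Qwen/Qwen3.5-0.8B",  # the model launched in the quickstart
    "messages": [
        {"role": "user", "content": "Hello! Who are you?"},
    ],
    "max_tokens": 64,
}
body = json.dumps(payload).encode("utf-8")

# To actually send the request (requires the server to be running):
# import urllib.request
# req = urllib.request.Request(
#     API_URL, data=body, headers={"Content-Type": "application/json"}
# )
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```

Because the server speaks the OpenAI protocol, the official `openai` Python client also works if you point its `base_url` at `http://localhost:2242/v1`.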
Requirements
- Operating System: Linux, Windows (WSL2)
- Python: 3.10 to 3.13 (build from source for 3.14)
Build Requirements
- CUDA >= 12
Notes
- By design, Aphrodite takes up 92% of your GPU's VRAM. If you're not serving an LLM at scale, you may want to limit the amount of memory it takes up. You can do this by launching the server with `--gpu-memory-utilization 0.6` (0.6 means 60%).
- You can view the full list of commands by running `aphrodite run --help`.
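Putting the two notes together, a minimal sketch of a memory-capped launch (reusing the model name from the quickstart):

```shell
# Launch the quickstart model, capping Aphrodite at 60% of GPU VRAM
aphrodite run Qwen/Qwen3.5-0.8B --gpu-memory-utilization 0.6
```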
Acknowledgements
Aphrodite Engine would not have been possible without the phenomenal work of other open-source projects. A (non-exhaustive) list:
- vLLM
- TensorRT-LLM
- xFormers
- Flash Attention
- llama.cpp
- AutoAWQ
- AutoGPTQ
- SqueezeLLM
- Exllamav2
- TabbyAPI
- AQLM
- KoboldAI
- Text Generation WebUI
- Megatron-LM
- Ray
Contributing
Everyone is welcome to contribute. You can support the project by opening pull requests for new features, fixes, or general UX improvements.
File details
Details for the file aphrodite_engine-0.21.0-cp38-abi3-manylinux1_x86_64.whl.
File metadata
- Download URL: aphrodite_engine-0.21.0-cp38-abi3-manylinux1_x86_64.whl
- Upload date:
- Size: 246.8 MB
- Tags: CPython 3.8+
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.22
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `03e76135c9c6146fce08884cf84fcdeb03ec7918e99f4a6c188ad9f2bef0c056` |
| MD5 | `b224b5d0c3b540ae91ebc173b7eb77dd` |
| BLAKE2b-256 | `298937c203ab83852d8380542811e0b8da069b176091205f1662fe4df201f129` |