
A framework for efficient model inference with omni-modality models

Project description

vllm-omni

Easy, fast, and cheap omni-modality model serving for everyone

| Documentation | User Forum | Developer Slack | WeChat | Paper | Slides |


Latest News 🔥

  • [2026/03] Check out our first public project deepdive at the vLLM Hong Kong Meetup!
  • [2026/03] vllm-omni-skills is a community-driven collection of AI assistant skills that help developers work with vLLM-Omni more effectively. These skills can be used with popular agentic AI coding assistants like Cursor IDE, Claude, Codex, and more.
  • [2026/02] We released 0.16.0 - a major alignment and capability release that rebases onto upstream vLLM v0.16.0 and significantly expands performance, distributed execution, and production readiness across Qwen3-Omni / Qwen3-TTS, Bagel, MiMo-Audio, GLM-Image, and the Diffusion (DiT) image/video stack, while also improving platform coverage (CUDA / ROCm / NPU / XPU), CI quality, and documentation.
  • [2026/02] We released 0.14.0 - This is the first stable release of vLLM-Omni that expands Omni’s diffusion / image-video generation and audio / TTS stack, improves distributed execution and memory efficiency, and broadens platform/backend coverage (GPU/ROCm/NPU/XPU). It also brings meaningful upgrades to serving APIs, profiling & benchmarking, and overall stability. Please check our latest paper for architecture design and performance results.
  • [2026/01] We released 0.12.0rc1 - a major RC milestone focused on maturing the diffusion stack, strengthening OpenAI-compatible serving, expanding omni-model coverage, and improving stability across platforms (GPU/NPU/ROCm).
  • [2025/11] The vLLM community officially released vllm-project/vllm-omni to support serving of omni-modality models.

About

vLLM was originally designed to serve large language models for text-based autoregressive generation. vLLM-Omni is a framework that extends it to omni-modality model inference and serving:

  • Omni-modality: Text, image, video, and audio data processing
  • Non-autoregressive architectures: extends vLLM's autoregressive (AR) support to Diffusion Transformers (DiT) and other parallel generation models
  • Heterogeneous outputs: from traditional text generation to multimodal outputs


vLLM-Omni is fast with:

  • State-of-the-art AR support by leveraging efficient KV cache management from vLLM
  • Pipelined stage execution with overlap across stages for high-throughput performance
  • Full disaggregation based on OmniConnector, with dynamic resource allocation across stages

vLLM-Omni is flexible and easy to use with:

  • Heterogeneous pipeline abstraction to manage complex model workflows
  • Seamless integration with popular Hugging Face models
  • Tensor, pipeline, data and expert parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server (see the example just below)
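
As a concrete example, once a vLLM-Omni server is running, any standard OpenAI client can talk to it. Below is a minimal sketch using the openai Python package; the endpoint URL, API key, and model name are assumptions for illustration, not fixed defaults of vLLM-Omni:

from openai import OpenAI

# Point the client at a locally running vLLM-Omni server (assumed port);
# local vLLM-style deployments typically accept any placeholder API key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-Omni",  # hypothetical model name, for illustration only
    messages=[{"role": "user", "content": "Summarize what vLLM-Omni does."}],
)
print(response.choices[0].message.content)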

vLLM-Omni seamlessly supports the most popular open-source models on Hugging Face, including:

  • Omni-modality models (e.g. Qwen-Omni)
  • Multi-modality generation models (e.g. Qwen-Image)

Getting Started

Visit our documentation to learn more.
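
If you just want the released package, it can be installed from PyPI (the distribution files listed further down this page are for version 0.18.0):

pip install vllm-omni

Note that pip treats vllm-omni and the file name prefix vllm_omni as the same package name.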

Contributing

We welcome and value any contributions and collaborations. Please check out Contributing to vLLM-Omni for how to get involved.

Citation

If you use vLLM-Omni for your research, please cite our paper:

@article{yin2026vllmomni,
  title={vLLM-Omni: Fully Disaggregated Serving for Any-to-Any Multimodal Models},
  author={Peiqi Yin and Jiangyun Zhu and Han Gao and Chenguang Zheng and Yongxiang Huang and Taichang Zhou and Ruirui Yang and Weizhi Liu and Weiqing Chen and Canlin Guo and Didan Deng and Zifeng Mo and Cong Wang and James Cheng and Roger Wang and Hongsheng Liu},
  journal={arXiv preprint arXiv:2602.02204},
  year={2026}
}

Join the Community

Feel free to ask questions, provide feedback, and discuss with fellow users of vLLM-Omni in the #sig-omni Slack channel at slack.vllm.ai or the vLLM user forum at discuss.vllm.ai.

Star History

Star History Chart

License

Apache License 2.0, as found in the LICENSE file.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

vllm_omni-0.18.0.tar.gz (8.7 MB)


Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

vllm_omni-0.18.0-py3-none-any.whl (1.6 MB)


File details

Details for the file vllm_omni-0.18.0.tar.gz.

File metadata

  • Download URL: vllm_omni-0.18.0.tar.gz
  • Upload date:
  • Size: 8.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.13

File hashes

Hashes for vllm_omni-0.18.0.tar.gz
  • SHA256: 3fd578c8c65b575486bc307d075fa889a2f92d4f425331e0c40a1e1036a92d5a
  • MD5: 73f5e857879182d03c37a0719d75a7a9
  • BLAKE2b-256: e012201eebaf25a233f1006079dfac3ca6eadee4146f0aa4e718888680e5641e

See more details on using hashes here.
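
If you want to verify a download yourself, the published digests above can be checked with standard tooling. Here is a short Python sketch using hashlib and the SHA256 value listed for the source distribution; the local file path is an assumption:

import hashlib

# Published SHA256 for vllm_omni-0.18.0.tar.gz, copied from the list above.
EXPECTED_SHA256 = "3fd578c8c65b575486bc307d075fa889a2f92d4f425331e0c40a1e1036a92d5a"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Stream the file in chunks so large archives are not loaded into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of("vllm_omni-0.18.0.tar.gz") == EXPECTED_SHA256, "hash mismatch"
print("sha256 verified")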

File details

Details for the file vllm_omni-0.18.0-py3-none-any.whl.

File metadata

  • Download URL: vllm_omni-0.18.0-py3-none-any.whl
  • Upload date:
  • Size: 1.6 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.13

File hashes

Hashes for vllm_omni-0.18.0-py3-none-any.whl
  • SHA256: a70e4ebf7df0a02ff2a76e04957349471c9ba831b3546ecba2b484d6a53c7ea3
  • MD5: f75afe07050295f0af802680a66a85db
  • BLAKE2b-256: 81602de2f16d48edef141444b70004374ae009cfe20e07c0c32724ce4c5a71d3

See more details on using hashes here.
