
Project description

vLLM TPU

| Documentation | Blog | User Forum | Developer Slack |


Upcoming Events 🔥

Latest News 🔥

  • [2025/10] vLLM TPU: A New Unified Backend Supporting PyTorch and JAX on TPU
Previous News 🔥

About

vLLM TPU is now powered by tpu-inference, an expressive and powerful new hardware plugin that unifies JAX and PyTorch under a single lowering path within the vLLM project. The new backend gives developers a framework to:

  • Push the limits of TPU hardware performance in open source.
  • Provide more flexibility to JAX and PyTorch users by running PyTorch model definitions performantly on TPU without any additional code changes, while also extending native support to JAX.
  • Retain vLLM standardization: keep the same user experience, telemetry, and interface (see the sketch after this list).
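
To make that last point concrete, here is a minimal sketch of the standard vLLM offline-inference API, which stays the same when the TPU backend is active. The model name and sampling values are illustrative placeholders, not recommendations; see the quickstart guide below for the actual TPU setup.

```python
# Minimal sketch (illustrative, not taken from the project docs): the standard
# vLLM offline-inference API is unchanged on TPU. Model name and sampling
# values here are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # any supported model
sampling = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain what a TPU is in one sentence."], sampling)
print(outputs[0].outputs[0].text)
```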

Recommended models and features

Although vLLM TPU's new unified backend makes out-of-the-box, high-performance serving possible with any model supported in vLLM, we're still in the process of implementing a few core components.

For this reason, we’ve provided a list of recommended models and features that are validated for accuracy and stress-tested for performance.

Get started

Get started with vLLM on TPUs by following the quickstart guide.
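
As a rough sketch of what that looks like end to end (the quickstart guide has the authoritative steps for provisioning a TPU VM and installing the right versions), you can start vLLM's standard OpenAI-compatible server and query it with any OpenAI client. The install command, model name, and port below are assumptions, not prescriptions:

```python
# Assumed setup (consult the quickstart for exact versions and TPU runtime steps):
#   pip install vllm tpu-inference
#   vllm serve meta-llama/Llama-3.1-8B-Instruct
# Then query the OpenAI-compatible endpoint (default port 8000):
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello from a TPU."}],
)
print(resp.choices[0].message.content)
```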

Visit our documentation to learn more.

Contribute

We're always looking for ways to partner with the community to accelerate vLLM TPU development. If you're interested in contributing to this effort, check out the Contributing guide and Issues to start. If it's your first time contributing, we recommend filtering Issues by the good first issue label.

Contact us

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tpu_inference-0.11.1.tar.gz (295.0 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

tpu_inference-0.11.1-py3-none-any.whl (355.3 kB)

Uploaded Python 3

File details

Details for the file tpu_inference-0.11.1.tar.gz.

File metadata

  • Download URL: tpu_inference-0.11.1.tar.gz
  • Upload date:
  • Size: 295.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for tpu_inference-0.11.1.tar.gz

  • SHA256: b8aa032bc8acc19b92741a09b114a92a8ba89de97eab422b256f550f267ca6f4
  • MD5: 5d53523615cf30d6c73f5c076fa03e7f
  • BLAKE2b-256: 5d900d396994fb4133d0b50a5802b474afbfdf4bb24b0fe2f22cf0ec32b1686a

See more details on using hashes here.
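
If you download the sdist manually, a few lines of Python are enough to check it against the SHA256 published above (this assumes the file is in your working directory):

```python
import hashlib

# SHA256 for tpu_inference-0.11.1.tar.gz, as published above.
EXPECTED = "b8aa032bc8acc19b92741a09b114a92a8ba89de97eab422b256f550f267ca6f4"

with open("tpu_inference-0.11.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("hash matches" if digest == EXPECTED else f"hash mismatch: {digest}")
```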

File details

Details for the file tpu_inference-0.11.1-py3-none-any.whl.

File metadata

  • Download URL: tpu_inference-0.11.1-py3-none-any.whl
  • Upload date:
  • Size: 355.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for tpu_inference-0.11.1-py3-none-any.whl

  • SHA256: 67b97c256e112cc790fcb91d082578f758df5a70e30b1f59b1d8c45a2445cc7f
  • MD5: 6da2976c26acb05c8154d2b4e10c84f2
  • BLAKE2b-256: 92f7d1207c1d1c9cba36b505ed66d7b8b8eb37ff87f2a94f0fb9951e30a64001

See more details on using hashes here.
