
Project description

vLLM TPU

| Documentation | Blog | User Forum | Developer Slack (#sig-tpu) |



About

vLLM TPU is now powered by tpu-inference, an expressive and powerful new hardware plugin that unifies JAX and PyTorch under a single lowering path within the vLLM project. The new backend provides a framework for developers to:

  • Push the limits of TPU hardware performance in open source.
  • Give JAX and PyTorch users more flexibility: PyTorch model definitions run performantly on TPU without any additional code changes, and JAX models are supported natively.
  • Retain vLLM standardization: the same user experience, telemetry, and interface (see the sketch below).
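
Because the interface is unchanged, existing vLLM code runs as-is. A minimal sketch, assuming vLLM and the tpu-inference plugin are installed on a TPU VM (the model name is only an example):

```python
# Standard vLLM offline-inference API; nothing TPU-specific is required.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.7, max_tokens=32)

# With tpu-inference installed, vLLM dispatches to the TPU backend
# automatically; the PyTorch model definition needs no changes.
llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")

for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```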

Recommended models and features

Although vLLM TPU’s new unified backend makes high-performance, out-of-the-box serving possible with any model supported in vLLM, a few core components are still being implemented.

For this reason, we’ve provided a Recommended Models and Features page detailing the models and features that are validated through unit, integration, and performance testing.

Get started

Get started with vLLM on TPUs by following the quickstart guide.
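
As a rough sketch of what the quickstart workflow looks like: start vLLM's OpenAI-compatible server on your TPU VM (e.g. with `vllm serve <model>`), then query it from any OpenAI client. The port, API key, and model name below are assumptions, not fixed values:

```python
# Query a running vLLM OpenAI-compatible server using the standard openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    messages=[{"role": "user", "content": "Say hello from a TPU."}],
)
print(response.choices[0].message.content)
```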

Visit our documentation to learn more.

Compatible TPU Generations

  • Recommended: v5e, v6e
  • Experimental: v3, v4, v5p
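
If you're unsure which generation your VM has, here is a quick check from Python, assuming JAX with TPU support is installed:

```python
# List the TPU devices visible to JAX and report their type.
import jax

devices = jax.devices()
print(f"Found {len(devices)} device(s)")
# device_kind names the accelerator; the exact string varies by generation.
print(f"Device kind: {devices[0].device_kind}")
```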

Check out a few v6e recipes here!

Contribute

We're always looking for ways to partner with the community to accelerate vLLM TPU development. If you're interested in contributing to this effort, check out the Contributing guide and Issues to start. We recommend filtering Issues on the good first issue tag if it's your first time contributing.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

tpu_inference-0.12.0.dev20251206.tar.gz (365.2 kB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

tpu_inference-0.12.0.dev20251206-py3-none-any.whl (434.8 kB, Python 3)

File details

Details for the file tpu_inference-0.12.0.dev20251206.tar.gz.

File metadata

File hashes

Hashes for tpu_inference-0.12.0.dev20251206.tar.gz
Algorithm    Hash digest
SHA256       c29d3d82b6450701e200c7367c4d0508743dc9d5b63475b30bf13c318e3c2343
MD5          2928a51287c342e560f90d1b099819d2
BLAKE2b-256  5a2de710ede4877af74bacd9bb46814f94f6ec7d3802294c05b9f61403bfd3e4

See more details on using hashes here.
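
If you want to check a download yourself, here is a minimal verification sketch using only the standard library; the expected value is the SHA256 digest from the table above, and the path is assumed to be the file as downloaded:

```python
# Verify the downloaded sdist against the published SHA256 digest.
import hashlib

EXPECTED = "c29d3d82b6450701e200c7367c4d0508743dc9d5b63475b30bf13c318e3c2343"

with open("tpu_inference-0.12.0.dev20251206.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED, f"hash mismatch: {digest}"
print("SHA256 verified")
```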

Provenance

The following attestation bundles were made for tpu_inference-0.12.0.dev20251206.tar.gz:

Publisher: release.yml on vllm-project/tpu-inference

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file tpu_inference-0.12.0.dev20251206-py3-none-any.whl.

File metadata

File hashes

Hashes for tpu_inference-0.12.0.dev20251206-py3-none-any.whl
Algorithm    Hash digest
SHA256       d728aa2af5a18c9250e0b6c84a71b5c6a6c90d6654b4c9d31237ab2e5cd0935b
MD5          54f98d66c2889b2982d4f9c91cf87bcf
BLAKE2b-256  c4334232de2f10c63e6edfa8559bf6c6d22e1efc0991e9ecb7f581ad3c023b5b

See more details on using hashes here.

Provenance

The following attestation bundles were made for tpu_inference-0.12.0.dev20251206-py3-none-any.whl:

Publisher: release.yml on vllm-project/tpu-inference

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
