
Optimum TPU is the interface between the Hugging Face Transformers library and Google Cloud TPU devices.

Project description

Optimum-TPU

Get the most out of Google Cloud TPUs with the ease of 🤗 Transformers


Tensor Processing Units (TPUs) are AI accelerators made by Google to optimize performance and cost, from AI training to inference.

This repository exposes an interface similar to the one provided by the Hugging Face transformers library, to interact with a multitude of models developed by research labs, institutions and the community.

We aim to provide our users with the best possible performance on Google Cloud TPUs, for both training and inference, working closely with Google and Google Cloud to make this a reality.

Supported Models and Tasks

We currently support a few LLMs targeting text-generation scenarios:

  • 💎 Gemma (2b, 7b)
  • 🦙 Llama2 (7b) and Llama3 (8b). With Text Generation Inference on JetStream Pytorch, Llama3.1, Llama3.2 and Llama3.3 (text-only models) are also supported, up to 70B parameters.
  • 💨 Mistral (7b)

Installation

optimum-tpu comes with a handy package released on PyPI, compatible with your usual Python dependency management tools.

pip install optimum-tpu -f https://storage.googleapis.com/libtpu-releases/index.html

export PJRT_DEVICE=TPU
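
As a quick sanity check, you can confirm that the TPU is visible from Python. This is a minimal sketch; it assumes torch_xla was installed alongside optimum-tpu by the command above:

import torch_xla.core.xla_model as xm

# With PJRT_DEVICE=TPU set, this resolves to an XLA device backed by
# the TPU, e.g. "xla:0".
device = xm.xla_device()
print(device)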

Inference

optimum-tpu provides a set of dedicated tools and integrations to leverage Cloud TPUs for inference, especially on the latest TPU versions, v5e and v6e.

Other TPU versions will be supported along the way.

Text-Generation-Inference

As part of the integration, we support a text-generation-inference (TGI) backend, allowing you to deploy a server that handles incoming HTTP requests and executes them on Cloud TPUs.

Please see the TGI-specific documentation on how to get started.
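
For example, once a TGI server is up and running on the TPU host, it can be queried like any other TGI deployment via the /generate endpoint. The address and port below are assumptions; adjust them to your setup:

import requests

# Assumption: a TGI server is listening on localhost:8080.
response = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "What are TPUs?", "parameters": {"max_new_tokens": 64}},
)
print(response.json()["generated_text"])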

JetStream Pytorch Engine

optimum-tpu provides optional support for the JetStream Pytorch engine inside TGI. This support can be installed using the dedicated CLI command:

optimum-tpu install-jetstream-pytorch

To enable the support, export the environment variable JETSTREAM_PT=1.

Training

Fine-tuning is supported and tested on TPU v5e. So far, we have tested:

  • 🦙 Llama-2 7B, Llama-3 8B and newer;
  • 💎 Gemma 2B and 7B.

You can check the examples in the repository.
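
For reference, below is a minimal fine-tuning sketch using the standard transformers Trainer API on a TPU VM. The model and dataset choices are illustrative assumptions, not the maintained recipes, which live in the repository's examples:

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "google/gemma-2b"  # illustrative: any supported model should work
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative dataset; replace with your own.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def tokenize(sample):
    return tokenizer(sample["instruction"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./fine-tuned",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        optim="adafactor",
    ),
    train_dataset=tokenized,
    # Causal-LM collator: labels are a copy of input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()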

Download files

Download the file for your platform.

Source Distribution

optimum_tpu-0.2.3.tar.gz (143.4 kB)


Built Distribution


optimum_tpu-0.2.3-py3-none-any.whl (79.8 kB)


File details

Details for the file optimum_tpu-0.2.3.tar.gz.

File metadata

  • Download URL: optimum_tpu-0.2.3.tar.gz
  • Upload date:
  • Size: 143.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.3

File hashes

Hashes for optimum_tpu-0.2.3.tar.gz

  • SHA256: 78c07b0115401e17bfce24faa08afa34c39239a79f7e06419e3fdf7bb6bc26f0
  • MD5: a104ac3147b2013fc9dc3f442e990fea
  • BLAKE2b-256: dc6186c3b1f37b177ac3102f6f1d7f9a8db753963e5399fef45811ff2b3b548b


File details

Details for the file optimum_tpu-0.2.3-py3-none-any.whl.

File metadata

  • Download URL: optimum_tpu-0.2.3-py3-none-any.whl
  • Upload date:
  • Size: 79.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.3

File hashes

Hashes for optimum_tpu-0.2.3-py3-none-any.whl

  • SHA256: 41fce7da42e265beff1a63bccf295db7547534112f15692a5d39d1c4ea680511
  • MD5: 5298fd5868aea5bee56d93ef55751619
  • BLAKE2b-256: 51159c6d3effecf26eac6e9cf4d2ea0d0c3c0a4a9297f26a1869b3ff2d3c6ae1

