
Optimum TPU is the interface between the Hugging Face Transformers library and Google Cloud TPU devices.

Project description

Optimum-TPU

Get the most out of Google Cloud TPUs with the ease of 🤗 transformers


Tensor Processing Units (TPUs) are AI accelerators made by Google to optimize performance and cost across AI training and inference.

This repository exposes an interface similar to what the Hugging Face transformers library provides to interact with a multitude of models developed by research labs, institutions and the community.

We aim to provide our users the best possible performance on Google Cloud TPUs for both training and inference, working closely with Google and Google Cloud to make this a reality.

Supported Model and Tasks

We currently support a few LLMs targeting text-generation scenarios:

  • 💎 Gemma (2b, 7b)
  • 🦙 Llama2 (7b) and Llama3 (8b)
  • 💨 Mistral (7b)

Installation

optimum-tpu comes with a handy package released on PyPI, compatible with your usual Python dependency management tools.

pip install optimum-tpu -f https://storage.googleapis.com/libtpu-releases/index.html

export PJRT_DEVICE=TPU
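If you prefer to set the runtime flag from Python rather than your shell, the following sketch is the equivalent of the `export` above. Note that it must run before `torch_xla` (pulled in by optimum-tpu) is imported, otherwise the TPU runtime will not be selected:

```python
import os

# Equivalent of `export PJRT_DEVICE=TPU`; setdefault() keeps any value
# already set in the environment. This must happen before importing
# torch_xla, which reads PJRT_DEVICE at import time.
os.environ.setdefault("PJRT_DEVICE", "TPU")
```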

Inference

optimum-tpu provides a set of dedicated tools and integrations to leverage Cloud TPUs for inference, especially on the latest TPU version, v5e.

Other TPU versions will be supported along the way.

Text-Generation-Inference

As part of the integration, we support a text-generation-inference (TGI) backend that allows you to deploy and serve incoming HTTP requests and execute them on Cloud TPUs.

Please see the TGI specific documentation on how to get started.
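Once a TGI server is running, you can query it over HTTP from any client. The sketch below uses only the Python standard library and TGI's documented `/generate` endpoint; the host and port are assumptions, adjust them to wherever your TGI container listens:

```python
import json
import urllib.request

# Assumption: a TGI server is reachable at this address.
TGI_URL = "http://localhost:8080/generate"

def build_generate_request(prompt: str, max_new_tokens: int = 64) -> dict:
    """Build a JSON payload matching TGI's /generate request schema."""
    return {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """POST the prompt to a running TGI server and return the generated text."""
    payload = json.dumps(build_generate_request(prompt, max_new_tokens)).encode()
    req = urllib.request.Request(
        TGI_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["generated_text"]
```

For example, `generate("What are TPUs?", max_new_tokens=32)` returns the model's completion as a string.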

JetStream Pytorch Engine

optimum-tpu provides optional support for the JetStream PyTorch engine inside TGI. This support can be installed using the dedicated CLI command:

optimum-tpu install-jetstream-pytorch

To enable the support, export the environment variable JETSTREAM_PT=1.

Training

Fine-tuning is supported and tested on the TPU v5e. We have tested so far:

  • 🦙 Llama-2 7B and Llama-3 8B
  • 💎 Gemma 2B and 7B

You can check out the examples in the repository.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

optimum_tpu-0.2.1.tar.gz (135.8 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

optimum_tpu-0.2.1-py3-none-any.whl (79.8 kB)

Uploaded Python 3

File details

Details for the file optimum_tpu-0.2.1.tar.gz.

File metadata

  • Download URL: optimum_tpu-0.2.1.tar.gz
  • Upload date:
  • Size: 135.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.12

File hashes

Hashes for optimum_tpu-0.2.1.tar.gz
Algorithm Hash digest
SHA256 d55458d7982b79b7f51e50ce802b5caea16a19e5c19aa1a82b5e4c70d9292ab0
MD5 ab0047e4f843e53af576f839a2248f87
BLAKE2b-256 8778acef460a5f995b75e58eb41ed55fda144e6b504558567263e4975e98fe29

See more details on using hashes here.

File details

Details for the file optimum_tpu-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: optimum_tpu-0.2.1-py3-none-any.whl
  • Upload date:
  • Size: 79.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.12

File hashes

Hashes for optimum_tpu-0.2.1-py3-none-any.whl
Algorithm Hash digest
SHA256 a1c7c51a6c127e7ed09b71dc54ee91a0d7eac76d7d1a6532c43eb25a186ee9f6
MD5 8e19b0e925eceee3213b2823175e054f
BLAKE2b-256 70e61dcc248bf31ecf04181de79d3caabe04c69dee86ee529b3a6e5badc42f5b

See more details on using hashes here.
