TPU Monitoring Dashboard

TPU-TOP

A simple terminal-based monitoring dashboard for Google Cloud TPUs, designed to give you real-time visibility into your machine's performance both on the host and the device.

[!NOTE] This tool was inspired by the nvitop project for GPUs. This is a community project and not an official Google product.


Project Overview

tpu-top provides a visual TUI (Terminal User Interface) for monitoring system and TPU resources. It is designed to run directly on a TPU instance, whether a GCE VM or a GKE Pod.

tpu-top UI

What You Can See

  • TPU Memory & Utilization: Real-time memory usage, TensorCore utilization, and raw duty cycle for each TPU device.
  • History Graphs: Visual graphs with timeline markers showing the history of CPU (with core count), RAM (with GiB usage), and TPU usage.
  • Duty Cycle History: A dedicated panel showing the history of TPU duty cycle.
  • PIDs per TPU: A dedicated process list showing which PIDs are utilizing specific TPU devices, including their host RAM and CPU impact.
  • Active HLO Ops: Current HLO operations executing on each TPU core (Tensor Cores and Sparse Cores).
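On a Linux TPU host, a per-device process list like the one above can be approximated by scanning which PIDs hold a TPU device file open. A minimal sketch (the /dev/accel* path is an assumption that may vary by TPU generation and driver; this is not tpu-top's actual implementation):

```python
import glob
import os


def pids_using_tpu():
    """Return sorted PIDs that have a TPU device file open, by scanning
    /proc/<pid>/fd symlinks for targets under /dev/accel* (path assumed)."""
    pids = set()
    for fd in glob.glob("/proc/[0-9]*/fd/*"):
        try:
            if os.readlink(fd).startswith("/dev/accel"):
                # fd looks like /proc/1234/fd/5 -> PID is the 3rd component.
                pids.add(int(fd.split("/")[2]))
        except OSError:
            # Process exited or we lack permission to read this fd; skip it.
            continue
    return sorted(pids)
```

Each PID found this way can then be joined with its /proc/&lt;pid&gt;/stat fields to report the host RAM and CPU impact shown in the dashboard.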

Calculations Explained

Duty Cycle

Duty Cycle represents the percentage of time the TPU is "busy" (not idle) during a given sampling window.

Performance Insights:

  • High Duty Cycle (e.g., >90%): The TPU is constantly running kernels and is not waiting on the host.
  • Low Duty Cycle (e.g., <30%): This is often a sign of "data starvation." The TPU is idle because it is waiting for the CPU to provide input data.
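Conceptually, duty cycle over a window can be derived from a cumulative busy-time counter sampled at the window's edges. A minimal sketch (function and parameter names are illustrative, not tpu-top's internals):

```python
def duty_cycle(busy_time_prev, busy_time_now, window_seconds):
    """Percentage of the sampling window the device spent non-idle,
    given cumulative busy-time counters (seconds) at the start and
    end of the window. Clamped to [0, 100] to absorb sampling jitter."""
    busy = busy_time_now - busy_time_prev
    return 100.0 * min(max(busy / window_seconds, 0.0), 1.0)
```

For example, a device that accumulated 0.9 s of busy time in a 1 s window reports a 90% duty cycle.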

TensorCore Utilization

TensorCore Utilization measures the computational intensity of the workload. It tracks what percentage of the TPU's peak theoretical matrix-multiplication capacity is actually being used while the chip is active.

Performance Insights:

  • Low TensorCore Utilization: If your Duty Cycle is high but your TensorCore Utilization is low, your TPU is "busy," but it isn't doing much math. This often occurs when:
    • Batch sizes are too small to saturate the hardware.
    • The model is limited by memory bandwidth rather than compute.
    • The code spends a lot of time on non-matrix operations (e.g., element-wise ops or transposes).
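A back-of-the-envelope version of this metric divides achieved matrix-multiply throughput by the chip's peak. A minimal sketch (the 275 bf16 TFLOP/s peak is the publicly quoted figure for a TPU v4 chip, used here only as an example; how tpu-top obtains the achieved rate is not shown):

```python
def tensorcore_utilization(achieved_tflops, peak_tflops):
    """Percentage of peak matrix-multiply throughput actually achieved
    while the chip is active."""
    return 100.0 * achieved_tflops / peak_tflops


# A kernel sustaining 90 TFLOP/s on a chip with a 275 TFLOP/s peak
# is using roughly a third of the matrix units.
util = tensorcore_utilization(90.0, 275.0)
```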

How to use them together

  • Low Duty Cycle + Low TensorCore Util: Your TPU is mostly idle, likely waiting for data from the CPU.
  • High Duty Cycle + Low TensorCore Util: Your TPU is constantly working, but the specific operations (kernels) you are running are not computationally dense (likely memory-bound or using small batch sizes).
  • High Duty Cycle + High TensorCore Util: Ideal performance; you are keeping the TPU busy and fully utilizing its matrix-multiplication hardware.
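The three cases above can be sketched as a small triage helper (thresholds and wording are illustrative, not part of tpu-top):

```python
def diagnose(duty_cycle_pct, tc_util_pct, high=90.0, low=30.0):
    """Map the two metrics to the rough diagnoses described above."""
    if duty_cycle_pct < low and tc_util_pct < low:
        return "mostly idle: likely data starvation from the input pipeline"
    if duty_cycle_pct >= high and tc_util_pct < low:
        return "busy but not compute-dense: likely memory-bound or small batches"
    if duty_cycle_pct >= high and tc_util_pct >= high:
        return "healthy: TPU busy and matrix units well utilized"
    return "mixed: inspect per-op HLO activity"
```

Anything in between the named regimes usually calls for a closer look at the Active HLO Ops panel.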

Installation

From PyPI (Recommended)

pip install tpu-top

From Source

You can also install tpu-top directly from the source directory.

Prerequisites

Ensure you have Python 3.10+ and access to a Cloud TPU environment. The tool relies on tpu-info to communicate with the TPU driver.

Standard Source Install

Navigate to the project root directory and run:

pip install .

Developer Install

If you are making modifications and want them to reflect immediately:

pip install -e .

How to Use

Once installed, you can launch the dashboard from anywhere in your terminal:

tpu-top

Running Tests

To validate changes, run the unit tests:

python -m unittest test_main.py

(Note: if testing inside a GKE container, ensure the dependencies are installed in your target environment.)

