
TPU Monitoring Dashboard


TPU-TOP

A simple terminal-based monitoring dashboard for Google Cloud TPUs, designed to give you real-time visibility into your machine's performance both on the host and the device.

[!NOTE] This tool was inspired by the nvitop project for GPUs. This is a community project and not an official Google product.


Project Overview

tpu-top provides a terminal user interface (TUI) for monitoring system and TPU resources. It is designed to run directly on a TPU instance, whether on a GCE VM or in a GKE Pod.

tpu-top UI

What You Can See

  • TPU Memory & Utilization: Real-time memory usage, TensorCore utilization, and raw duty cycle for each TPU device.
  • History Graphs: Visual graphs with timeline markers showing the history of CPU (with core count), RAM (with GiB usage), and TPU usage.
  • Duty Cycle History: A dedicated panel showing the history of TPU duty cycle.
  • PIDs per TPU: A dedicated process list showing which PIDs are utilizing specific TPU devices, including their host RAM and CPU impact.
  • Active HLO Ops: Current HLO operations executing on each TPU core (Tensor Cores and Sparse Cores).

Calculations Explained

Duty Cycle

Duty Cycle represents the percentage of time the TPU is "busy" (not idle) during a given sampling window.

Performance Insights:

  • High Duty Cycle (e.g., >90%): The TPU is constantly running kernels and is not waiting on the host.
  • Low Duty Cycle (e.g., <30%): This is often a sign of "data starvation." The TPU is idle because it is waiting for the CPU to provide input data.
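Conceptually, duty cycle is just busy time divided by the length of the sampling window. The sketch below illustrates this arithmetic; the counter names are illustrative and are not tpu-top's actual API.

```python
def duty_cycle_pct(busy_ns: int, window_ns: int) -> float:
    """Percent of the sampling window during which the device was not idle.

    busy_ns is assumed to come from an accumulated device busy-time counter,
    sampled at the start and end of the window (hypothetical source).
    """
    if window_ns <= 0:
        return 0.0
    # Clamp in case counter sampling slightly overshoots the window.
    return 100.0 * min(busy_ns, window_ns) / window_ns

# e.g. 850 ms of busy time within a 1 s window -> 85.0
print(duty_cycle_pct(850_000_000, 1_000_000_000))
```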

TensorCore Utilization

TensorCore Utilization measures the computational intensity of the workload. It tracks what percentage of the TPU's peak theoretical matrix-multiplication capacity is actually being used while the chip is active.
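As a back-of-the-envelope illustration, utilization is achieved matrix FLOP/s divided by peak FLOP/s. The peak figure used below is a placeholder for illustration, not a published hardware spec:

```python
def tensorcore_util_pct(achieved_tflops: float, peak_tflops: float) -> float:
    """Achieved matrix-multiply throughput as a percent of peak."""
    return 100.0 * achieved_tflops / peak_tflops

# A workload sustaining 90 TFLOP/s on a chip with an assumed 275 TFLOP/s peak:
print(round(tensorcore_util_pct(90.0, 275.0), 1))  # -> 32.7
```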

Performance Insights:

  • Low TensorCore Utilization: If your Duty Cycle is high but your TensorCore Utilization is low, your TPU is "busy," but it isn't doing much math. This often occurs when:
    • Batch sizes are too small to saturate the hardware.
    • The model is limited by memory bandwidth rather than compute.
    • The code spends a lot of time on non-matrix operations (e.g., transposes, reshapes, or element-wise/scalar work).

How to use them together

  • Low Duty Cycle + Low TensorCore Util: Your TPU is mostly idle, likely waiting for data from the CPU.
  • High Duty Cycle + Low TensorCore Util: Your TPU is constantly working, but the specific operations (kernels) you are running are not computationally dense (likely memory-bound or using small batch sizes).
  • High Duty Cycle + High TensorCore Util: Ideal performance; you are keeping the TPU busy and fully utilizing its matrix-multiplication hardware.
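The decision table above can be sketched as a small helper. The thresholds (90% / 30%) mirror the examples in this section and are illustrative, not values tpu-top itself applies:

```python
def diagnose(duty_cycle: float, tc_util: float,
             high: float = 90.0, low: float = 30.0) -> str:
    """Map (duty cycle %, TensorCore utilization %) to a rough diagnosis."""
    if duty_cycle < low and tc_util < low:
        return "mostly idle: likely waiting on the input pipeline (data starvation)"
    if duty_cycle >= high and tc_util < low:
        return "busy but not compute-dense: likely memory-bound or small batches"
    if duty_cycle >= high and tc_util >= high:
        return "ideal: TPU busy and matrix hardware well utilized"
    return "mixed signals: inspect the history graphs and active HLO ops"

print(diagnose(95.0, 12.0))
```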

Installation

From PyPI (Recommended)

pip install tpu-top

From Source

You can also install tpu-top directly from the source directory.

Prerequisites

Ensure you have Python 3.10+ and access to a Cloud TPU environment. The tool relies on tpu-info to communicate with the TPU driver.

Standard Source Install

Navigate to the project root directory and run:

pip install .

Developer Install

If you are making modifications and want them to reflect immediately:

pip install -e .

How to Use

Once installed, you can launch the dashboard from anywhere in your terminal:

tpu-top

Running Tests

To validate changes, run the unit tests:

python -m unittest test_main.py

(Note: If testing inside a GKE container, ensure dependencies are installed in your target environment).
