# tpu-info CLI
tpu-info is a simple CLI tool for detecting Cloud TPU devices and reading
runtime metrics from libtpu, including memory usage and duty cycle. It
supports both a static, one-time snapshot and a live streaming mode to monitor
metrics continuously.
**Note:** To access libtpu utilization metrics, you must have a workload
running with a supported ML framework, such as JAX or PyTorch/XLA. See the
Usage section for more information.
## What's New in Version 0.10.0

### New Features

- Adds `inbound_buffer_transfer_latency` metric: tracks the latency of inbound buffer transfers
- Adds `host_compute_latency` metric: tracks the host compute latency
## Installing

Install the latest release using pip:

```shell
pip install tpu-info
```

Alternatively, install tpu-info from source:

```shell
pip install git+https://github.com/AI-Hypercomputer/cloud-accelerator-diagnostics/#subdirectory=tpu_info
```
## Usage

To view current TPU utilization data, tpu-info requires a running TPU workload
with a supported ML framework[^1], such as JAX or PyTorch/XLA. For example:

```python
# JAX
>>> import jax
>>> jax.device_count()
4
# Create a tensor on the TPU
>>> t = jax.numpy.ones((300, 300))
```

```python
# PyTorch/XLA
>>> import torch
>>> import torch_xla
>>> t = torch.randn((300, 300), device=torch_xla.device())
```
Then, on the same machine, you can run the tpu-info command in your terminal.
### Static Mode

Run the following command for a one-time snapshot of the current metrics.

```shell
$ tpu-info
Libtpu version: 0.0.19.dev20250721+nightly
Accelerator type: v6e

TPU Chips
| Chip        | Type         | Devices | PID     |
|-------------|--------------|---------|---------|
| /dev/vfio/0 | TPU v6e chip | 1       | 1469584 |
| /dev/vfio/1 | TPU v6e chip | 1       | 1469584 |
| /dev/vfio/2 | TPU v6e chip | 1       | 1469584 |
| /dev/vfio/3 | TPU v6e chip | 1       | 1469584 |

TPU Runtime Utilization
| Chip | HBM Usage (GiB)       | Duty cycle |
|------|-----------------------|------------|
| 0    | 18.45 GiB / 31.25 GiB | 100.00%    |
| 1    | 10.40 GiB / 31.25 GiB | 100.00%    |
| 2    | 10.40 GiB / 31.25 GiB | 100.00%    |
| 3    | 10.40 GiB / 31.25 GiB | 100.00%    |

TensorCore Utilization
| Core ID | TensorCore Utilization |
|---------|------------------------|
| 0       | 13.60%                 |
| 1       | 14.81%                 |
| 2       | 14.36%                 |
| 3       | 13.60%                 |

TPU Buffer Transfer Latency
| Buffer Size | P50          | P90          | P95          | P999         |
|-------------|--------------|--------------|--------------|--------------|
| 8MB+        | 108978.82 us | 164849.81 us | 177366.42 us | 212419.07 us |
| 4MB+        | 21739.38 us  | 38126.84 us  | 42110.12 us  | 55474.21 us  |

TPU Inbound Buffer Transfer Latency
| Buffer Size | P50         | P90         | P95         | P999        |
|-------------|-------------|-------------|-------------|-------------|
| 8MB+        | 18945.59 us | 34461.46 us | 39652.74 us | 56051.94 us |
| 4MB+        | 4829.09 us  | 8594.43 us  | 10236.53 us | 17754.86 us |

TPU Host Compute Latency
| Buffer Size | P50       | P90        | P95        | P999        |
|-------------|-----------|------------|------------|-------------|
| 8MB+        | 998.17 us | 3605.34 us | 6292.10 us | 11608.01 us |
| 4MB+        | 678.33 us | 2611.93 us | 5258.30 us | 11083.23 us |

TPU gRPC TCP Minimum RTT
| P50      | P90      | P95      | P999     |
|----------|----------|----------|----------|
| 35.99 us | 52.15 us | 53.83 us | 55.51 us |

TPU gRPC TCP Delivery Rate
| P50           | P90           | P95           | P999          |
|---------------|---------------|---------------|---------------|
| 12305.96 Mbps | 18367.10 Mbps | 24872.11 Mbps | 44841.55 Mbps |
```
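The snapshot is plain text, so it is straightforward to post-process in a script. Below is a minimal sketch of pulling the per-chip HBM numbers out of the "TPU Runtime Utilization" table; the row layout is assumed to match the sample output above, and `parse_hbm_usage` is an illustrative helper, not part of tpu-info.

```python
import re

def parse_hbm_usage(output: str) -> dict[str, tuple[float, float]]:
    """Extract (used, total) HBM GiB per chip from the
    'TPU Runtime Utilization' table in tpu-info's output.
    Assumes rows shaped like: | 0 | 18.45 GiB / 31.25 GiB | 100.00% |"""
    row = re.compile(
        r"\|\s*(\d+)\s*\|\s*([\d.]+)\s*GiB\s*/\s*([\d.]+)\s*GiB\s*\|"
    )
    usage = {}
    for match in row.finditer(output):
        chip, used, total = match.groups()
        usage[chip] = (float(used), float(total))
    return usage

# Sample rows copied from the output above.
sample = """\
| Chip | HBM Usage (GiB)       | Duty cycle |
|------|-----------------------|------------|
| 0    | 18.45 GiB / 31.25 GiB | 100.00%    |
| 1    | 10.40 GiB / 31.25 GiB | 100.00%    |
"""
print(parse_hbm_usage(sample))
```

In a real script you would feed it the captured stdout of `tpu-info` instead of the hard-coded sample.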
### Streaming Mode

You can run tpu-info in streaming mode to periodically refresh and display
the utilization statistics.

```shell
# Refresh stats every 2 seconds
$ tpu-info --streaming --rate 2
Refresh rate: 2s
Last update: 2025-07-24 11:00:59 UTC
Libtpu version: 0.0.19.dev20250721+nightly
Accelerator type: v6e

TPU Chips
| Chip        | Type         | Devices | PID  |
|-------------|--------------|---------|------|
| /dev/vfio/0 | TPU v6e chip | 1       | 1022 |
| /dev/vfio/1 | TPU v6e chip | 1       | 1022 |
| /dev/vfio/2 | TPU v6e chip | 1       | 1022 |
| /dev/vfio/3 | TPU v6e chip | 1       | 1022 |

TPU Runtime Utilization
| Chip | HBM Usage (GiB)       | Duty cycle |
|------|-----------------------|------------|
| 8    | 17.26 GiB / 31.25 GiB | 100.00%    |
| 9    | 9.26 GiB / 31.25 GiB  | 100.00%    |
| 12   | 9.26 GiB / 31.25 GiB  | 100.00%    |
| 13   | 9.26 GiB / 31.25 GiB  | 100.00%    |

TensorCore Utilization
| Core ID | TensorCore Utilization |
|---------|------------------------|
| 0       | 15.17%                 |
| 1       | 14.62%                 |
| 2       | 14.68%                 |
| 3       | 15.14%                 |

TPU Buffer Transfer Latency
| Buffer Size | P50         | P90         | P95         | P999        |
|-------------|-------------|-------------|-------------|-------------|
| 8MB+        | 18264.03 us | 33263.06 us | 35990.98 us | 53997.32 us |

TPU Inbound Buffer Transfer Latency
| Buffer Size | P50         | P90         | P95         | P999        |
|-------------|-------------|-------------|-------------|-------------|
| 8MB+        | 18316.95 us | 32857.03 us | 36501.59 us | 58854.54 us |

TPU Host Compute Latency
| Buffer Size | P50       | P90        | P95        | P999        |
|-------------|-----------|------------|------------|-------------|
| 8MB+        | 678.33 us | 2611.93 us | 5258.30 us | 11083.23 us |

TPU gRPC TCP Minimum RTT
| P50      | P90      | P95      | P999     |
|----------|----------|----------|----------|
| 35.99 us | 52.15 us | 53.83 us | 55.51 us |

TPU gRPC TCP Delivery Rate
| P50           | P90           | P95           | P999          |
|---------------|---------------|---------------|---------------|
| 12305.96 Mbps | 18367.10 Mbps | 24872.11 Mbps | 44841.55 Mbps |
```
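If you want to log these refreshes instead of watching the terminal UI, one option is to shell out to tpu-info yourself on a timer. A rough sketch, assuming `tpu-info` is on `PATH`; the `grab` parameter and `collect` helper are illustrative, not part of tpu-info, and exist so the loop can be exercised without real TPU hardware:

```python
import subprocess
import time

def snapshot() -> str:
    """Run `tpu-info` once and return its stdout (assumes it is on PATH)."""
    return subprocess.run(
        ["tpu-info"], capture_output=True, text=True, check=True
    ).stdout

def collect(n: int, interval: float, grab=None) -> list[str]:
    """Collect n snapshots, sleeping `interval` seconds between them.
    `grab` is injectable for testing; it defaults to snapshot()."""
    grab = grab or snapshot
    snaps = []
    for i in range(n):
        if i:
            time.sleep(interval)
        snaps.append(grab())
    return snaps
```

Each returned string can then be written to a log file or parsed further.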
### Version

To check the installed tpu-info version, the libtpu version, and the
accelerator type of the TPU chip, use the `--version` or `-v` flag.

Compatible environment:

```shell
$ tpu-info --version
- tpu-info version: 0.8.0
- libtpu version: 0.0.18
- accelerator type: v6e
```

Incompatible environment (Python 3.12+):

```shell
$ tpu-info --version
- tpu-info version: 0.8.0
- libtpu version: N/A (incompatible environment)
- accelerator type: N/A (incompatible environment)
```
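As a cross-check, the installed tpu-info version can also be read from Python packaging metadata using only the standard library; this queries pip's installed-package records, not the tpu-info CLI itself:

```python
from importlib.metadata import PackageNotFoundError, version

# Look up the installed distribution named "tpu-info" via pip metadata.
try:
    print(version("tpu-info"))
except PackageNotFoundError:
    print("tpu-info is not installed")
```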
### Process

You can use the `--process` or `-p` flag to display information about the
processes currently running on the TPU.

```shell
$ tpu-info --process
TPU Process Info
| Chip        | PID    | Process Name |
|-------------|--------|--------------|
| /dev/vfio/0 | 799657 | python3      |
| /dev/vfio/1 | 799657 | python3      |
| /dev/vfio/2 | 799657 | python3      |
| /dev/vfio/3 | 799657 | python3      |
| /dev/vfio/4 | 799657 | python3      |
| /dev/vfio/5 | 799657 | python3      |
| /dev/vfio/6 | 799657 | python3      |
| /dev/vfio/7 | 799657 | python3      |
```
### List Metrics

You can use the `--list_metrics` flag to display all supported metrics that
can be passed to the `--metric` flag.

```shell
$ tpu-info --list_metrics
╭─ Supported Metrics ──────────────────────────────╮
│ buffer_transfer_latency                          │
│ collective_e2e_latency                           │
│ core_state                                       │
│ device_to_host_transfer_latency                  │
│ duty_cycle_percent                               │
│ grpc_tcp_delivery_rate                           │
│ grpc_tcp_min_rtt                                 │
│ hbm_usage                                        │
│ hlo_exec_timing                                  │
│ hlo_queue_size                                   │
│ host_compute_latency                             │
│ host_to_device_transfer_latency                  │
│ inbound_buffer_transfer_latency                  │
│ queued_programs                                  │
│ sequencer_state                                  │
│ sequencer_state_detailed                         │
│ tensorcore_utilization                           │
╰──────────────────────────────────────────────────╯
```
### Metric

You can use the `--metric` flag to display specific metrics. To query several
metrics at once, repeat the `--metric` flag.

```shell
$ tpu-info --metric duty_cycle_percent --metric hbm_usage
TPU Duty Cycle
| Core ID | Duty Cycle (%) |
|---------|----------------|
| 0       | 100.00%        |
| 1       | 100.00%        |
| 2       | 100.00%        |
| 3       | 100.00%        |
| 4       | 100.00%        |
| 5       | 100.00%        |
| 6       | 100.00%        |
| 7       | 100.00%        |

TPU HBM Usage
| Chip | HBM Usage (GiB)       |
|------|-----------------------|
| 0    | 29.50 GiB / 31.25 GiB |
| 1    | 21.50 GiB / 31.25 GiB |
| 2    | 21.50 GiB / 31.25 GiB |
| 3    | 21.50 GiB / 31.25 GiB |
| 4    | 21.50 GiB / 31.25 GiB |
| 5    | 21.50 GiB / 31.25 GiB |
| 6    | 21.50 GiB / 31.25 GiB |
| 7    | 21.50 GiB / 31.25 GiB |
```
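If you post-process these tables, note that each HBM cell combines used and total capacity in one string, reported in GiB. A small illustrative helper to turn such a cell into a utilization percentage (the cell format is taken from the output above; `hbm_percent` is not a tpu-info API):

```python
def hbm_percent(usage: str) -> float:
    """Convert a 'used GiB / total GiB' cell, e.g. '29.50 GiB / 31.25 GiB',
    into a utilization percentage."""
    used_part, total_part = usage.split("/")
    used = float(used_part.replace("GiB", "").strip())
    total = float(total_part.replace("GiB", "").strip())
    return 100.0 * used / total

print(f"{hbm_percent('29.50 GiB / 31.25 GiB'):.1f}%")  # prints 94.4%
```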
[^1]: Releases from before 2024 may not be compatible.