Performance Report Analysis Tool

Example perf report

This tool analyzes performance traces from TT-Metal operations, providing insights into throughput, bottlenecks, and optimization opportunities.

Installation

This tool can be installed from PyPI:

pipx install tt-perf-report

Installing with pipx will automatically create a virtual environment and make the tt-perf-report command available.

Generating Performance Traces

  1. Build TT-Metal with performance tracing (enabled in the default build):
./build_metal
  2. Run your test in TT-Metal with the tracy module to capture traces:
python -m tracy -r -p -v -m pytest path/to/test.py

This generates a CSV file containing operation timing data.

Using Tracy Signposts

Tracy signposts mark specific sections of code for analysis. Add signposts to your Python code:

import tracy

# Mark different sections of your code
tracy.signpost("Compilation pass")
model(input_data)

tracy.signpost("Performance pass")
for _ in range(10):
    model(input_data)

The tool uses the last signpost by default, which is typically the most relevant section for a performance test (e.g., the final iterations after compilation and warmup).

Common signpost usage:

  • --start-signpost NAME: Analyze ops after the specified signpost
  • --end-signpost NAME: Analyze ops before the specified signpost
  • --ignore-signposts: Analyze the entire trace
  • --print-signposts: Print the signposts found within the window defined by the start/end signpost arguments

Filtering Operations

The output of the performance report is a table of operations. Each operation is assigned a unique ID starting from 1. You can re-run the tool with different IDs to focus on specific sections of the trace.

Use --id-range to analyze specific sections:

# Analyze ops 5 through 10
tt-perf-report trace.csv --id-range 5-10

# Analyze from op 31 onwards
tt-perf-report trace.csv --id-range 31-

# Analyze up to op 12
tt-perf-report trace.csv --id-range -12
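The three range forms above (closed, open-ended, open-start) can be sketched with a small parser. This is an illustration of the documented `--id-range` syntax only, not the tool's actual implementation:

```python
def parse_id_range(spec, max_id):
    """Parse an --id-range spec like '5-10', '31-', or '-12' into (start, end).

    Illustrative sketch of the documented syntax, not the tool's code.
    """
    start_s, end_s = spec.split("-", 1)
    start = int(start_s) if start_s else 1       # open start defaults to the first op
    end = int(end_s) if end_s else max_id        # open end runs to the last op
    return start, end

# The three documented forms, assuming a trace with 50 ops:
print(parse_id_range("5-10", 50))  # ops 5 through 10
print(parse_id_range("31-", 50))   # from op 31 onwards
print(parse_id_range("-12", 50))   # up to op 12
```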

This is particularly useful for:

  • Isolating decode pass in prefill+decode LLM inference
  • Analyzing single transformer layers without embeddings/projections
  • Focusing on specific model components

Output Options

  • --min-percentage value: Hide ops below specified % of total time (default: 0.5)
  • --color/--no-color: Force colored/plain output
  • --csv FILENAME: Output the table to CSV format for further analysis or inclusion in automated reporting pipelines
  • --no-advice: Show only performance table, skip optimization advice

Understanding the Performance Report

The performance report provides several key metrics for analyzing operation performance:

Core Metrics

  • Device Time: Time spent executing the operation on device (in microseconds)
  • Op-to-op Gap: Time between operations, including host overhead and kernel dispatch (in microseconds)
  • Total %: Percentage of total execution time spent on this operation
  • Cores: Number of cores used by the operation (max 64 on Wormhole)

Performance Metrics

  • DRAM: Memory bandwidth achieved (in GB/s)
  • DRAM %: Percentage of theoretical peak DRAM bandwidth (288 GB/s on Wormhole)
  • FLOPs: Compute throughput achieved (in TFLOPs)
  • FLOPs %: Percentage of theoretical peak compute for the given math fidelity
  • Bound: Performance classification of the operation:
    • DRAM: Memory bandwidth bound (>65% of peak DRAM)
    • FLOP: Compute bound (>65% of peak FLOPs)
    • BOTH: Both memory and compute bound
    • SLOW: Neither memory nor compute bound
    • HOST: Operation running on host CPU
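The `Bound` classification amounts to a simple threshold check. The 65% cutoff and the 288 GB/s Wormhole DRAM peak come from this page; the function itself is a hypothetical sketch, not the tool's code:

```python
DRAM_PEAK_GBPS = 288.0   # theoretical peak DRAM bandwidth on Wormhole (per this page)
BOUND_THRESHOLD = 0.65   # ops above 65% of a peak are considered bound on it

def classify_bound(dram_gbps, flops_frac):
    """Classify an op as DRAM/FLOP/BOTH/SLOW from its achieved utilization.

    dram_gbps:  achieved DRAM bandwidth in GB/s.
    flops_frac: achieved compute as a fraction of peak (0.0-1.0).
    Hypothetical sketch of the documented rules.
    """
    dram_bound = dram_gbps / DRAM_PEAK_GBPS > BOUND_THRESHOLD
    flop_bound = flops_frac > BOUND_THRESHOLD
    if dram_bound and flop_bound:
        return "BOTH"
    if dram_bound:
        return "DRAM"
    if flop_bound:
        return "FLOP"
    return "SLOW"

print(classify_bound(250.0, 0.10))  # ~87% of DRAM peak, low compute -> DRAM
print(classify_bound(50.0, 0.20))   # neither threshold met -> SLOW
```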

Additional Fields

  • Math Fidelity: Precision configuration used for matrix operations:
    • HiFi4: Highest precision (74 TFLOPs/core)
    • HiFi2: Medium precision (148 TFLOPs/core)
    • LoFi: Lowest precision (262 TFLOPs/core)
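The `FLOPs %` column relates achieved compute to the per-fidelity peak. The peak figures below are the ones listed above; the helper itself is an illustrative sketch (the real tool derives this internally from the trace):

```python
# Peak compute per core by math fidelity (TFLOPs/core, as listed above)
PEAK_TFLOPS_PER_CORE = {"HiFi4": 74, "HiFi2": 148, "LoFi": 262}

def flops_pct(achieved_tflops, fidelity, cores):
    """Achieved compute as a percentage of theoretical peak for a fidelity.

    Illustrative sketch, not the tool's implementation.
    """
    peak = PEAK_TFLOPS_PER_CORE[fidelity] * cores
    return 100.0 * achieved_tflops / peak

print(flops_pct(37.0, "HiFi4", 1))  # 37 of 74 TFLOPs on one core -> 50.0
```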

The tool automatically highlights potential optimization opportunities:

  • Red op-to-op times indicate high host or kernel launch overhead (>6.5μs)
  • Red core counts indicate underutilization (<10 cores)
  • Green metrics indicate good utilization of available resources
  • Yellow metrics indicate room for optimization
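The red highlighting rules reduce to two threshold checks. The 6.5 µs and 10-core cutoffs are taken from this page; the function is a hypothetical sketch of those rules:

```python
OP_GAP_RED_US = 6.5   # op-to-op gaps above this suggest host/dispatch overhead
CORES_RED_MIN = 10    # core counts below this suggest underutilization

def flag_op(op_to_op_gap_us, cores):
    """Return the warnings the report would highlight in red for one op.

    Hypothetical sketch of the documented highlighting thresholds.
    """
    flags = []
    if op_to_op_gap_us > OP_GAP_RED_US:
        flags.append("high op-to-op gap")
    if cores < CORES_RED_MIN:
        flags.append("low core count")
    return flags

print(flag_op(12.0, 64))  # ['high op-to-op gap']
print(flag_op(2.0, 8))    # ['low core count']
```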

Examples

Note:
trace.csv in the examples below refers to your input CSV file (the performance trace you want to analyze).

Typical use:

tt-perf-report trace.csv

Merge traces captured on multiple machines from the same workload run:

tt-perf-report trace_host0.csv trace_host1.csv trace_host2.csv

Build a table of all ops with no advice:

tt-perf-report trace.csv --no-advice

View ops 100-200 with advice:

tt-perf-report trace.csv --id-range 100-200

Export the table of ops and columns as a CSV file:

tt-perf-report trace.csv --csv my_report.csv

