
Triton Performance Analyzer


Triton Performance Analyzer is a CLI tool that helps you optimize the inference performance of models running on Triton Inference Server by measuring changes in performance as you experiment with different optimization strategies.

⚠️ Warning: genai-perf is being deprecated. Please migrate to AIPerf for continued support and enhanced features.


Features

Inference Load Modes

  • Concurrency Mode simulates load by maintaining a specific number of concurrent outgoing requests to the server

  • Request Rate Mode simulates load by sending consecutive requests at a specific rate to the server

  • Custom Interval Mode simulates load by sending consecutive requests at specific intervals to the server
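
Each load mode maps to a perf_analyzer option. As a rough sketch (flag names reflect recent perf_analyzer releases and may differ in yours; `intervals.txt` is a hypothetical file you would supply):

```shell
# Concurrency Mode: sweep from 1 to 4 concurrent outstanding requests
perf_analyzer -m simple --concurrency-range 1:4

# Request Rate Mode: send requests at a fixed rate
perf_analyzer -m simple --request-rate-range 100

# Custom Interval Mode: read per-request send intervals from a file
perf_analyzer -m simple --request-intervals intervals.txt
```

Run `perf_analyzer --help` to confirm the exact options available in your installed version.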

Performance Measurement Modes

  • Time Windows Mode measures model performance repeatedly over a specific time interval until performance has stabilized

  • Count Windows Mode measures model performance repeatedly over a specific number of requests until performance has stabilized
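
The two measurement modes are selected with `--measurement-mode`. A minimal sketch, assuming recent perf_analyzer flag names (values shown are illustrative):

```shell
# Time Windows Mode: repeat 10-second (10000 ms) measurement windows until stable
perf_analyzer -m simple --measurement-mode time_windows --measurement-interval 10000

# Count Windows Mode: repeat windows of 100 requests until stable
perf_analyzer -m simple --measurement-mode count_windows --measurement-request-count 100
```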



Quick Start

The steps below will guide you on how to start using Perf Analyzer.

Step 1: Start Triton Container

export RELEASE=<yy.mm> # e.g. to use the release from the end of February of 2023, do `export RELEASE=23.02`

docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3

docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3

Step 2: Download the Simple Model

# inside triton container
git clone --depth 1 https://github.com/triton-inference-server/server

mkdir model_repository ; cp -r server/docs/examples/model_repository/simple model_repository

Step 3: Start Triton Server

# inside triton container
tritonserver --model-repository $(pwd)/model_repository &> server.log &

# confirm server is ready, look for 'HTTP/1.1 200 OK'
curl -v localhost:8000/v2/health/ready

# detach (CTRL-p CTRL-q)

Step 4: Start Triton SDK Container

docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk

docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk

Step 5: Run Perf Analyzer

# inside sdk container
perf_analyzer -m simple

See the full quick start guide for additional tips on how to analyze output.
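
As a next step beyond the bare `perf_analyzer -m simple` run, you can combine a load mode with output options. The sketch below assumes recent perf_analyzer flag names; `results.csv` is a hypothetical output path:

```shell
# inside sdk container: sweep concurrency from 1 to 4, report the 95th
# percentile latency, and write a latency report to a CSV file
perf_analyzer -m simple --concurrency-range 1:4 --percentile 95 -f results.csv
```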




Contributing

Contributions to Triton Perf Analyzer are more than welcome. To contribute, please review the contribution guidelines, then fork and create a pull request.


Reporting problems, asking questions

We appreciate any feedback, questions, or bug reports regarding this project. When help with code is needed, follow the process outlined in the Stack Overflow (https://stackoverflow.com/help/mcve) document. Ensure posted examples are:

  • minimal - use as little code as possible that still produces the same problem

  • complete - provide all parts needed to reproduce the problem. Check whether you can strip external dependencies and still show the problem. The less time we spend reproducing problems, the more time we have to fix them

  • verifiable - test the code you're about to provide to make sure it reproduces the problem. Remove all other problems that are not related to your request/question.



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

perf_analyzer-2.60.0-py3-none-manylinux_2_38_x86_64.whl (7.2 MB)

Uploaded: Python 3, manylinux: glibc 2.38+, x86-64

perf_analyzer-2.60.0-py3-none-manylinux_2_38_aarch64.whl (6.8 MB)

Uploaded: Python 3, manylinux: glibc 2.38+, ARM64

File details

Details for the file perf_analyzer-2.60.0-py3-none-manylinux_2_38_x86_64.whl.

File metadata

File hashes

Hashes for perf_analyzer-2.60.0-py3-none-manylinux_2_38_x86_64.whl:

SHA256: 23ef56b107ae9233e5cf09fbdf508dd9b936a52f1bae2f63f53c8117087b3f25
MD5: 428d39eec5d5db3ae8acf6a4d594e59c
BLAKE2b-256: f59c01f1f189aa22f060d603a8de131836b834b0e5159a36bb7b44c980d60aeb

See more details on using hashes here.

File details

Details for the file perf_analyzer-2.60.0-py3-none-manylinux_2_38_aarch64.whl.

File metadata

File hashes

Hashes for perf_analyzer-2.60.0-py3-none-manylinux_2_38_aarch64.whl:

SHA256: c2ece0adb7760ea1abc258222d80bf8e367fa9639401406b96da2c9b55131d29
MD5: 182e0a74343099114a7145d3653efa32
BLAKE2b-256: 978219f0c99aab7fe3285eea92d8b9a9114b907ea1e59f9ad67420221389d64f

See more details on using hashes here.
