Benchmark SageMaker serverless endpoints for cost and performance

Project description

SageMaker Serverless Inference Toolkit

Tools to benchmark SageMaker serverless endpoint configurations and help find the optimal one.

Installation and Prerequisites

To install the toolkit into your environment, run

pip install sm-serverless-benchmarking

In order to run the benchmark, your user profile or execution role will need the appropriate IAM permissions, including the following (an illustrative policy sketch follows the list):

SageMaker

  • CreateModel
  • CreateEndpointConfig / DeleteEndpointConfig
  • CreateEndpoint / DeleteEndpoint
  • CreateProcessingJob (if using SageMaker Runner)

SageMaker Runtime

  • InvokeEndpoint

CloudWatch

  • GetMetricStatistics
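
As a rough illustration only, the permissions above could be collected into a policy document along these lines (a minimal sketch; Resource is left as "*" for brevity and should be scoped to specific ARNs in practice):

import json

# Illustrative policy covering the actions listed above (not an official policy from this project).
# Resource "*" is used only for brevity; scope it to specific ARNs in practice.
benchmark_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateModel",
                "sagemaker:CreateEndpointConfig",
                "sagemaker:DeleteEndpointConfig",
                "sagemaker:CreateEndpoint",
                "sagemaker:DeleteEndpoint",
                "sagemaker:CreateProcessingJob",   # only needed when using the SageMaker Runner
                "sagemaker:InvokeEndpoint",        # SageMaker Runtime invocation
                "cloudwatch:GetMetricStatistics",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(benchmark_policy, indent=2))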

Quick Start

To run a benchmark locally, provide your SageMaker Model name and a list of example invocation arguments. Each of these arguments will be passed directly to the SageMaker Runtime InvokeEndpoint API.

from sm_serverless_benchmarking import benchmark
from sm_serverless_benchmarking.utils import convert_invoke_args_to_jsonl

model_name = "<SageMaker Model Name>"

example_invoke_args = [
    {"Body": "1,2,3,4,5", "ContentType": "text/csv"},
    {"Body": "6,7,8,9,10", "ContentType": "text/csv"},
]

example_args_file = convert_invoke_args_to_jsonl(example_invoke_args,
                                                 output_path=".")

r = benchmark.run_serverless_benchmarks(model_name, example_args_file)

Alternatively, you can run the benchmarks as a SageMaker Processing job:

from sm_serverless_benchmarking.sagemaker_runner import run_as_sagemaker_job

run_as_sagemaker_job(
    role="<execution_role_arn>",
    model_name="<model_name>",
    invoke_args_examples_file="<invoke_args_examples_file>",
)

A utility function, sm_serverless_benchmarking.utils.convert_invoke_args_to_jsonl, is provided to convert a list of invocation argument examples into a JSON Lines file. If you are working with data that cannot be serialized to JSON, such as binary data (images, audio, video), use the sm_serverless_benchmarking.utils.convert_invoke_args_to_pkl function instead, which serializes the examples to a pickle file.
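
For instance, a minimal sketch of the pickle path, assuming convert_invoke_args_to_pkl mirrors the signature of convert_invoke_args_to_jsonl shown above and that a local image file (a hypothetical example.jpg) is used as the payload:

from sm_serverless_benchmarking.utils import convert_invoke_args_to_pkl

# Read a binary payload that cannot be serialized to JSON (hypothetical local file)
with open("example.jpg", "rb") as f:
    image_bytes = f.read()

example_invoke_args = [
    {"Body": image_bytes, "ContentType": "application/x-image"},
]

# Assumes the same (examples, output_path) signature as convert_invoke_args_to_jsonl
example_args_file = convert_invoke_args_to_pkl(example_invoke_args,
                                               output_path=".")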

Refer to the sample_notebooks directory for complete examples.

Types of Benchmarks

By default, two types of benchmarks are executed:

  • Stability Benchmark For each memory configuration, with a MaxConcurrency of 1, the endpoint is invoked a specified number of times sequentially. The goal of this benchmark is to determine the most cost-effective and stable memory configuration.
  • Concurrency Benchmark Invokes the endpoint with a simulated number of concurrent clients under different MaxConcurrency configurations (see the illustrative sketch after this list).
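
As a rough illustration of what the concurrency benchmark simulates (this is not the toolkit's internal implementation), concurrent clients can be modelled with a thread pool that repeatedly calls InvokeEndpoint; the endpoint name and payload below are placeholders:

import boto3
from concurrent.futures import ThreadPoolExecutor

smr = boto3.client("sagemaker-runtime")

def invoke_once(_):
    # Placeholder payload; in the toolkit the payloads come from your example invocation arguments
    return smr.invoke_endpoint(
        EndpointName="<serverless-endpoint-name>",
        Body="1,2,3,4,5",
        ContentType="text/csv",
    )

# Simulate, for example, 8 concurrent clients issuing 100 invocations in total
with ThreadPoolExecutor(max_workers=8) as pool:
    responses = list(pool.map(invoke_once, range(100)))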

Configuring the Benchmarks

For either of the two approaches described above, you can specify a number of parameters to configure the benchmarking job (a usage sketch follows the parameter list):

    cold_start_delay (int, optional): Number of seconds to sleep before starting the benchmark. Helps to induce a cold start on the initial invocation. Defaults to 0.
    memory_sizes (List[int], optional): List of memory configurations to benchmark. Defaults to [1024, 2048, 3072, 4096, 5120, 6144].
    stability_benchmark_invocations (int, optional): Total number of invocations for the stability benchmark. Defaults to 1000.
    stability_benchmark_error_thresh (int, optional): The allowed number of endpoint invocation errors before the benchmark is terminated for a configuration. Defaults to 3.
    include_concurrency_benchmark (bool, optional): Set True to run the concurrency benchmark with the optimal configuration from the stability benchmark. Defaults to True.
    concurrency_benchmark_max_conc (List[int], optional): A list of MaxConcurrency settings to benchmark. Defaults to [2, 4, 8].
    concurrency_benchmark_invocations (int, optional): Total number of invocations for the concurrency benchmark. Defaults to 1000.
    concurrency_num_clients_multiplier (List[float], optional): List of multipliers that determine the number of simulated clients as MaxConcurrency * multiplier. Defaults to [1, 1.5, 1.75, 2].
    result_save_path (str, optional): The location to which the output artifacts will be saved. Defaults to ".".
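
A usage sketch showing how a few of these parameters might be passed when running the benchmark locally (the values are arbitrary examples; model_name and example_args_file are the objects prepared in the Quick Start above):

from sm_serverless_benchmarking import benchmark

r = benchmark.run_serverless_benchmarks(
    model_name,
    example_args_file,
    cold_start_delay=600,                     # sleep 10 minutes to induce a cold start
    memory_sizes=[2048, 4096, 6144],          # benchmark a subset of memory configurations
    stability_benchmark_invocations=500,
    include_concurrency_benchmark=True,
    concurrency_benchmark_max_conc=[4, 8],
    result_save_path="./benchmark_results",
)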

