A lightweight, cross-platform latency and throughput profiler for LLMs

Reason this release was yanked:

Regression: breaks RunningStats datetime comparisons. Use 0.1.10 instead.

Project description

Measuring large language model latency and throughput

LLMeter is a pure-Python library for simple latency and throughput testing of large language models (LLMs). It's designed to be lightweight to install, straightforward for running standard tests, and versatile to integrate, whether in notebooks, CI/CD, or other workflows.

📖 For full details, check out our documentation at: https://awslabs.github.io/llmeter

🛠️ Installation

LLMeter requires python>=3.10; please make sure your current version of Python is compatible.
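You can check your interpreter version with:

python --version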

To install the core metering functionality, install the minimal package using pip or uv:

pip install llmeter

Or with uv (recommended for faster installation):

uv pip install llmeter

LLMeter also offers extra features that require additional dependencies. Currently these extras include:

  • plotting: Add methods to generate charts to summarize the results
  • openai: Enable testing endpoints offered by OpenAI
  • litellm: Enable testing a range of different models through LiteLLM
  • mlflow: Enable logging LLMeter experiments to MLflow

You can install one or more of these extra options using pip:

pip install 'llmeter[plotting,openai,litellm,mlflow]'

Or with uv:

uv pip install 'llmeter[plotting,openai,litellm,mlflow]'

🚀 Quick-start

At a high level, you'll start by configuring an LLMeter "Endpoint" for whatever type of LLM you're connecting to:

# For example with Amazon Bedrock...
from llmeter.endpoints import BedrockConverse
endpoint = BedrockConverse(model_id="...")

# ...or OpenAI...
from llmeter.endpoints import OpenAIEndpoint
endpoint = OpenAIEndpoint(model_id="...", api_key="...")

# ...or via LiteLLM...
from llmeter.endpoints import LiteLLM
endpoint = LiteLLM("{provider}/{model_id}")

# ...and so on

You can then run the high-level "experiments" offered by LLMeter:

# Testing how throughput varies with concurrent request count:
from llmeter.experiments import LoadTest
load_test = LoadTest(
    endpoint=endpoint,
    payload={...},
    sequence_of_clients=[1, 5, 20, 50, 100, 500],
    output_path="local or S3 path"
)
load_test_results = await load_test.run()
load_test_results.plot_results()
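Note that run() is a coroutine: top-level await works in notebooks, while in a plain Python script you'd drive it with asyncio, for example:

import asyncio

load_test_results = asyncio.run(load_test.run())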

Here, payload can be a single dictionary, a list of dictionaries, or a path to a JSON Lines file containing one payload per line.
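For example (request_a and request_b below stand in for request dictionaries in your endpoint's JSON format):

payload = request_a                # a single request payload
payload = [request_a, request_b]   # a list of payloads
payload = "payloads.jsonl"         # JSON Lines file, one payload per line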

Each LLMeter Endpoint type offers a create_payload() function you can use to help build your inputs, in case you're not sure of the request JSON format for your target API. For example with Amazon Bedrock Converse:

from llmeter.prompt_utils import ImageContent
payload = BedrockConverse.create_payload(
    user_messages=[
        "Describe the following image:",
        ImageContent.from_path("photo.jpg"),
    ],
    max_tokens=1024,
)

As well as the high-level Experiments, you can use the low-level llmeter.runner.Runner class to run and analyze request batches, and to build your own custom experiments.

from llmeter.runner import Runner

endpoint_test = Runner(
    endpoint,
    tokenizer=tokenizer,  # e.g. a Hugging Face tokenizer, used for token counting
    output_path="local or S3 path",
)
result = await endpoint_test.run(
    payload={...},
    n_requests=3,
    clients=3,
)

print(result.stats)
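result.stats summarizes the run's latency and throughput aggregates. Assuming it behaves like a plain mapping, you can list the individual metrics:

for name, value in result.stats.items():
    print(f"{name}: {value}")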

Additional functionality like cost modelling and MLflow experiment tracking is enabled through llmeter.callbacks, and you can write your own callbacks to hook other custom logic into LLMeter test runs, as in the sketch below.
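As a minimal sketch (the import path, hook name, and run() argument here are assumptions for illustration; see the LLMeter docs for the actual base class and signatures):

from llmeter.callbacks.base import Callback  # import path assumed

class PrintTTFT(Callback):
    # Hypothetical hook, assumed to fire after each request completes
    async def after_invoke(self, response):
        print(getattr(response, "time_to_first_token", None))

# Hypothetical usage: pass the callback into a test run
# result = await endpoint_test.run(payload={...}, callbacks=[PrintTTFT()])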

For more details, check out the LLMeter user guide and our selection of end-to-end code examples in the examples folder!

Analyze and compare results

You can analyze the results of a single run or a load test by generating interactive charts. You can find examples in the examples folder.

Load testing

You can generate a collection of standard charts to visualize the result of a load test:

# Load test results
from llmeter.experiments import LoadTestResult
load_test_result = LoadTestResult.load("local or S3 path", test_name="Test result")

figures = load_test_result.plot_results()
(Standard charts include: average input tokens, average output tokens, error rate, requests per minute, time to first token, time to last token.)
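Assuming plot_results() returns a mapping of chart name to Plotly figure, you can display or save each chart in a loop:

for name, fig in figures.items():
    fig.show()  # or fig.write_html(f"{name}.html") to save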

You can see how to compare two load tests in Compare load test.

Single Run visualizations

Metrics like time to first token (TTFT) and time per output token (TPOT) are described as distributions. While statistical descriptions of these distributions (median, 90th percentile, average, etc.) are a convenient way to compare them, visualizations provide insights into the endpoint's behavior.

Boxplot

import plotly.graph_objects as go
from llmeter.plotting import boxplot_by_dimension
from llmeter.results import Result  # assuming Result is exposed here

result = Result.load("local or S3 path")

fig = go.Figure()
trace = boxplot_by_dimension(result=result, dimension="time_to_first_token")
fig.add_trace(trace)
fig.show()

Multiple traces can easily be combined into the same figure.
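For example, to compare time to first token across two runs (the paths below are placeholders):

result_a = Result.load("path/to/run-a")
result_b = Result.load("path/to/run-b")

fig = go.Figure()
fig.add_trace(boxplot_by_dimension(result=result_a, dimension="time_to_first_token"))
fig.add_trace(boxplot_by_dimension(result=result_b, dimension="time_to_first_token"))
fig.show()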

Histograms

import plotly.graph_objects as go
from llmeter.plotting import histogram_by_dimension
from llmeter.results import Result  # assuming Result is exposed here

result = Result.load("local or S3 path")

fig = go.Figure()
trace = histogram_by_dimension(
    result=result,
    dimension="time_to_first_token",
    xbins={"size": 0.02},  # bin size in axis units (seconds), i.e. 20 ms bins
)
fig.add_trace(trace)
fig.show()

Multiple traces can easily be combined into the same figure.

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llmeter-0.1.10.1.tar.gz (555.7 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llmeter-0.1.10.1-py3-none-any.whl (108.3 kB)

Uploaded Python 3

File details

Details for the file llmeter-0.1.10.1.tar.gz.

File metadata

  • Download URL: llmeter-0.1.10.1.tar.gz
  • Upload date:
  • Size: 555.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llmeter-0.1.10.1.tar.gz
  • SHA256: e5ffc3960ee428ffeb3fbf79e87d2ce7623ab6b629dcff43d530d37a8f1a733d
  • MD5: 8a3757da90ee2a883ceceaa43030e456
  • BLAKE2b-256: e22702b5b8e87ef674230f7486265066caf91dc825958741bd54a2c6edbcec3e

See more details on using hashes here.

Provenance

The following attestation bundles were made for llmeter-0.1.10.1.tar.gz:

Publisher: pypi.yml on awslabs/llmeter

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llmeter-0.1.10.1-py3-none-any.whl.

File metadata

  • Download URL: llmeter-0.1.10.1-py3-none-any.whl
  • Upload date:
  • Size: 108.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for llmeter-0.1.10.1-py3-none-any.whl
  • SHA256: 4aa239f0ca90f3c0da04aa681f03acef27f85b2f348726db34a596629c2abe3e
  • MD5: 38596bab4bad908b8c8fd6a8417d9b1c
  • BLAKE2b-256: ecdea47fe5f2aba000b7be81146886e51e90feb456adb5541dc82536f9b71974

See more details on using hashes here.

Provenance

The following attestation bundles were made for llmeter-0.1.10.1-py3-none-any.whl:

Publisher: pypi.yml on awslabs/llmeter

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
