pytest-park
Organise and analyse your pytest benchmarks
Features
- Inline benchmark comparison printed directly in pytest output; no extra commands needed.
- Load pytest-benchmark JSON artifact folders and normalize runs, groups, marks, params, and custom grouping metadata.
- Compare reference runs against candidate runs with per-case and per-group delta and speedup summaries.
- Flexible grouping: custom keys, benchmark groups, marks, params, and postfix-based name normalization.
- Associate optional profiler artifacts with benchmark runs for code-level context.
- Serve an interactive local NiceGUI dashboard for historical exploration.
Installation
With pip:
python -m pip install pytest-park
With uv:
uv add --group test pytest-park
Usage
Step 1 — Run your tests
pytest
After the normal pytest-benchmark tables, a pytest-park summary section is printed automatically. It compares the current run against the latest saved benchmark artifact found in pytest-benchmark storage. No extra arguments are needed.
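The automatic comparison can be pictured with a minimal sketch. This is not pytest-park's actual implementation; the helper names (`load_latest_runs`, `speedup_table`) and the exact selection logic are illustrative assumptions, but the artifact shape matches pytest-benchmark's saved JSON:

```python
import pathlib

def load_latest_runs(storage: pathlib.Path) -> list[pathlib.Path]:
    # pytest-benchmark numbers saved files sequentially (0001_..., 0002_...),
    # so sorting by filename yields chronological order; newest files come last.
    files = sorted(storage.rglob("*.json"))
    return files[-2:]

def speedup_table(reference: dict, candidate: dict) -> dict:
    # speedup > 1.0 means the candidate run is faster than the reference.
    ref = {b["name"]: b["stats"]["mean"] for b in reference["benchmarks"]}
    cand = {b["name"]: b["stats"]["mean"] for b in candidate["benchmarks"]}
    return {name: ref[name] / cand[name] for name in ref if name in cand}

# Two minimal in-memory artifacts shaped like pytest-benchmark JSON output.
ref_run = {"benchmarks": [{"name": "test_sum", "stats": {"mean": 0.004}}]}
cand_run = {"benchmarks": [{"name": "test_sum", "stats": {"mean": 0.002}}]}
print(speedup_table(ref_run, cand_run))  # {'test_sum': 2.0}
```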
The plugin is registered automatically via the pytest11 entry point when pytest-park is installed; no conftest.py changes are required.
Step 2 — Save runs to build a history (optional)
# Save and keep comparing against the latest saved run automatically
pytest --benchmark-autosave
# Save with a meaningful name for a stable reference point
pytest --benchmark-save baseline
# Compare against a specific saved run
pytest --benchmark-compare=0001
pytest --benchmark-compare=8d530304
# Save a candidate and compare it against a specific baseline
pytest --benchmark-save candidate-v2 --benchmark-compare=0001
pytest-park reuses the baseline that pytest-benchmark resolves from its configured storage; it does not require a second storage location or file format. --benchmark-storage is respected as usual.
VS Code Test Explorer: if the run looks like a single-shot execution (benchmark timing disabled or reduced), pytest-park prints a warning so the output is not mistaken for a real comparison.
Name normalization and grouping (optional)
If your benchmark names encode variant postfixes (e.g. test_func_orig, test_func_ref, test_func_np, test_func_pt), add the pytest_benchmark_group_stats hook to group and label variants together:
```python
# tests/conftest.py
from pytest_park.pytest_benchmark import default_pytest_benchmark_group_stats


def pytest_benchmark_group_stats(config, benchmarks, group_by):
    return default_pytest_benchmark_group_stats(
        config,
        benchmarks,
        group_by,
        original_postfix="_orig",   # or a list: ["_np", "_numpy"]
        reference_postfix="_ref",   # or a list: ["_pt", "_torch"]
        group_values_by_postfix={
            "orig": "original",  # leading underscores are stripped for matching
            "ref": "reference",
        },
    )
```
This stores parsed parts in extra_info["pytest_park_name_parts"] (base_name, parameters, postfix) and groups paired variants under the same row in the comparison table.
Multiple postfixes can be specified as a list or comma-separated string. Postfix matching is underscore-agnostic: "_original", "original", and "__original" all match the same postfix.
CLI postfix options
pytest-park registers --benchmark-original-postfix and --benchmark-reference-postfix automatically. These accept comma-separated values and override any postfixes passed directly to default_pytest_benchmark_group_stats:
# Single postfix
pytest --benchmark-original-postfix="_original" --benchmark-reference-postfix="_new"
# Multiple postfixes (comma-separated)
pytest --benchmark-original-postfix="_np,_numpy" --benchmark-reference-postfix="_pt,_torch"
When postfixes are configured, three output sections are produced:
- Regression table — flat per-method comparison of the current run vs the previous saved run (requires a reference benchmark file).
- Postfix comparison table — compares original-postfix methods vs reference-postfix methods within the current run (no saved reference needed).
- Grouped comparison table — the existing detailed comparison with grouping.
Debug information (file names, postfixes, options) is always printed in the pytest-park section.
Postfixes can also be set persistently in pyproject.toml, pytest.ini, or setup.cfg so you don't have to pass them on every run:
# pyproject.toml
[tool.pytest.ini_options]
benchmark_original_postfix = "_orig,_numpy"
benchmark_reference_postfix = "_ref,_torch"
CLI flags always override ini-file values.
Custom grouping metadata (optional)
Store arbitrary metadata on a benchmark for richer grouping:
```python
def test_compute_optimized(benchmark):
    benchmark.extra_info["custom_groups"] = {
        "technique": "vectorization",
        "scenario": "large-batch",
    }
    benchmark(compute)
```
Group by any key with --group-by custom:technique in the CLI.
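Conceptually, grouping by a custom key just buckets benchmarks by the value stored under that key. A minimal sketch with plain dicts mimicking benchmark records (the `group_by_custom` helper is illustrative, not pytest-park's API):

```python
from collections import defaultdict

benchmarks = [
    {"name": "test_compute_optimized",
     "extra_info": {"custom_groups": {"technique": "vectorization"}}},
    {"name": "test_compute_baseline",
     "extra_info": {"custom_groups": {"technique": "loop"}}},
]

def group_by_custom(benches: list[dict], key: str) -> dict:
    # Benchmarks without the key fall into a "<none>" bucket.
    groups = defaultdict(list)
    for b in benches:
        label = b.get("extra_info", {}).get("custom_groups", {}).get(key, "<none>")
        groups[label].append(b["name"])
    return dict(groups)

print(group_by_custom(benchmarks, "technique"))
```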
CLI — deeper analysis across saved artifacts
Use the CLI when you want to compare specific saved runs, apply advanced grouping, or include profiler data.
# Compare latest run (candidate) against second-latest run (reference)
pytest-park analyze ./.benchmarks
# Compare named runs
pytest-park analyze ./.benchmarks --reference baseline --candidate candidate-v2
# When only --candidate is given, the preceding run is used as reference
pytest-park analyze ./.benchmarks --candidate candidate-v2
# Group by benchmark group and a specific parameter
pytest-park analyze ./.benchmarks --group-by group --group-by param:device
# Group by custom metadata key
pytest-park analyze ./.benchmarks --group-by custom:scenario
# Exclude a parameter from comparison
pytest-park analyze ./.benchmarks --exclude-param device
# Keep a parameter as a separate dimension
pytest-park analyze ./.benchmarks --group-by group --distinct-param device
# Normalize method names by stripping postfixes
pytest-park analyze ./.benchmarks --original-postfix _orig --reference-postfix _ref
# Include profiler artifacts
pytest-park analyze ./.benchmarks --profiler-folder ./.profiler --group-by group
# Print installed version
pytest-park version
Grouping reference
Default precedence (when no --group-by is given): custom > benchmark_group > marks > params
| Token | Alias(es) | Resolves to |
|---|---|---|
| custom:&lt;key&gt; | — | extra_info["custom_groups"]["&lt;key&gt;"] |
| custom | custom_group | All custom group keys combined |
| group | benchmark_group | Benchmark group label |
| marks | mark | Comma-joined pytest marks |
| params | — | All parameter key=value pairs |
| param:&lt;name&gt; | — | Value of a specific parameter |
| name | method | Normalized method name |
| fullname | nodeid | Full test node path |
Multiple --group-by tokens can be combined; the resulting label is joined with |.
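How combined tokens produce one label can be sketched as follows. The `group_label` helper and its token handling are illustrative assumptions, covering only a few of the tokens from the table above:

```python
def group_label(bench: dict, tokens: list[str]) -> str:
    """Resolve each --group-by token for one benchmark and join with '|'."""
    parts = []
    for token in tokens:
        if token == "group":
            parts.append(bench["group"])
        elif token.startswith("param:"):
            parts.append(str(bench["params"][token.split(":", 1)[1]]))
        elif token.startswith("custom:"):
            parts.append(bench["extra_info"]["custom_groups"][token.split(":", 1)[1]])
    return "|".join(parts)

bench = {"group": "matmul", "params": {"device": "cpu"},
         "extra_info": {"custom_groups": {"scenario": "large-batch"}}}
print(group_label(bench, ["group", "param:device"]))  # matmul|cpu
```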
Artifact folder expectations
- Input files are pytest-benchmark JSON files (--benchmark-save output) stored anywhere under the folder.
- Default comparison: latest run as candidate, second-latest as reference.
- When only --candidate is given, the run immediately preceding it is used as reference.
- Run identity uses metadata.run_id, metadata.tag, or fallback datetime identifiers.
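The identity fallback chain can be sketched as a simple preference order. The `run_identity` helper below is a hypothetical illustration of that chain, not pytest-park's internal function:

```python
import datetime

def run_identity(metadata: dict, file_mtime: float) -> str:
    """Prefer run_id, then tag, then a datetime string derived from the file."""
    if metadata.get("run_id"):
        return metadata["run_id"]
    if metadata.get("tag"):
        return metadata["tag"]
    # Fallback: a UTC timestamp built from the artifact file's mtime.
    return datetime.datetime.fromtimestamp(
        file_mtime, tz=datetime.timezone.utc
    ).isoformat()

print(run_identity({"tag": "baseline"}, 0.0))  # baseline
```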
Interactive dashboard
For exploratory, visual analysis across many saved runs:
pytest-park serve ./.benchmarks --reference baseline --host 127.0.0.1 --port 8080
# With profiler data
pytest-park serve ./.benchmarks --profiler-folder ./.profiler --port 8080
Access the dashboard at http://127.0.0.1:8080. Features include run selection, history charts, delta distribution, and method-level drill-down.
To launch a guided interactive CLI session instead:
pytest-park
Docs
uv run mkdocs build -f ./mkdocs.yml -d ./_build/
Update template
copier update --trust -A --vcs-ref=HEAD
Credits