pytest-park
Organise and analyse your pytest benchmarks
Features
- Inline benchmark comparison printed directly in pytest output; no extra commands needed.
- Load pytest-benchmark JSON artifact folders and normalize runs, groups, marks, params, and custom grouping metadata.
- Compare reference runs against candidate runs with per-case and per-group delta and speedup summaries.
- Flexible grouping: custom keys, benchmark groups, marks, params, and postfix-based name normalization.
- Associate optional profiler artifacts with benchmark runs for code-level context.
- Serve an interactive local NiceGUI dashboard for historical exploration.
Installation
With pip:
python -m pip install pytest-park
With uv:
uv add --group test pytest-park
Usage
Step 1 — Enable the plugin
Add one line to your top-level conftest.py:
# tests/conftest.py
pytest_plugins = ["pytest_park.pytest_plugin"]
Step 2 — Run your tests
pytest
After the normal pytest-benchmark tables, a pytest-park summary section is printed automatically. It compares the current run against the latest saved benchmark artifact found in pytest-benchmark storage. No extra arguments are needed.
Step 3 — Save runs to build a history (optional)
# Save and keep comparing against the latest saved run automatically
pytest --benchmark-autosave
# Save with a meaningful name for a stable reference point
pytest --benchmark-save baseline
# Compare against a specific saved run
pytest --benchmark-compare=0001
pytest --benchmark-compare=8d530304
# Save a candidate and compare it against a specific baseline
pytest --benchmark-save candidate-v2 --benchmark-compare=0001
pytest-park reuses the baseline that pytest-benchmark resolves from its configured storage — it does not require a second format. --benchmark-storage is respected as usual.
VS Code Test Explorer: if the run looks like a single-shot execution (benchmark timing disabled or reduced), pytest-park prints a warning so the output is not mistaken for a real comparison.
Name normalization and grouping (optional)
If your benchmark names encode variant postfixes (e.g. test_func_orig, test_func_ref), add the pytest_benchmark_group_stats hook to group and label variants together:
# tests/conftest.py
from pytest_park.pytest_benchmark import default_pytest_benchmark_group_stats
def pytest_benchmark_group_stats(config, benchmarks, group_by):
    return default_pytest_benchmark_group_stats(
        config,
        benchmarks,
        group_by,
        original_postfix="_orig",
        reference_postfix="_ref",
        group_values_by_postfix={
            "_orig": "original",
            "_ref": "reference",
        },
    )
This stores parsed parts in extra_info["pytest_park_name_parts"] (base_name, parameters, postfix) and groups paired variants under the same row in the comparison table.
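To illustrate the (base_name, parameters, postfix) split, here is a hypothetical splitter; the real parsing lives inside pytest-park and may differ in edge cases:

```python
# Hypothetical illustration of the name-parts split recorded in
# extra_info["pytest_park_name_parts"]; NOT the library's own code.

def split_name(name: str, postfixes=("_orig", "_ref")) -> dict:
    """Split e.g. 'test_func_ref[large]' into base name, parameters, postfix."""
    params = ""
    if name.endswith("]") and "[" in name:
        # Parametrized pytest names carry the parameter id in brackets.
        name, _, params = name.partition("[")
        params = params[:-1]  # drop the trailing ']'
    postfix = next((p for p in postfixes if name.endswith(p)), "")
    if postfix:
        name = name[: -len(postfix)]
    return {"base_name": name, "parameters": params, "postfix": postfix}
```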
To expose the postfix options as pytest flags:
def pytest_addoption(parser):
    parser.addoption("--benchmark-original-postfix", action="store", default="")
    parser.addoption("--benchmark-reference-postfix", action="store", default="")
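These flags can then feed the hook from the previous section. A conftest.py sketch, assuming the hook's config argument is the pytest config object (as in pytest-benchmark):

```python
# tests/conftest.py — sketch wiring the flags into the grouping hook.
# Assumption: `config` here is the pytest config, so getoption() applies.
from pytest_park.pytest_benchmark import default_pytest_benchmark_group_stats

def pytest_benchmark_group_stats(config, benchmarks, group_by):
    return default_pytest_benchmark_group_stats(
        config,
        benchmarks,
        group_by,
        original_postfix=config.getoption("--benchmark-original-postfix"),
        reference_postfix=config.getoption("--benchmark-reference-postfix"),
    )
```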
Custom grouping metadata (optional)
Store arbitrary metadata on a benchmark for richer grouping:
def test_compute_optimized(benchmark):
    benchmark.extra_info["custom_groups"] = {
        "technique": "vectorization",
        "scenario": "large-batch",
    }
    benchmark(compute)
Group by any key with --group-by custom:technique in the CLI.
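As an illustration of how a custom:&lt;key&gt; token could resolve against that metadata (hypothetical helper mirroring the grouping reference below, not pytest-park's internals):

```python
# Hypothetical resolution of `custom:<key>` and `custom` tokens against a
# benchmark record's extra_info; NOT library code, just the described rule.

def resolve_custom(token: str, extra_info: dict) -> str:
    groups = extra_info.get("custom_groups", {})
    if token.startswith("custom:"):
        return str(groups.get(token.split(":", 1)[1], "unknown"))
    if token in ("custom", "custom_group"):
        # All custom keys combined into one label.
        return "|".join(f"{k}={v}" for k, v in sorted(groups.items()))
    raise ValueError(f"unsupported token: {token}")
```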
CLI — deeper analysis across saved artifacts
Use the CLI when you want to compare specific saved runs, apply advanced grouping, or include profiler data.
# Compare latest run (candidate) against second-latest run (reference)
pytest-park analyze ./.benchmarks
# Compare named runs
pytest-park analyze ./.benchmarks --reference baseline --candidate candidate-v2
# When only --candidate is given, the preceding run is used as reference
pytest-park analyze ./.benchmarks --candidate candidate-v2
# Group by benchmark group and a specific parameter
pytest-park analyze ./.benchmarks --group-by group --group-by param:device
# Group by custom metadata key
pytest-park analyze ./.benchmarks --group-by custom:scenario
# Exclude a parameter from comparison
pytest-park analyze ./.benchmarks --exclude-param device
# Keep a parameter as a separate dimension
pytest-park analyze ./.benchmarks --group-by group --distinct-param device
# Normalize method names by stripping postfixes
pytest-park analyze ./.benchmarks --original-postfix _orig --reference-postfix _ref
# Include profiler artifacts
pytest-park analyze ./.benchmarks --profiler-folder ./.profiler --group-by group
# Print installed version
pytest-park version
Grouping reference
Default precedence (when no --group-by is given): custom > benchmark_group > marks > params
| Token | Alias(es) | Resolves to |
|---|---|---|
| custom:&lt;key&gt; | — | extra_info["custom_groups"]["&lt;key&gt;"] |
| custom | custom_group | All custom group keys combined |
| group | benchmark_group | Benchmark group label |
| marks | mark | Comma-joined pytest marks |
| params | — | All parameter key=value pairs |
| param:&lt;name&gt; | — | Value of a specific parameter |
| name | method | Normalized method name |
| fullname | nodeid | Full test node path |
Multiple --group-by tokens can be combined; the resulting label is joined with |.
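The combination rule can be sketched as follows, assuming the benchmark record shape of pytest-benchmark JSON ("group" and "params" keys); this is illustrative only, not pytest-park's implementation, and covers just a few tokens:

```python
# Illustrative sketch: resolve a few --group-by tokens against a
# pytest-benchmark-style record and join the values with "|".

def group_label(tokens: list, bench: dict) -> str:
    def resolve(token: str) -> str:
        if token in ("group", "benchmark_group"):
            return bench.get("group") or ""
        if token.startswith("param:"):
            return str(bench.get("params", {}).get(token.split(":", 1)[1], ""))
        if token == "params":
            return ",".join(f"{k}={v}" for k, v in sorted(bench.get("params", {}).items()))
        raise ValueError(f"token not covered in this sketch: {token}")
    return "|".join(resolve(t) for t in tokens)
```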
Artifact folder expectations
- Input files are pytest-benchmark JSON files (--benchmark-save output) stored anywhere under the folder.
- Default comparison: latest run as candidate, second-latest as reference.
- When only --candidate is given, the run immediately preceding it is used as reference.
- Run identity uses metadata.run_id, metadata.tag, or fallback datetime identifiers.
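The identity fallback chain above can be sketched like this (assumed metadata shape; not pytest-park's actual code):

```python
# Sketch of the run-identity fallback: prefer run_id, then tag, then a
# datetime string. The key names are taken from the list above; the
# "unknown-run" default is an assumption for this illustration.

def run_identity(metadata: dict) -> str:
    return (
        metadata.get("run_id")
        or metadata.get("tag")
        or metadata.get("datetime", "unknown-run")
    )
```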
Interactive dashboard
For exploratory, visual analysis across many saved runs:
pytest-park serve ./.benchmarks --reference baseline --host 127.0.0.1 --port 8080
# With profiler data
pytest-park serve ./.benchmarks --profiler-folder ./.profiler --port 8080
Access the dashboard at http://127.0.0.1:8080. Features include run selection, history charts, delta distribution, and method-level drill-down.
To launch a guided interactive CLI session instead:
pytest-park
Docs
uv run mkdocs build -f ./mkdocs.yml -d ./_build/
Update template
copier update --trust -A --vcs-ref=HEAD