# pytest-park

Organise and analyse your pytest benchmarks.
## Features
- Load pytest-benchmark JSON artifact folders and normalize runs, groups, marks, params, and custom grouping metadata.
- Compare reference runs against candidate runs over time with per-case and per-group delta summaries.
- Build custom grouping views with precedence across custom groups, benchmark groups, marks, and params.
- Associate optional profiler artifacts with benchmark runs for code-level analysis context.
- Serve an interactive local NiceGUI dashboard for exploratory benchmark comparison.
## Installation

With pip:

```bash
python -m pip install pytest-park
```

With uv:

```bash
uv add --group test pytest-park
```
## How to use it

### Recommended default workflow

For most projects, the recommended setup is:
- add `pytest_park.pytest_plugin` to your test suite,
- run benchmarked unit tests with `pytest`, and
- read the `pytest-park` summary printed in the test output.

Use `pytest-park analyze` or `pytest-park serve` when you want more specific historical analysis across saved benchmark artifacts.
```bash
# Print version
pytest-park version

# Analyze and compare the latest run (candidate) against the second-latest run (reference)
pytest-park analyze ./.benchmarks --group-by group --group-by param:device

# Compare a named candidate run against a named reference tag/run id
pytest-park analyze ./.benchmarks --reference reference --candidate candidate-v2 --group-by custom:scenario

# When only --candidate is given, the run immediately before it in the list is used as reference
pytest-park analyze ./.benchmarks --candidate candidate-v2

# Exclude specific parameters from the comparison
pytest-park analyze ./.benchmarks --exclude-param device

# Keep a parameter distinct (not collapsed) during grouping
pytest-park analyze ./.benchmarks --group-by group --distinct-param device

# Normalize method names by stripping configured postfixes
pytest-park analyze ./.benchmarks --original-postfix _orig --reference-postfix _ref

# Include profiler artifacts alongside benchmark data
pytest-park analyze ./.benchmarks --profiler-folder ./.profiler --group-by group

# Launch interactive dashboard
pytest-park serve ./.benchmarks --reference reference --original-postfix _orig --reference-postfix _ref --host 127.0.0.1 --port 8080

# Launch dashboard with profiler data
pytest-park serve ./.benchmarks --profiler-folder ./.profiler --host 127.0.0.1 --port 8080

# Start interactive mode (no arguments) when you want guided CLI analysis or dashboard startup
pytest-park
```
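At its core, the per-case delta that a comparison reports boils down to relating a candidate's timing statistic to the reference's. A rough sketch of that idea (not pytest-park's actual implementation; the function name is invented):

```python
def relative_delta(reference_mean: float, candidate_mean: float) -> float:
    """Relative change of the candidate vs. the reference mean.

    Positive values mean the candidate is slower (regression),
    negative values mean it is faster (improvement).
    """
    return (candidate_mean - reference_mean) / reference_mean


# Example: reference mean 2.0 ms, candidate mean 1.5 ms -> 25% faster
delta = relative_delta(2.0e-3, 1.5e-3)
print(f"{delta:+.0%}")  # -25%
```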
### Benchmark folder expectations

- Input artifacts are pytest-benchmark JSON files (`--benchmark-save` output) stored anywhere under a folder.
- Reference selection uses an explicit run id or tag metadata (`metadata.run_id`, `metadata.tag`, or fallback identifiers).
- The default comparison baseline is the latest run (candidate) vs the second-latest run (reference) when `--reference` and `--candidate` are both omitted.
- When only `--candidate` is provided, the run immediately preceding it in the list is used as the reference.
- Grouping defaults to: custom groups > benchmark group > marks > params.
- Grouping tokens for `--group-by` (alias for `--grouping`): `custom:<key>`, `custom` (all custom keys), `group`/`benchmark_group`, `mark`/`marks`, `params`, `param:<name>`, `name`/`method`, `fullname`/`nodeid`.
- Use `--distinct-param` to treat a parameter as a separate dimension rather than collapsing it during grouping.
- Method normalization supports optional `--original-postfix` and `--reference-postfix` to align benchmark names across implementations.
- Profiler artifacts can be linked via `--profiler-folder` (both `analyze` and `serve` subcommands).
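Conceptually, picking the default reference/candidate pair from such a folder can be sketched like this (a simplified illustration, not pytest-park's actual loader; it relies only on the top-level `datetime` timestamp that pytest-benchmark writes into each JSON artifact):

```python
import json
from pathlib import Path


def pick_default_pair(folder: str) -> tuple[dict, dict]:
    """Return (reference, candidate) = (second-latest, latest) run."""
    runs = []
    for path in Path(folder).rglob("*.json"):
        runs.append(json.loads(path.read_text()))
    # pytest-benchmark artifacts carry an ISO timestamp under "datetime",
    # so lexicographic sort orders runs chronologically.
    runs.sort(key=lambda run: run["datetime"])
    if len(runs) < 2:
        raise ValueError("need at least two runs to compare")
    return runs[-2], runs[-1]
```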
### Recommended pytest workflow: enable the plugin and read the summary

To print inline comparisons against the latest saved pytest-benchmark run, opt in to the pytest plugin from your top-level `conftest.py` (or another top-level pytest plugin module):

```python
# tests/conftest.py
pytest_plugins = ["pytest_park.pytest_plugin"]
```
With that plugin enabled:

- `pytest` becomes the default way to use `pytest-park` during normal development.
- `pytest` will automatically compare each current benchmark against the latest saved run found in pytest-benchmark storage.
- `pytest --benchmark-compare` keeps using pytest-benchmark storage selection, so you can target a specific saved baseline when needed.
- `pytest --benchmark-save NAME` or `pytest --benchmark-autosave` are only needed if you also want to persist the current run as a future baseline.
- Benchmark comparison output is emitted as a dedicated `pytest-park` terminal summary section after the pytest-benchmark tables, using the same comparison table shown by the CLI.
- When tests are run from the VS Code Python Test Explorer, that summary section is still shown in the test run output.
- If the run looks like VS Code's default single-shot benchmark execution, `pytest-park` prints a warning so the output is not mistaken for a real benchmark comparison.
In short: enable the plugin once, run your benchmarked unit tests, and read the pytest-park section in the test output. Use the CLI and dashboard only when you need deeper or more targeted analysis.
### How `--benchmark-compare` works with pytest-park

`pytest-park` does not invent a second baseline format here. It reuses the baseline that pytest-benchmark resolves from its configured storage.

That means the following commands keep the usual pytest-benchmark meaning, while also powering inline pytest-park comparison output:
```bash
# Compare against the latest saved benchmark run automatically
pytest

# Compare against the latest saved benchmark run in storage
pytest --benchmark-compare

# Compare against a specific saved run number or id/prefix
pytest --benchmark-compare=0001
pytest --benchmark-compare=8d530304

# Save the current run and compare it against a chosen baseline in the same invocation
pytest --benchmark-save candidate-v2 --benchmark-compare=0001
```
Behavior summary:
- Registering `pytest_plugins = ["pytest_park.pytest_plugin"]` is enough to enable inline comparison output.
- With no extra benchmark arguments, pytest-park uses the latest saved benchmark run from the configured storage as the baseline.
- `--benchmark-compare` with no value means "compare against the latest saved run".
- `--benchmark-compare=<value>` means "compare against the saved run selected by pytest-benchmark storage".
- If you also pass `--benchmark-save` or `--benchmark-autosave`, the current run is still saved normally after execution.
- If you do not save the current run, `pytest-park` can still print inline comparison output for the current session; it just will not persist that run as a future baseline.
- Baseline lookup follows `--benchmark-storage`, so if you point pytest-benchmark at a different storage location, `pytest-park` will compare against that same location.
In practice, use:
- `--benchmark-autosave` when you want a rolling "compare against latest" workflow.
- `--benchmark-compare=<saved-id>` when you want to pin comparisons to a known historical baseline.
- `--benchmark-save <name> --benchmark-compare=<baseline>` when you want both a stable reference and a newly saved candidate artifact.
If your benchmark method names encode postfixes and parameter segments, you can override `pytest_benchmark_group_stats` using the helper from this package:

```python
# tests/conftest.py
from pytest_park.pytest_benchmark import default_pytest_benchmark_group_stats


def pytest_benchmark_group_stats(config, benchmarks, group_by):
    return default_pytest_benchmark_group_stats(
        config,
        benchmarks,
        group_by,
        original_postfix="_orig",
        reference_postfix="_ref",
        group_values_by_postfix={
            "_orig": "original",
            "_ref": "reference",
            "none": "unlabeled",
        },
    )
```
This stores parsed parts in `extra_info["pytest_park_name_parts"]` with `base_name`, `parameters`, and `postfix`.
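The parsing that produces those parts can be imagined along these lines (an illustrative sketch, not the package's actual parser):

```python
def split_benchmark_name(name, postfixes=("_orig", "_ref")):
    """Split e.g. 'test_sort_orig[cpu-1024]' into (base_name, parameters, postfix)."""
    # Parameter segment: pytest encodes params in square brackets.
    base, _, params = name.partition("[")
    parameters = params.rstrip("]") if params else None
    # Postfix: strip the first configured postfix found at the end of the base name.
    postfix = None
    for candidate in postfixes:
        if base.endswith(candidate):
            base = base[: -len(candidate)]
            postfix = candidate
            break
    return base, parameters, postfix
```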
If you use postfixes in benchmark names, expose matching pytest-benchmark options in the same `conftest.py`:

```python
def pytest_addoption(parser):
    parser.addoption("--benchmark-original-postfix", action="store", default="")
    parser.addoption("--benchmark-reference-postfix", action="store", default="")
```
## Docs

```bash
uv run mkdocs build -f ./mkdocs.yml -d ./_build/
```
## Update template

```bash
copier update --trust -A --vcs-ref=HEAD
```
## Credits