
pytest-park


Organise and analyse your pytest benchmarks

Features

  • Load pytest-benchmark JSON artifact folders and normalize runs, groups, marks, params, and custom grouping metadata.
  • Compare reference runs against candidate runs over time with per-case and per-group delta summaries.
  • Build custom grouping views with precedence across custom groups, benchmark groups, marks, and params.
  • Associate optional profiler artifacts with benchmark runs for code-level analysis context.
  • Serve an interactive local NiceGUI dashboard for exploratory benchmark comparison.
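Conceptually, the run-over-run comparison in the second bullet boils down to pairing cases across two saved runs and computing per-case ratios. A minimal sketch of that idea against raw pytest-benchmark JSON (the `fullname` and `stats.mean` fields are standard pytest-benchmark output; `delta_summary` is an illustrative helper, not pytest-park's API):

```python
import json
from pathlib import Path


def load_means(path: Path) -> dict[str, float]:
    """Map each benchmark case's fullname to its mean runtime in seconds."""
    data = json.loads(path.read_text())
    return {b["fullname"]: b["stats"]["mean"] for b in data["benchmarks"]}


def delta_summary(reference: Path, candidate: Path) -> dict[str, float]:
    """Per-case ratio candidate/reference; a value above 1.0 means the candidate is slower."""
    ref, cand = load_means(reference), load_means(candidate)
    return {name: cand[name] / ref[name] for name in ref.keys() & cand.keys()}
```

pytest-park layers grouping and formatting on top of this kind of pairing; the sketch only shows the underlying per-case delta idea.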

Installation

With pip:

python -m pip install pytest-park

With uv:

uv add --group test pytest-park

How to use it

Recommended default workflow

For most projects, the recommended setup is:

  1. add pytest_park.pytest_plugin to your test suite,
  2. run benchmarked unit tests with pytest, and
  3. read the pytest-park summary printed in the test output.

Use pytest-park analyze or pytest-park serve when you want deeper or more targeted historical analysis across saved benchmark artifacts.

# Print version
pytest-park version

# Analyze and compare latest run (candidate) against second-latest run (reference)
pytest-park analyze ./.benchmarks --group-by group --group-by param:device

# Compare a named candidate run against a named reference tag/run id
pytest-park analyze ./.benchmarks --reference reference --candidate candidate-v2 --group-by custom:scenario

# When only --candidate is given, the run immediately before it in the list is used as reference
pytest-park analyze ./.benchmarks --candidate candidate-v2

# Exclude specific parameters from the comparison
pytest-park analyze ./.benchmarks --exclude-param device

# Keep a parameter distinct (not collapsed) during grouping
pytest-park analyze ./.benchmarks --group-by group --distinct-param device

# Normalize method names by stripping configured postfixes
pytest-park analyze ./.benchmarks --original-postfix _orig --reference-postfix _ref

# Include profiler artifacts alongside benchmark data
pytest-park analyze ./.benchmarks --profiler-folder ./.profiler --group-by group

# Launch interactive dashboard
pytest-park serve ./.benchmarks --reference reference --original-postfix _orig --reference-postfix _ref --host 127.0.0.1 --port 8080

# Launch dashboard with profiler data
pytest-park serve ./.benchmarks --profiler-folder ./.profiler --host 127.0.0.1 --port 8080

# Start interactive mode (no arguments) when you specifically want guided CLI analysis or dashboard startup
pytest-park

Benchmark folder expectations

  • Input artifacts are pytest-benchmark JSON files (--benchmark-save output) stored anywhere under a folder.
  • Reference selection uses explicit run id or tag metadata (metadata.run_id, metadata.tag, or fallback identifiers).
  • Default comparison baseline is latest run (candidate) vs second-latest run (reference) when --reference and --candidate are both omitted.
  • When only --candidate is provided, the run immediately preceding it in the list is used as the reference.
  • Grouping defaults to: custom groups > benchmark group > marks > params.
  • Grouping tokens for --group-by (alias for --grouping): custom:<key>, custom (all custom keys), group / benchmark_group, mark / marks, params, param:<name>, name / method, fullname / nodeid.
  • Use --distinct-param to treat a parameter as a separate dimension rather than collapsing it during grouping.
  • Method normalization supports optional --original-postfix and --reference-postfix to align benchmark names across implementations.
  • Profiler artifacts can be linked via --profiler-folder (both analyze and serve subcommands).
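The "latest vs second-latest" default above only requires putting the saved runs in chronological order. A hedged sketch of how such a folder can be scanned (the `datetime` and `benchmarks` keys are standard pytest-benchmark JSON; `list_runs` is an illustrative helper, not part of pytest-park):

```python
import json
from pathlib import Path


def list_runs(folder: Path) -> list[tuple[str, Path]]:
    """Collect pytest-benchmark JSON artifacts anywhere under `folder`,
    sorted oldest-to-newest by the `datetime` field each file carries."""
    runs = []
    for path in folder.rglob("*.json"):
        data = json.loads(path.read_text())
        if "benchmarks" in data:  # skip unrelated JSON files
            runs.append((data.get("datetime", ""), path))
    return sorted(runs)
```

With this ordering, `runs[-1]` plays the candidate role and `runs[-2]` the reference role when neither --reference nor --candidate is given.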

Recommended pytest workflow: enable the plugin and read the summary

To print inline comparisons against the latest saved pytest-benchmark run, opt in to the pytest plugin from your top-level conftest.py (or another top-level pytest plugin module):

# tests/conftest.py
pytest_plugins = ["pytest_park.pytest_plugin"]

With that plugin enabled:

  • pytest becomes the default way to use pytest-park during normal development.
  • pytest will automatically compare each current benchmark against the latest saved run found in pytest-benchmark storage.
  • pytest --benchmark-compare keeps using pytest-benchmark storage selection, so you can target a specific saved baseline when needed.
  • pytest --benchmark-save NAME or pytest --benchmark-autosave are only needed if you also want to persist the current run as a future baseline.
  • Benchmark comparison output is emitted as a dedicated pytest-park terminal summary section after the pytest-benchmark tables, using the same comparison table shown by the CLI.
  • When tests are run from the VS Code Python Test Explorer, that summary section is still shown in the test run output.
  • If the run looks like VS Code's default single-shot benchmark execution, pytest-park prints a warning so the output is not mistaken for a real benchmark comparison.

In short: enable the plugin once, run your benchmarked unit tests, and read the pytest-park section in the test output. Use the CLI and dashboard only when you need deeper or more targeted analysis.

How --benchmark-compare works with pytest-park

pytest-park does not invent a second baseline format here. It reuses the baseline that pytest-benchmark resolves from its configured storage.

That means the following commands keep the usual pytest-benchmark meaning, while also powering inline pytest-park comparison output:

# Compare against the latest saved benchmark run automatically
pytest

# Compare against the latest saved benchmark run in storage
pytest --benchmark-compare

# Compare against a specific saved run number or id/prefix
pytest --benchmark-compare=0001
pytest --benchmark-compare=8d530304

# Save the current run and compare it against a chosen baseline in the same invocation
pytest --benchmark-save candidate-v2 --benchmark-compare=0001

Behavior summary:

  • Registering pytest_plugins = ["pytest_park.pytest_plugin"] is enough to enable inline comparison output.
  • With no extra benchmark arguments, pytest-park uses the latest saved benchmark run from the configured storage as the baseline.
  • --benchmark-compare with no value means "compare against the latest saved run".
  • --benchmark-compare=<value> means "compare against the saved run selected by pytest-benchmark storage".
  • If you also pass --benchmark-save or --benchmark-autosave, the current run is still saved normally after execution.
  • If you do not save the current run, pytest-park can still print inline comparison output for the current session; it just will not persist that run as a future baseline.
  • Baseline lookup follows --benchmark-storage, so if you point pytest-benchmark at a different storage location, pytest-park will compare against that same location.

In practice, use:

  • --benchmark-autosave when you want a rolling "compare against latest" workflow.
  • --benchmark-compare=<saved-id> when you want to pin comparisons to a known historical baseline.
  • --benchmark-save <name> --benchmark-compare=<baseline> when you want both a stable reference and a newly saved candidate artifact.

If your benchmark method names encode postfixes and parameter segments, you can override pytest_benchmark_group_stats using the helper from this package:

# tests/conftest.py
from pytest_park.pytest_benchmark import default_pytest_benchmark_group_stats


def pytest_benchmark_group_stats(config, benchmarks, group_by):
	return default_pytest_benchmark_group_stats(
		config,
		benchmarks,
		group_by,
		original_postfix="_orig",
		reference_postfix="_ref",
		group_values_by_postfix={
			"_orig": "original",
			"_ref": "reference",
			"none": "unlabeled",
		},
	)

This stores parsed parts in extra_info["pytest_park_name_parts"] with base_name, parameters, and postfix.
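To make the three parts concrete, here is a sketch of how a name such as test_matmul_orig[cuda-128] could be split. Only the extra_info key and the part names come from the package; the exact splitting rules and the `split_name_parts` helper are illustrative assumptions, not pytest-park's implementation:

```python
import re


def split_name_parts(name: str, postfixes: tuple[str, ...] = ("_orig", "_ref")) -> dict:
    """Split a benchmark name like 'test_matmul_orig[cuda-128]' into
    base_name / parameters / postfix parts (illustrative sketch only)."""
    match = re.fullmatch(r"(?P<base>[^\[]+)(?:\[(?P<params>.+)\])?", name)
    base, params = match.group("base"), match.group("params") or ""
    postfix = next((p for p in postfixes if base.endswith(p)), "")
    if postfix:
        base = base[: -len(postfix)]
    return {"base_name": base, "parameters": params, "postfix": postfix}
```

Under this reading, test_matmul_orig[cuda-128] and test_matmul_ref[cuda-128] share the same base_name and parameters, which is what lets the grouping hook align the two implementations.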

If you use postfixes in benchmark names, expose matching pytest-benchmark options in the same conftest.py:

def pytest_addoption(parser):
	parser.addoption("--benchmark-original-postfix", action="store", default="")
	parser.addoption("--benchmark-reference-postfix", action="store", default="")

Docs

uv run mkdocs build -f ./mkdocs.yml -d ./_build/

Update template

copier update --trust -A --vcs-ref=HEAD

Credits

This project was generated with 🚀 python project template.
