A performance plugin for pytest

pytest-performancetotal

With this plugin for pytest, which complements the playwright-pytest integration, you can seamlessly incorporate performance analysis into your test flows. It’s designed to work with UI interactions, API calls, or a combination of both, providing a straightforward method for measuring response times and pinpointing potential performance issues within your application. By leveraging this data, you can make strategic decisions to optimize and enhance your application’s performance. For insights into the original concept and additional details, refer to the article on the Node.js version of this plugin.

Installation

$ pip install pytest-performancetotal

Usage

To use pytest-performancetotal, simply add the performancetotal fixture to the test method. This will include the performance functionality in your test. No further setup is required. Here's an example:

import time

import pytest

@pytest.mark.parametrize("iteration", [1, 2, 3])
def test_features(performancetotal, iteration):
    performancetotal.sample_start("feature1")
    time.sleep(1)
    performancetotal.sample_end("feature1")
    
    performancetotal.sample_start("feature2")
    time.sleep(0.5)
    performancetotal.sample_end("feature2")

You can also get the immediate time span of a single sample inside a test:

feature1_timespan = performancetotal.get_sample_time("feature1")

Be aware that get_sample_time returns a single measurement with no statistical analysis.
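Conceptually, each sample is a named stopwatch. The following stand-alone sketch illustrates the same start/end pattern using only the standard library; it is an illustration of the idea, not the plugin's actual implementation:

```python
import time


class SampleTimer:
    """Minimal stand-in for the named start/end sampling pattern."""

    def __init__(self):
        self._starts = {}
        self._durations = {}

    def sample_start(self, name):
        # Record a high-resolution start timestamp for this sample name
        self._starts[name] = time.perf_counter()

    def sample_end(self, name):
        # Store the elapsed time since the matching sample_start call
        self._durations[name] = time.perf_counter() - self._starts.pop(name)

    def get_sample_time(self, name):
        # A single measurement in seconds, with no statistics applied
        return self._durations[name]


timer = SampleTimer()
timer.sample_start("feature1")
time.sleep(0.1)
timer.sample_end("feature1")
duration = timer.get_sample_time("feature1")  # roughly 0.1 seconds
```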

To use type hints follow this example:

from pytest_performancetotal.performance import Performance

def test_features(performancetotal: Performance, iteration):
    # ... your test code here

Options

--performance-noappend

To disable appending new results to the existing file and start fresh on every run, use:

pytest --performance-noappend

⚠️ Caution:

This action will delete all your performance data permanently. Ensure that you have a backup before proceeding.

--performance-drop-failed-results

To drop results for failed tests, use:

pytest --performance-drop-failed-results

--performance-recent-days

To set the number of days to consider for performance analysis, use:

pytest --performance-recent-days=7

or use a fraction of a day, for example:

pytest --performance-recent-days=0.5

--performance-results-dir

Set a custom results directory name/path:

On Windows:

pytest --performance-results-dir=results\01012025

or

pytest --performance-results-dir=myCustomDir

On Linux:

pytest --performance-results-dir=results/01012025

or

pytest --performance-results-dir=myCustomDir

--performance-results-file

Set a custom results file name:

pytest --performance-results-file=myCustomFile

Configuring Logging in pytest.ini

This plugin uses the native Python logging module to provide detailed logs during its execution. To ensure you can see these logs during testing, proper configuration is needed. The following instructions will guide you on how to configure pytest to output log messages to the console. This setup is particularly useful for debugging and tracking the behavior of your code.

Steps to Configure Logging:

Create or Update pytest.ini: If you do not already have a pytest.ini file, create one in the root directory of your project. If you have one, open it for editing.

For example add the following configuration in file pytest.ini:

[pytest]
log_cli = true
log_cli_level = DEBUG
log_cli_format = %(asctime)s - %(name)s - %(levelname)s - %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S

log_cli: Enables logging to the console.

log_cli_level: Sets the logging level. You can choose from DEBUG, INFO, WARNING, ERROR, or CRITICAL.

log_cli_format: Defines the format of the log messages.

log_cli_date_format: Specifies the date format used in log messages.
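With the configuration above, standard logging calls in your own tests are also shown in the console output. A minimal sketch (the test name and message are illustrative):

```python
import logging

# Module-level logger, as is conventional in test modules
logger = logging.getLogger(__name__)


def test_with_logging():
    # With log_cli = true, this message appears in the pytest console output
    logger.info("starting measurement")
    assert True
```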

Getting the results

A new directory named performance-results is created inside your project's root folder (unless you set a custom name or path with --performance-results-dir). Once all the tests have completed, two files are created inside this directory: results.json and results.csv. The analyzed data includes average time, standard error of the mean (SEM), number of samples, minimum value, maximum value, earliest time, and latest time. The results table is also printed to the terminal log.
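If you want to post-process the results programmatically, the CSV output can be read with the standard library. The column names in this sketch are assumptions based on the metrics listed above, not guaranteed headers; check the headers in your generated results.csv:

```python
import csv
import io

# Inline sample data with assumed column names; your actual
# results.csv headers and units may differ.
sample_csv = """name,avgTime,sem,samples,min,max
feature1,1002,3.1,3,998,1007
feature2,504,2.2,3,501,508
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Find the sample with the highest average time
slowest = max(rows, key=lambda r: float(r["avgTime"]))
print(slowest["name"])  # feature1
```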

Analyzing performance data in bulk

To analyze existing performance data in bulk without running new tests, it is recommended to use the performancetotal-cli tool. It requires Node.js, but it can process the results generated by pytest-performancetotal.

Support

For any questions or suggestions contact me at: tzur.paldi@outlook.com

📬 Maintained by Tzur Paldi — explore my GitHub profile for more tools.
