
Pytest plugin providing the 'run_on_pass' marker

Project description


pytest-passrunner: A Pytest plugin that provides the ‘run_on_pass’ marker

The pytest-passrunner plugin for Pytest introduces a new marker called run_on_pass, which can be applied to an entire test suite so that each test in it only runs if all previous tests in that suite passed successfully. This allows the user to write test structures in which earlier tests set up later ones. If an earlier test fails, the later ones cannot run properly and should therefore be ‘xfailed’ (expected failure) instead of being attempted anyway. See below for some examples.

Installation & Use

How to install

pytest-passrunner can be easily installed directly from PyPI with:

$ pip install pytest-passrunner

If required, one can also clone the repository and install pytest-passrunner manually:

$ git clone https://github.com/1313e/pytest-passrunner
$ cd pytest-passrunner
$ pip install .

The run_on_pass marker can now be used in tests with pytest.mark.run_on_pass.

Example use

The run_on_pass marker, when applied to a Pytest suite, marks all tests in the suite so that each test only runs if all previous tests in the suite passed. Put differently: if a test in a marked test suite fails, all remaining tests are automatically flagged as ‘xfail’ (expected fail) and do not run.

One use case for this marker is a test suite in which a specific object is modified throughout the tests. In this scenario, it is not uncommon that a particular test can only run correctly if all previous tests passed, as otherwise the object is not in the expected state. The following illustrates this scenario:

# Import pytest
import pytest


# Define Pytest suite
class Test_list(object):
    # Create fixture that is shared between all tests in the suite
    # In this case, a list object
    @pytest.fixture(scope='class')
    def lst(self):
        # Create empty list
        lst = []

        # Return it
        return lst

    # Perform test that sets up this list
    def test_1(self, lst):
        # Append 1 to the list
        lst.append(1)

        # Test that 1 was appended
        assert lst == [1]

    # Perform test that requires lst to be [1]
    def test_2(self, lst):
        # Append 2 to the list
        lst.append(2)

        # Test that 1 and 2 are in the list now
        assert lst == [1, 2]

In this simple test script, a list object is shared between all the tests in the Test_list test suite. The second test, test_2, assumes that this list object was correctly set up by the first test. If test_1 were to fail for whatever reason, test_2 cannot possibly pass anymore, as the list object was never set up properly. However, test_2 (and any tests after it) would still be run, wasting time and cluttering the final pytest summary overview with unhelpful errors.

In order to solve this, we can apply the run_on_pass marker:

# Import pytest
import pytest


# Define Pytest suite with the 'run_on_pass' marker
@pytest.mark.run_on_pass
class Test_list(object):
    # Create fixture that is shared between all tests in the suite
    # In this case, a list object
    @pytest.fixture(scope='class')
    def lst(self):
        # Create empty list
        lst = []

        # Return it
        return lst

    # Perform test that sets up this list
    def test_1(self, lst):
        # Append 1 to the list
        lst.append(1)

        # Test that 1 was appended
        assert lst == [1]

    # Perform test that requires lst to be [1]
    def test_2(self, lst):
        # Append 2 to the list
        lst.append(2)

        # Test that 1 and 2 are in the list now
        assert lst == [1, 2]

If the execution of test_1 now fails, test_2 (and any tests after it in this test suite) will not be run and will be flagged as ‘xfail’ instead. Tests that are xfailed do not generate any error messages in the pytest summary overview and are not counted as failed/errored tests either. This keeps the summary cleaner, and no time is wasted on tests that cannot pass.
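
For those curious how this kind of behavior can be achieved, the sketch below shows one possible way to build a comparable mechanism with standard pytest hooks in a conftest.py file, following the well-known ‘incremental testing’ recipe from the pytest documentation. It is only an illustration of the general idea and is not taken from the pytest-passrunner source code.

# conftest.py: a minimal, illustrative sketch of the 'only run if previous
# tests passed' idea, built from standard pytest hooks. Not the actual
# pytest-passrunner implementation.
import pytest


def pytest_configure(config):
    # Register the marker so pytest does not warn about an unknown mark
    config.addinivalue_line(
        "markers",
        "run_on_pass: only run a test if all previous tests in its suite passed")


def pytest_runtest_makereport(item, call):
    # If a test in a marked suite fails, remember it on the parent (the test class)
    if "run_on_pass" in item.keywords and call.excinfo is not None:
        item.parent._previous_failed = item


def pytest_runtest_setup(item):
    # Before running a test in a marked suite, check whether an earlier test failed
    if "run_on_pass" in item.keywords:
        previous_failed = getattr(item.parent, "_previous_failed", None)
        if previous_failed is not None:
            # Flag this test as 'xfail' instead of running it
            pytest.xfail("previous test failed (%s)" % previous_failed.name)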



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pytest-passrunner-1.0.0.tar.gz (6.1 kB)


Built Distribution

pytest_passrunner-1.0.0-py3-none-any.whl (6.2 kB)


File details

Details for the file pytest-passrunner-1.0.0.tar.gz.

File metadata

  • Download URL: pytest-passrunner-1.0.0.tar.gz
  • Upload date:
  • Size: 6.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/53.0.0 requests-toolbelt/0.9.1 tqdm/4.56.1 CPython/3.9.1

File hashes

Hashes for pytest-passrunner-1.0.0.tar.gz

  • SHA256: ed80c2001a45a0b3ced323f63a913ba7319274bdf961d0c08409d6048349d179
  • MD5: e7a3e2fdd96dcf64d092ccb5aae56532
  • BLAKE2b-256: e38e344aec99070ac94946ab1dd3138020c176880d530c8cb955d37f5e07917b


File details

Details for the file pytest_passrunner-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: pytest_passrunner-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 6.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.25.1 setuptools/53.0.0 requests-toolbelt/0.9.1 tqdm/4.56.1 CPython/3.9.1

File hashes

Hashes for pytest_passrunner-1.0.0-py3-none-any.whl

  • SHA256: 4beda91498eeff54bb715d1401582fd7b856858ec752e0eac781e231e5979db6
  • MD5: 8c0e72c35ab92a593d8786ee3312d098
  • BLAKE2b-256: 11480effbbececb73b9e697a345730ed719d7e5103f8cd66a2cb17b6449b4544

