
A plugin that allows users to create and use custom outputs instead of the standard Pass and Fail. Also allows users to retrieve test results in fixtures.

Project description

Enhance Your Pytest Reporting with Customizable Test Outputs

Overview

Tired of the standard pass/fail binary in pytest? With pytest-custom_outputs, you can create expressive, informative custom test results that go beyond the ordinary. Adapt outcomes and messages to your project's specific requirements, get more informative insights from your test runs, and attach messages alongside your custom outputs to better explain each test case. This plugin is a must-have if you want more than just the default Pass, Fail, and Skip outcomes.

BUT THAT'S NOT ALL!

pytest-custom_outputs also provides an interface that lets you access your test results from within your fixtures. Even if you don't want to make custom outputs, this plugin is still useful for collecting your test information after each test function. For example:

  • You can send an API call with the result status and message right after your test function!
  • You can attach the result status and message to any logs you wish.
  • You can pass the result status and message as an argument to any function you wish. The possibilities are endless when you have access to your test information after your tests are done.
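As a sketch of the logging use case above: log_result below is a hypothetical helper (not part of the plugin) that formats a result dict of the shape get_results returns, as shown in the Access Results section, into a single log line.

```python
import logging

def log_result(result):
    """Format a result dict like {'status': ..., 'message': ...}
    (the shape get_results returns, per the examples on this page)
    into one log line and emit it on the 'tests' logger."""
    status = result.get("status", "unknown")
    message = result.get("message", "")
    line = f"test finished: status={status}"
    if message:
        line += f" message={message}"
    logging.getLogger("tests").info(line)
    return line
```

Called from a post-yield fixture (as in the Access Results section), this would attach each test's outcome to your logs.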

Features

  • Flexible Output Types: Define new outcome types like "unimplemented", "soft_fail", "inconclusive", or any custom label that suits your testing needs.
  • Fully Customizable: Custom outputs are customizable in their name, description, result code, tag, and color.
  • Seamless Integration: Easily incorporate custom outputs into your existing pytest test suites.
  • Terminal and File Reporting: View your custom outputs in both your terminal output and pytest file reports.
  • Improved Communication: Attach messages alongside your outputs to further share details on your test results.
  • Enhanced Error-Catching: Failed test cases now automatically attach the associated error as a message to the result.
  • Retrieve Detailed Results: Access comprehensive information about each test, including the status (passed, failed, skipped, or any of your custom outputs) and the attached message from within your fixtures.

Installation

pip install pytest-custom_outputs

Setup Custom Outputs

This guide shows you how to create and declare your own outputs.

First, in the directory where you will run pytest, create a file called pytest_custom_outputs.json. You will use this file to define your custom outputs. The plugin checks for this exact filename, so it must be named as above. Feel free to copy the JSON file below into yours and edit from there.

pytest_custom_outputs.json

{
        "custom_outputs": {
                "Pass_with_exception": {
                        "desc":"passed_with_exception",
                        "code":"P",
                        "tag":"XPASSED",
                        "color":"green"
                },
                "Fatal_failed": {
                        "desc":"fatal_failed",
                        "code":"!",
                        "tag":"FAILED",
                        "color":"red"
                },
                "Not_available": {
                        "desc":"not_available",
                        "code":"N",
                        "tag":"NOT_AVAILABLE",
                        "color":"blue"
                },
                "Failed_but_proceed": {
                        "desc":"failed_but_proceed",
                        "code":"X",
                        "tag":"FAILED_BUT_PROCEED",
                        "color":"red"
                },
                "Unimplemented": {
                        "desc":"unimplemented",
                        "code":"U",
                        "tag":"UNIMPLEMENTED",
                        "color":"yellow"
                },
                "Skipped": {
                        "desc":"skipped",
                        "code":"S",
                        "tag":"SKIPPED",
                        "color":"yellow"
                }
        }
}

custom_outputs - The dictionary with all the custom outputs inside of it. You can edit, delete, and add new outputs here.

"Pass_with_exception" - An example custom output. When you want to assert this outcome in your test, you call it by this name.

desc - Description of the custom output. This is what gets outputted when pytest ends.

code - The custom output's code. This is what gets outputted when a specific test ends.

tag - The tag associated with the custom output.

color - The color of the custom output.

You can add to or edit this file as much as you want to have it conform to your testing needs.
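If you want to sanity-check the file before a run, here is a minimal sketch (validate_outputs is a hypothetical helper, not part of the plugin) that verifies every output defines all four fields:

```python
import json

REQUIRED_FIELDS = {"desc", "code", "tag", "color"}

def validate_outputs(raw_json):
    """Return a dict mapping each custom output name to the set of
    required fields it is missing; an empty dict means the file is
    complete."""
    outputs = json.loads(raw_json)["custom_outputs"]
    return {name: REQUIRED_FIELDS - set(fields)
            for name, fields in outputs.items()
            if REQUIRED_FIELDS - set(fields)}
```

Running this over the contents of pytest_custom_outputs.json would flag any output that is missing a field before pytest ever loads it.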

Use Custom Outputs

Once you've finished setting up your custom outputs in the previous section, you can start using them in your tests.

Use the provided c_assert function to return your output. c_assert takes one required parameter (status) and one optional parameter (message). In the status field, pass the name of the output you wish to return. In the optional message field, you can add a message to accompany your status.

example_1.py

import pytest
from pytest_custom_outputs import c_assert

def test_1():
    c_assert("Pass_with_exception")

In the example above, test_1 will result in "passed_with_exception".

example_2.py

import pytest
from pytest_custom_outputs import c_assert

def test_2():
    c_assert("Failed_but_proceed", "The test failed section X but continue anyways")

In the example above, test_2 will result in "failed_but_proceed" and carry the accompanying message alongside it.

If we pass a name that is not among our custom outputs to c_assert, the test will assert the unknown outcome. Because of this, it is recommended not to create a custom output named unknown.

The rest of the information in the json file can be edited and customized to your liking.

Access Results

This feature works regardless of whether or not you use custom_outputs.

Within your fixture, use the provided get_results function to get the results from the current test. get_results takes one required parameter: request. get_results only works as intended in function-scoped fixtures, and only AFTER the yield statement.

conftest.py

import pytest
from pytest_custom_outputs import get_results

@pytest.fixture(scope='function', autouse=True)
def my_fixture(request):
    yield
    print(get_results(request))

In the example above, after each test is done, its result status gets printed, and if there is an attached message, that gets printed too. This works with all methods of returning results in pytest. If the test fails for any reason, the error automatically attaches itself to the message. This can help with differentiating Fail results based on the error that caused them.
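For instance, to differentiate Fail results by the error named in the message, a hypothetical helper (a sketch, not part of the plugin) could branch on the result dict that get_results returns:

```python
def classify_failure(result):
    """Given a result dict like
    {'status': 'failed', 'message': 'ZeroDivisionError'}
    (the shape get_results returns), return a rough failure
    category, or None for non-failures."""
    if result.get("status") != "failed":
        return None
    message = result.get("message", "")
    if "AssertionError" in message:
        return "assertion"
    if message:
        return "exception"
    return "unknown"
```

A post-yield fixture could use a helper like this to route assertion failures and unexpected exceptions to different reports.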

Example

Using the pytest_custom_outputs.json from above

test.py

import pytest
from pytest_custom_outputs import c_assert
from pytest import skip

def test_1():
    print(r"""c_assert("Fatal_failed", "too bad it didnt work")""")
    c_assert("Fatal_failed", "too bad it didnt work")

def test_2():
    print(r"""c_assert("Unimplemented")""")
    c_assert("Unimplemented")

def test_3():
    print(r"""assert(True)""")
    assert(True)

def test_4():
    print(r"""assert(False)""")
    assert(False)

def test_5():
    print(r"""skip()""")
    skip()

def test_6():
    print(r"""x = 6/0""")
    x = 6/0

def test_7():
    print(r"""c_assert("somethingrandom")""")
    c_assert("somethingrandom")

conftest.py

import pytest
from pytest_custom_outputs import get_results

@pytest.fixture(scope='function', autouse=True)
def my_fixture(request):
    yield
    print("")
    print(get_results(request))
    print("")
    print("-----------------------------------")

Running the test:

pytest -s test.py

Output during runtime:

c_assert("Fatal_failed", "too bad it didnt work")
!
{'status': 'Fatal_failed', 'message': 'too bad it didnt work'}

-----------------------------------
c_assert("Unimplemented")
U
{'status': 'Unimplemented', 'message': ''}

-----------------------------------
assert(True)
.
{'status': 'passed', 'message': ''}

-----------------------------------
assert(False)
F
{'status': 'failed', 'message': 'AssertionError'}

-----------------------------------
skip()
s
{'status': 'skipped', 'message': ''}

-----------------------------------
x = 6/0
F
{'status': 'failed', 'message': 'ZeroDivisionError'}

-----------------------------------
c_assert("somethingrandom")
?
{'status': 'unknown', 'message': 'Unknown output (somethingrandom)'}

-----------------------------------

Output at test completion:

==================================================================== FAILURES ====================================================================
_____________________________________________________________________ test_4 _____________________________________________________________________

    def test_4():
        print(r"""assert(False)""")
>       assert(False)
E       assert False

test.py:20: AssertionError
_____________________________________________________________________ test_6 _____________________________________________________________________

    def test_6():
        print(r"""x = 6/0""")
>       x = 6/0
E       ZeroDivisionError: division by zero

test.py:28: ZeroDivisionError
============================================================ short test summary info =============================================================
FAILED test.py::test_4 - assert False
FAILED test.py::test_6 - ZeroDivisionError: division by zero
=============================== 2 failed, 1 passed, 1 skipped, 1 fatal_failed, 1 unimplemented, 1 unknown in 0.14s ===============================

Contributing

Contributions are very welcome. Tests can be run with tox; please ensure the coverage at least stays the same before you submit a pull request.

License

Distributed under the terms of the BSD-3 license, "pytest-custom_outputs" is free and open source software.

Issues

If you encounter any problems, please file an issue at https://github.com/MichaelE55/pytest-custom_outputs/issues along with a detailed description.

