A plugin that allows users to create and use custom outputs instead of the standard Pass and Fail

Enhance Your Pytest Reporting with Customizable Test Outputs

Tired of the standard pass/fail binary in pytest? With pytest-custom_outputs, you can create expressive and informative custom test results that go beyond the ordinary. Tailor your reports to provide deeper insights into your test scenarios.

Useful when you want more than just the default Pass, Fail, and Skip outcomes.

Features

  • Flexible Output Types: Define new outcome types like "unimplemented", "soft_fail", "inconclusive", or any custom label that suits your testing needs.
  • Fully Customizable: Each custom output can be customized in its name, description, result code, tag, and color.
  • Seamless Integration: Easily incorporate custom outputs into your existing pytest test suites.
  • Terminal and File Reporting: View your custom outputs in both your terminal output and pytest file reports.

Installation

pip install pytest-custom_outputs
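
To verify the installation, you can check the package with pip (a simple sanity check; not required by the plugin):

pip show pytest-custom_outputs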

Usage

In the directory where you will run pytest, create a file called "pytest_custom_outputs.json". You will use this file to define your own custom outputs. Feel free to copy and paste the JSON file below into yours and edit it from there.

EXAMPLE FILE:

{
        "use_unknown_if_no_match": true,
        "unknown": {
                "attribute":"_unknown",
                "status": {
                        "desc":"unknown",
                        "code":"?",
                        "output": {
                                "tag":"UNKNOWN",
                                "color":"purple"
                        }
                }
        },
        "custom_outputs": {
                "Pass_with_exception": {
                        "attribute":"_expected_pass",
                        "status": {
                                "desc":"passed_with_exception",
                                "code":"P",
                                "output": {
                                        "tag":"XPASSED",
                                        "color":"green"
                                }
                        }
                },
                "Fatal_failed": {
                        "attribute":"_fatal_fail",
                        "status": {
                                "desc":"fatal_failed",
                                "code":"!",
                                "output": {
                                        "tag":"FAILED",
                                        "color":"red"
                                }
                        }
                },
                "Not_available": {
                        "attribute":"_not_available",
                        "status": {
                                "desc":"not_available",
                                "code":"N",
                                "output": {
                                        "tag":"NOT_AVAILABLE",
                                        "color":"blue"
                                }
                        }
                },
                "Failed_but_proceed": {
                        "attribute":"_fail_but_proceed",
                        "status": {
                                "desc":"failed_but_proceed",
                                "code":"X",
                                "output": {
                                        "tag":"FAILED_BUT_PROCEED",
                                        "color":"red"
                                }
                        }
                },
                "Unimplemented": {
                        "attribute":"_unimplemented",
                        "status": {
                                "desc":"unimplemented",
                                "code":"U",
                                "output": {
                                        "tag":"UNIMPLEMENTED",
                                        "color":"yellow"
                                }
                        }
                },
                "Skipped": {
                        "attribute":"_skipped",
                        "status": {
                                "desc":"skipped",
                                "code":"S",
                                "output": {
                                        "tag":"SKIPPED",
                                        "color":"yellow"
                                }
                        }
                }
        }
}

use_unknown_if_no_match

  • If true, the unknown output defined below is used when a test's result has no match. Otherwise, the standard skip behavior is used

unknown

  • The output to use when a test's result matches neither a default nor a custom output

custom_outputs

  • A dictionary containing all of your custom outputs. You can edit, delete, and add outputs here (see the sketch below).
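
For instance, to add your own output you would add another entry inside "custom_outputs". The sketch below adds a hypothetical "Known_bug" entry; the key, attribute, desc, code, tag, and color are placeholders to adapt to your project:

"Known_bug": {
        "attribute":"_known_bug",
        "status": {
                "desc":"known_bug",
                "code":"K",
                "output": {
                        "tag":"KNOWN_BUG",
                        "color":"yellow"
                }
        }
}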

Each custom output is denoted by a name, which is also the key for that output. For example, in the file above, "Pass_with_exception" and "Fatal_failed" are the names of their respective outputs. Names are also how we set the result of a test case: call skip with the name as its argument.

For example:

import pytest
from pytest import skip

def test_1():
    skip("Pass_with_exception")

In the example above, test_1 will result in "passed_with_exception". Because the name overrides the outcome, the test is not reported as skipped; skip is simply the mechanism used to signal the desired outcome.
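
Building on the example configuration above, a single test module can mix several of the defined names (an illustrative sketch, not required usage):

from pytest import skip

def test_known_crash():
    # Reports the "fatal_failed" outcome defined by "Fatal_failed" in the example config
    skip("Fatal_failed")

def test_future_feature():
    # Reports the "unimplemented" outcome defined by "Unimplemented" in the example config
    skip("Unimplemented")

def test_regular():
    # Plain assertions still produce the standard pass/fail outcomes
    assert 1 + 1 == 2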

If the name passed to skip does not match any custom output, then the following occurs (as shown in the sketch below):

  • if use_unknown_if_no_match is set to true in the JSON file, the unknown outcome is used
  • otherwise, the standard skip is used and the name is passed as the skip message (standard skip behavior)
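
For example, with the configuration above (which defines no "Flaky" output), the sketch below would report the unknown outcome, or a standard skip with "Flaky" as its message if use_unknown_if_no_match is false. The name "Flaky" is only a placeholder:

from pytest import skip

def test_fallback():
    # "Flaky" does not match any entry in custom_outputs, so the fallback rule applies
    skip("Flaky")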

The rest of the information in the JSON file can be edited and customized to your liking.

Why pytest-custom_outputs?

  • Improved Communication: Get more informative insights from your test runs
  • Focus on Key Areas: Prioritize test cases that require attention
  • Tailored for Your Needs: Adapt outcomes and messages to your project's specific requirements

Contributing

Contributions are very welcome. Tests can be run with tox; please ensure that coverage at least stays the same before you submit a pull request.
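
A typical local run, assuming the repository ships a tox configuration, looks like:

pip install tox
tox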

License

Distributed under the terms of the BSD-3 license, "pytest-custom_outputs" is free and open source software.

Issues

If you encounter any problems, please file an issue at https://github.com/MichaelE55/pytest-custom_outputs/issues along with a detailed description.
