Assertion/verification library to aid testing with the minimal required dependencies + the ability to opt in for the others


ApprovalTests.Python


Capturing Human Intelligence - ApprovalTests is an open source assertion/verification library to aid testing.
approvaltests is the ApprovalTests port for Python.

For more information see: www.approvaltests.com.


What can I use ApprovalTests for?

You can use ApprovalTests to verify objects that require more than a simple assert, including long strings, large arrays, and complex hash structures and objects. ApprovalTests really shines when you need a more granular look at a test failure. Sometimes, trying to find a small difference in a long string printed to STDOUT is just too hard!
ApprovalTests solves this problem by providing reporters which let you view the test results in one of many popular diff utilities.
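
For example, a nested data structure can be verified in a single call. Here is a minimal sketch using verify_as_json, which serializes the object to formatted JSON before comparing it against the approved file (the test data is illustrative):

from approvaltests import verify_as_json


def test_user_settings():
    # Illustrative data: a structure that would be painful to check
    # with one assert per field.
    user = {
        "name": "Ada",
        "roles": ["admin", "editor"],
        "settings": {"theme": "dark", "notifications": True},
    }
    # Serialized to formatted JSON, so a one-field change
    # shows up as a one-line diff in your reporter.
    verify_as_json(user)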

Getting Started

What Are Approvals

If you need to gain a better understanding or are new to this concept, start here.

New Projects

If you are starting a new project, we suggest you use the Starter Project. You can just clone this and go. It's great for exercises, katas, and green field projects.

Minimal Example Tutorial

If this is your first time approval testing in Python, consider starting here: Minimal Example Tutorial

Adding to Existing Projects

From pypi:

pip install approvaltests

Overview

Approvals work by comparing the test results to a golden master. If no golden master exists, you can create a snapshot of the current test results and use that as the golden master. The reporter helps you manage the golden master.
Whenever your current results differ from the golden master, Approvals will launch an external application for you to examine the differences. Either you will update the master because you expected the changes and they are good, or you will go back to your code and update or roll back your changes to bring your results back in line with the golden master.
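
Concretely, the first run of a new test has no golden master, so it fails and writes a .received.txt file next to the test; approving the output is just promoting that file to .approved.txt. A sketch of the cycle from the shell (the exact file names depend on your test module and test name; test_demo.test_simple is illustrative):

pytest    # first run: no approved file yet, so the test fails and writes a .received.txt file
mv test_demo.test_simple.received.txt test_demo.test_simple.approved.txt    # accept the output as the golden master
pytest    # second run: received matches approved, so the test passes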

Example using pytest

from approvaltests.approvals import verify


def test_simple():
    result = "Hello ApprovalTests"
    verify(result)


Install the plugin pytest-approvaltests and use it to select a reporter:

pip install pytest-approvaltests
pytest --approvaltests-use-reporter='PythonNative'

The reporter is used both to alert you to changes in your test output and to provide a tool to update the golden master. In this snippet, we chose the 'PythonNative' reporter when we ran the tests. For more information about selecting reporters, see the documentation

Example using unittest

import unittest

from approvaltests.approvals import verify


class GettingStartedTest(unittest.TestCase):
    def test_simple(self):
        verify("Hello ApprovalTests")


if __name__ == "__main__":
    unittest.main()


This example has the same behaviour as the pytest version, but uses the built-in test framework unittest instead.

Example using CLI

You can invoke a verify() call from the command line. This allows invoking Python approvals from any other stack via subprocesses.

Usage

python -m approvaltests --test-id hello --received "hello world!"

or

python -m approvaltests -t hello -r "hello world!"

or

echo "hello world!" | python -m approvaltests -t hello

Argument Definitions

  • --test-id or -t: Test identifier used to name the .approved.txt and .received.txt files for the test.

  • --received or -r: The output of the program under test (a string) that is passed to the verify method.

    • stdin: Instead of providing a received argument, you may use stdin.
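
Because the CLI communicates only through its arguments, stdin, and its exit status, it is easy to drive from another language or test harness. A minimal sketch in Python via subprocess (the exit-code convention, non-zero on mismatch, is an assumption to verify against your version):

import subprocess

# Run the approvaltests CLI as a child process, reusing the
# arguments shown above.
result = subprocess.run(
    ["python", "-m", "approvaltests", "--test-id", "hello", "--received", "hello world!"]
)
# Assumption: a non-zero exit code means the received text
# did not match the approved file.
if result.returncode != 0:
    print("Mismatch - inspect the hello.received.txt file")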

Reporters

Selecting a Reporter

All verify functions take an optional options parameter that can configure reporters (as well as many other aspects).

ApprovalTests.Python comes with a few reporters configured, supporting Linux, macOS, and Windows.

In the example shown below, we pass in an Options object with a reporter selected directly:

import unittest

from approvaltests import Options, verify
from approvaltests.reporters import report_with_beyond_compare


class TestSelectReporterFromClass(unittest.TestCase):
    def test_simple(self):
        verify("Hello", options=Options().with_reporter(report_with_beyond_compare()))


You can also use the GenericDiffReporterFactory to find and select the first diff utility that exists on your system.

An advantage of this method is that you can modify the reporters.json file directly to handle your unique system.

import unittest

from approvaltests import Options, verify
from approvaltests.reporters.generic_diff_reporter_factory import GenericDiffReporterFactory


class TestSelectReporter(unittest.TestCase):
    def setUp(self):
        self.factory = GenericDiffReporterFactory()

    def test_simple(self):
        verify(
            "Hello", options=Options().with_reporter(self.factory.get("BeyondCompare"))
        )


Or you can build your own GenericDiffReporter on the fly:

import unittest

from approvaltests import Options, verify
from approvaltests.reporters.generic_diff_reporter import GenericDiffReporter


class GettingStartedTest(unittest.TestCase):
    def test_simple(self):
        verify(
            "Hello",
            options=Options().with_reporter(
                GenericDiffReporter.create(r"C:\my\favorite\diff\utility.exe")
            ),
        )


As long as C:\my\favorite\diff\utility.exe can be invoked from the command line using the format utility.exe file1 file2, it will be compatible with GenericDiffReporter. Otherwise you will have to derive your own reporter, which we won't cover here.

JSON file for collection of reporters

To wrap things up, I should note that you can completely replace the collection of reporters known to the reporter factory by writing your own JSON file and loading it.

For example, if you had C:/myreporters.json:

[
    ["BeyondCompare4", "C:/Program Files (x86)/Beyond Compare 4/BCompare.exe"],
    ["WinMerge", "C:/Program Files (x86)/WinMerge/WinMergeU.exe"],
    ["Tortoise", "C:/Program Files (x86)/TortoiseSVN/bin/tortoisemerge.exe"]
]

You could then use that file by loading it into the factory:

import unittest

from approvaltests.approvals import verify
from approvaltests.reporters.generic_diff_reporter_factory import GenericDiffReporterFactory


class GettingStartedTest(unittest.TestCase):
    def setUp(self):
        factory = GenericDiffReporterFactory()
        factory.load("C:/myreporters.json")
        self.reporter = factory.get_first_working()

    def test_simple(self):
        verify("Hello", self.reporter)


if __name__ == "__main__":
    unittest.main()

Of course, if you have some interesting new reporters in myreporters.json then please consider updating the reporters.json file that ships with Approvals and submitting a pull request.

Support and Documentation

Missing Documentation?

If there is documentation you wish existed, please add a page request to this issue.

Dependencies

ApprovalTests requires Python 3.8 or greater and the following dependencies:

Required dependencies

These dependencies are always required for approvaltests:

pytest>=4.0.0
empty-files>=0.0.3
typing_extensions>=3.6.2


Extra dependencies

These dependencies are required only if you use the related functionality. If you want the bare minimum, you can use the PyPI project approvaltests-minimal.

pyperclip>=1.5.29      # For Clipboard Reporter
beautifulsoup4>=4.4.0  # For verify_html
allpairspy>=2.1.0      # For PairwiseCombinations
mrjob>=0.7.4           # For MrJob
testfixtures>=7.1.0    # For verify_logging
mock>=5.1.0            # For verify_logging
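
If you want none of these extras, the minimal package mentioned above can be installed on its own:

pip install approvaltests-minimal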


For developers

Weekly Ensemble

The best way to contribute is to join our weekly mob/ensemble.

Pull Requests

Pull requests are welcomed, particularly those accompanied by automated tests.

To run the self-tests: ./run_tests.sh

This will run the self-tests on several Python versions. We support Python 3.8 and above.

All pull requests will be pre-checked using GitHub actions to execute all these tests. You can see the results of test runs here.
