
Extends Hypothesis to add fully automatic testing of type-annotated functions


hypothesis-auto - Fully Automatic Tests for Type Annotated Functions Using Hypothesis.




Read Latest Documentation - Browse GitHub Code Repository


hypothesis-auto is an extension for the Hypothesis project that enables fully automatic tests for type-annotated functions.

(Image: Hypothesis Pytest Auto Example)

Key Features:

  • Type Annotation Powered: Utilize your function's existing type annotations to build dozens of test cases automatically.
  • Low Barrier: Start utilizing property-based testing in the lowest-barrier way possible. Just run auto_test(FUNCTION) to run dozens of tests.
  • py.test Compatible: Built-in compatibility with the popular py.test testing framework. This means that you can turn your automatically generated tests into individual py.test test cases with one line.
  • Scales Up: As you find yourself needing to customize your auto_test cases, you can easily utilize all the features of Hypothesis, including custom strategies per parameter.

Installation:

To get started, install hypothesis-auto into your project's virtual environment:

pip3 install hypothesis-auto

OR

poetry add hypothesis-auto

OR

pipenv install hypothesis-auto

Usage Examples:

!!! warning In older usage examples you will see underscore-prefixed parameters, such as _auto_verify=, used to avoid conflicts with existing function parameters. Based on community feedback, the project switched to underscore suffixes, such as auto_verify_=, to keep the likelihood of conflicts low while avoiding the connotation of private parameters.
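
For illustration only, here is a sketch of the naming change, using the add function and the auto_runs_ option from the examples below:

from hypothesis_auto import auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


# Old examples used an underscore prefix:    auto_test(add, _auto_verify=...)
# Current releases use an underscore suffix: auto_test(add, auto_verify_=...)
auto_test(add, auto_runs_=100)  # other auto_ options follow the same suffix convention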

Framework independent usage

Basic auto_test usage:

from hypothesis_auto import auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


auto_test(add)  # 50 property-based scenarios are generated and run against add
auto_test(add, auto_runs_=1_000)  # Let's make that 1,000

Adding an allowed exception:

from hypothesis_auto import auto_test


def divide(number_1: int, number_2: int) -> int:
    return number_1 / number_2

auto_test(divide)

-> 1012                     raise the_error_hypothesis_found
   1013
   1014         for attrib in dir(test):

<ipython-input-2-65a3aa66e9f9> in divide(number_1, number_2)
      1 def divide(number_1: int, number_2: int) -> int:
----> 2     return number_1 / number_2
      3

0/0

ZeroDivisionError: division by zero


auto_test(divide, auto_allow_exceptions_=(ZeroDivisionError, ))

Using auto_test with a custom verification method:

from hypothesis_auto import Scenario, auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


def my_custom_verifier(scenario: Scenario):
    if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:
        assert scenario.result > scenario.kwargs["number_1"]
        assert scenario.result > scenario.kwargs["number_2"]
    elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:
        assert scenario.result < scenario.kwargs["number_1"]
        assert scenario.result < scenario.kwargs["number_2"]
    else:
        assert scenario.result >= min(scenario.kwargs.values())
        assert scenario.result <= max(scenario.kwargs.values())


auto_test(add, auto_verify_=my_custom_verifier)

Custom verification methods should take a single Scenario and raise an exception to signal errors.
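
As a minimal sketch of that contract (assuming, consistent with the example above, that generated arguments appear in scenario.kwargs and the return value in scenario.result; double and verify_double are illustrative names, not part of the library):

from hypothesis_auto import Scenario, auto_test


def double(number: int) -> int:
    return number * 2


def verify_double(scenario: Scenario) -> None:
    # Any exception raised here marks the scenario as a failure; plain asserts work too.
    if scenario.result != scenario.kwargs["number"] * 2:
        raise AssertionError(f"unexpected result: {scenario.result}")


auto_test(double, auto_verify_=verify_double)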

For the full set of parameters you can pass into auto_test, see its API reference documentation.
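
That reference also covers the "Scales Up" point above: customizing the Hypothesis strategy used for an individual parameter. The following is only a hedged sketch, assuming extra keyword arguments passed to auto_test are applied to the parameters with matching names, with Hypothesis strategies drawn from rather than treated as fixed values (verify against the API reference):

from hypothesis import strategies as st
from hypothesis_auto import auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


# Assumption: a keyword argument matching a parameter name overrides the
# strategy auto-generated for that parameter.
auto_test(add, number_2=st.integers(min_value=0, max_value=10))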

py.test usage

Using auto_pytest_magic to auto-generate dozens of py.test test cases:

from hypothesis_auto import auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


auto_pytest_magic(add)
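
The auto_ options shown for auto_test above can be combined with this as well. The following is a hedged sketch that assumes auto_pytest_magic accepts auto_allow_exceptions_ the same way auto_test does (check the API reference to confirm):

from hypothesis_auto import auto_pytest_magic


def divide(number_1: int, number_2: int) -> int:
    return number_1 / number_2


# Assumption: auto_pytest_magic forwards auto_allow_exceptions_ just like auto_test.
auto_pytest_magic(divide, auto_allow_exceptions_=(ZeroDivisionError,))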

Using auto_pytest to run dozens of test cases within a temporary directory:

from hypothesis_auto import auto_pytest


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


@auto_pytest()
def test_add(test_case, tmpdir):
    tmpdir.mkdir().chdir()
    test_case()

Using auto_pytest_magic with a custom verification method:

from hypothesis_auto import Scenario, auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


def my_custom_verifier(scenario: Scenario):
    if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:
        assert scenario.result > scenario.kwargs["number_1"]
        assert scenario.result > scenario.kwargs["number_2"]
    elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:
        assert scenario.result < scenario.kwargs["number_1"]
        assert scenario.result < scenario.kwargs["number_2"]
    else:
        assert scenario.result >= min(scenario.kwargs.values())
        assert scenario.result <= max(scenario.kwargs.values())


auto_pytest_magic(add, auto_verify_=my_custom_verifier)

Custom verification methods should take a single Scenario and raise an exception to signal errors.

For the full reference of the py.test integration API, see the API reference documentation.

Why Create hypothesis-auto?

I wanted a low-resistance way to start incorporating property-based tests across my projects. A solution that also encouraged the use of type hints was a win/win for me.

I hope you too find hypothesis-auto useful!

~Timothy Crosley

