
Advanced execution framework for test scenarios


Test Junkie [pre-release]

Test Junkie is a classy framework for executing test scenarios.

This is a pre-release version, documentation may be incomplete and functionality of features is subject to change.


Installation

pip install test_junkie

Features

Decorators

@Suite

Test Junkie enforces a suite-based test architecture: all tests must be defined within a class, and that class must be decorated with @Suite. See the example under Test Suites.

from test_junkie.decorators import Suite

@Suite()
class LoginFunctionality:
    ...

@Suite decorator supports the following decorator properties:

  • Meta: @Suite(meta=meta(name="Suite Name"))
  • Retry: @Suite(retry=2)
  • Skip: @Suite(skip=Boolean)
  • Listeners: @Suite(listener=YourListener)
  • Rules: @Suite(rules=YourRules)

@beforeClass

This decorator prioritizes execution of the decorated function at the very beginning of a test suite; the function is executed only once, before anything else in the suite. An exception raised in the decorated function is treated as a class failure, which marks all of the tests in the suite as ignored, and the On Ignore event listener is called for each of those tests.

from test_junkie.decorators import beforeClass

...
@beforeClass()
def a_function():
    ...

@beforeClass does not support any special decorator properties.

@beforeTest

This decorator prioritizes execution of the decorated function before every test case in the suite; it runs once before each test. An exception raised in the decorated function is treated as a test failure/error, and the respective On Fail or On Error event listener is called.

from test_junkie.decorators import beforeTest

...
@beforeTest()
def b_function():
    ...

@beforeTest does not support any special decorator properties.

@test

Test Junkie enforces a suite-based test architecture: all tests must be defined within a class and decorated with @test. See the example under Test Suites. An exception raised in the decorated function is treated as a test failure/error, and the respective On Fail or On Error event listener is called. A function decorated with @afterTest will not be executed if an exception is raised in the test case. The On Success event listener is called if the test passes.

from test_junkie.decorators import test

...

@test()
def a_test():
    ...

@test()
def b_test():
    ...

@test decorator supports the following decorator properties:

  • Meta: @test(meta=meta(name="Test Name", known_bugs=[12345, 34567]))
  • Retry: @test(retry=2)
  • Skip: @test(skip=Boolean)
  • Parameters: @test(parameters=[1, 2, 3, 4])

@afterTest

This decorator de-prioritizes execution of the decorated function to the end of each test case in the suite; it runs once after every test case. An exception raised in the decorated function is treated as a test failure/error, and the respective On Fail or On Error event listener is called.

from test_junkie.decorators import afterTest

...
@afterTest()
def c_function():
    ...

@afterTest does not support any special decorator properties.

@afterClass

This decorator de-prioritizes execution of the decorated function to the very end of a test suite; it is executed only once, after everything else in the suite.

from test_junkie.decorators import afterClass

...
@afterClass()
def d_function():
    ...

@afterClass does not support any special decorator properties.

Skipping Tests/Suites

Test Junkie extends skipping functionality at the test level and at the suite level. You can use both at the same time or individually.

from test_junkie.decorators import Suite, test

@Suite()
class ExampleSuite:

    @test(skip=True)
    def a_test(self):

        assert True is False
  • Test level skip takes a boolean value; if True, the test will be skipped and the On Skip event listener will be called. Execution continues as usual with any remaining tests in the suite.
  • Test level skip can also take a function, referenced either as my_function or my_function(). In the former case the function is evaluated just before the test runs; in the latter it is evaluated as soon as your suite is imported anywhere in your code.
    • If your function has a meta argument in its signature, Test Junkie will pass all of the test function's Meta information to it. This gives you maximum flexibility to build custom business logic for skipping tests.
    • The only requirement is that the function must return a boolean value when evaluation completes.
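For illustration, a skip function of this shape is just a plain callable that returns a boolean. Below is a minimal sketch; the function name and the known_bugs meta key are hypothetical, not part of the Test Junkie API:

```python
def skip_if_known_bugs(meta):
    # Hypothetical skip function: Test Junkie passes the test's meta
    # to any skip function that declares a `meta` argument. Here we
    # skip any test that is linked to open bug tickets.
    return bool(meta.get("known_bugs"))


# Referenced without parentheses, it is evaluated just before the test runs:
#   @test(skip=skip_if_known_bugs, meta={"known_bugs": [12345]})
```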
from test_junkie.decorators import Suite, test

@Suite(skip=True)
class ExampleSuite:

    @test()
    def a_test(self):

        assert True is False
  • Suite level skip takes a boolean value; if True, all of the decorated functions in the suite will be skipped. The On Skip event listener will NOT be called; instead, On Class Skip will fire.

Retrying Tests/Suites

Test Junkie extends retry functionality at the test level and at the suite level. You can use both at the same time or individually. The code below uses both test and suite level retries.

from test_junkie.decorators import Suite, test

@Suite(retry=2)
class ExampleSuite:

    @test(retry=2)
    def a_test(self):

        assert True is False
  • Test level retry will retry the test immediately after a failure, until the test passes or the retry limit is reached.
  • Suite level retry will kick in after all of the tests in the suite have been executed and at least one test was unsuccessful. Test level retries will be honored again during the suite retry. Only unsuccessful tests will be retried.

With that said, the above test case will be executed up to 4 times in total.
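The arithmetic behind that count, assuming retry=N means up to N executions at each level:

```python
suite_retry = 2  # @Suite(retry=2)
test_retry = 2   # @test(retry=2)

# A test that never passes is attempted test_retry times per suite pass,
# and the whole suite is retried up to suite_retry times:
max_attempts = suite_retry * test_retry
print(max_attempts)
```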

Parameterized Tests

Test Junkie allows you to run parameterized test scenarios out of the box and it allows all data types to be used as parameters.

from test_junkie.decorators import Suite, test

@Suite()
class ExampleSuite:

    @test(parameters=[{"fruit": "apple"}, None, "blue", [1, 2, 3]])
    def a_test(self, parameter):

        assert isinstance(parameter, dict), "Expected dict. Actual data type: {actual}".format(actual=type(parameter)) 
  • Any time a parameterized test is defined, the decorated function must accept a parameter argument in its signature.
  • If a parameterized test fails and retry is used, only the parameter(s) the test failed with will be retried.

Parallel Test Execution

Test Junkie supports parallel execution out of the box. Two modes are available and both can be used at the same time.

  • Suite level threading: allows running up to N suites in parallel
  • Test level threading: allows running up to N test cases in parallel

N is the limit of threads that you want to use; you define it when you start running the tests.

If you use both modes together, Test Junkie will first spawn a thread to process a test suite, and that thread will then spawn more threads for the tests in scope of that suite. The thread running a suite won't exit until all of the tests in scope of that suite have finished running.

For usage examples see Using Parallel Execution.

Test Listeners

Test Junkie allows you to define test listeners, which let you execute your own code on specific test events. Defining listeners is optional. This feature is typically useful when building large frameworks, as it allows seamless integration of reporting, post-processing of errors, calculation of test metrics, alerts, artifact collection, etc.

Listeners are defined at the suite level and are supported by the @Suite decorator. This provides the flexibility to support different types of tests without adding complexity every time you need to support a new type of test.

In order to create a test listener, create a new class that inherits from TestListener, then override the event functions you wish to support. The test-level and class(suite)-level event functions that can be overridden are described in the sections that follow.

from test_junkie.listener import TestListener

class MyTestListener(TestListener):

    def __init__(self, **kwargs):

        TestListener.__init__(self, **kwargs)
    ...

On Success

On success event is triggered after a test has successfully executed, meaning the @beforeTest (if any), @test, and @afterTest (if any) decorated functions have run without producing an exception.

...
    def on_success(self):
        # Write your own code here
        pass 
    ...

On Fail

On failure event is triggered after a test produces an AssertionError. The AssertionError must be unhandled and raised during execution of a function decorated with @beforeTest (if any), @test, or @afterTest (if any). Make sure to include an exception argument in the method signature; the Exception object will be accessible through this argument.

...
    def on_failure(self, exception):
        # Write your own code here
        pass 
    ...

On Error

On error event is triggered after a test produces any exception other than AssertionError. The exception must be unhandled and raised during execution of a function decorated with @beforeTest (if any), @test, or @afterTest (if any). Make sure to include an exception argument in the method signature; the Exception object will be accessible through this argument.

...
    def on_error(self, exception):
        # Write your own code here
        pass 
    ...

On Ignore

On ignore event is triggered when a function decorated with @beforeClass produces an exception. In this unfortunate event, all of the tests under that test suite will be marked ignored. Make sure to include an exception argument in the method signature; the Exception object will be accessible through this argument.

On ignore event can also be triggered when incorrect arguments are passed to the @test decorator.

...
    def on_ignore(self, exception):
        # Write your own code here
        pass 
    ...

On Skip

On skip event is triggered, well, when tests are skipped. Skip is supported by the @test & @Suite decorators. See Skipping Tests/Suites for examples. The skip event can also be triggered when Using Runner with Tags.

...
    def on_skip(self):
        # Write your own code here
        pass 
    ...

On Cancel

On Cancel event is triggered sometime* after cancel() is called on the active Runner object. See Canceling Test Execution for more info. It is not guaranteed that this event will fire, however. If cancel() is called while the Runner is in the middle of processing a test suite, the event will be called for all of the remaining tests in that suite that have not yet been executed; previously executed tests won't be affected. Tests in the following suites won't be marked canceled either; those suites are "skipped", if you will, but On Class Cancel will be called on each of them.
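By analogy with the other events, an override for this event would follow the same pattern. The method name on_cancel is assumed from the event name and is not confirmed by this page:

```
...
    def on_cancel(self):
        # Write your own code here
        pass
    ...
```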

On Class Skip

On Class Skip event is triggered when test suites are skipped. Skip is supported by the @test & @Suite decorators. See Skipping Tests/Suites for examples.

...
    def on_class_skip(self):
        # Write your own code here
        pass 
    ...

On Class Cancel

On Class Cancel event is triggered sometime* after cancel() is called on the active Runner object. See Canceling test execution for more info.

Event will apply only to those suites that are executed in scope of that Runner object, see Running Test Suite(s) for more info.

...
    def on_class_cancel(self):
        # Write your own code here
        pass 
    ...

On Before Class Failure

On Before Class Failure event is triggered only when a function decorated with @beforeClass produces an AssertionError. Make sure to include an exception argument in the method signature; the Exception object will be accessible through this argument. On Ignore will also fire.

...
    def on_before_class_failure(self, exception):
        # Write your own code here
        pass 
    ...

On Before Class Error

On Before Class Error event is triggered only when a function decorated with @beforeClass produces an exception other than AssertionError. Make sure to include an exception argument in the method signature; the Exception object will be accessible through this argument. On Ignore will also fire.

...
    def on_before_class_error(self, exception):
        # Write your own code here
        pass 
    ...

On After Class Failure

On After Class Failure event is triggered only when a function decorated with @afterClass produces an AssertionError. Make sure to include an exception argument in the method signature; the Exception object will be accessible through this argument. No test level event listeners will be fired.

...
    def on_after_class_failure(self, exception):
        # Write your own code here
        pass 
    ...

On After Class Error

On After Class Error event is triggered only when a function decorated with @afterClass produces an exception other than AssertionError. Make sure to include an exception argument in the method signature; the Exception object will be accessible through this argument. No test level event listeners will be fired.

...
    def on_after_class_error(self, exception):
        # Write your own code here
        pass 
    ...

Meta

All of the TestListener class instance functions have access to the test's and suite's meta information, if any was passed to the @Suite or @test decorator. Metadata can be of any data type. You can use meta to set properties such as:

  • Test name, suite name, description, expected results etc - anything that can be useful in reporting
  • Test case IDs - if you have a test management system, leverage it to link test scripts directly to the test cases and further integrations can be implemented from there
  • Bug ticket IDs - if you have a bug tracking system, leverage it to link your test case with issues that are already known and allow you to process failures in a different manner and/or allow for other integrations with the tracking system
from test_junkie.decorators import Suite, test


@Suite(listener=MyTestListener, 
       meta={"name": "Your suite name", 
             "id": 123444})
class ExampleSuite:

    @test(meta={"name": "Your test name", 
                "id": 344123, 
                "known_bugs": [11111, 22222, 33333], 
                "expected": "Assertion must pass"})
    def a_test(self):

        assert True is True

Metadata that was set in the code above can be accessed in any of the event listeners like so:

from test_junkie.listener import TestListener


class MyTestListener(TestListener):

    def __init__(self, **kwargs):

        TestListener.__init__(self, **kwargs)

    def on_success(self):

        print("Suite name: {name}".format(name=self.kwargs["class_meta"]["name"]))
        print("Suite ID: {id}".format(id=self.kwargs["class_meta"]["id"]))
        print("Test name: {name}".format(name=self.kwargs["test_meta"]["name"]))
        print("Test ID: {id}".format(id=self.kwargs["test_meta"]["id"]))
        print("Expected result: {expected}".format(expected=self.kwargs["test_meta"]["expected"]))
        print("Known bugs: {bugs}".format(bugs=self.kwargs["test_meta"]["known_bugs"]))

Rules

You may have a situation where you find yourself copy-pasting code from one suite's @beforeClass or @beforeTest function(s) into another. Test Junkie allows you to define Rules in such cases. Rule definitions are reusable, similar to Listeners, and are also supported by the @Suite decorator.

In order to create Rules, create a new class that inherits from TestRules, then override the functions you wish to use.

from test_junkie.rules import TestRules

class MyRules(TestRules):

    def __init__(self, **kwargs):

        TestRules.__init__(self, **kwargs)

    def before_class(self):
        # write your code here
        pass

    def before_test(self):
        # write your code here
        pass

    def after_test(self):
        # write your code here
        pass

    def after_class(self):
        # write your code here
        pass

To use the Rules you just created, reference them in the suite definition:

from test_junkie.decorators import Suite


@Suite(rules=MyRules)
class ExampleSuite:
    ...

Execution priority vs the Decorators:

  • before_class() will run right before the function decorated with @beforeClass.
  • before_test() will run right before the function decorated with @beforeTest.
  • after_test() will run right after the function decorated with @afterTest.
  • after_class() will run right after the function decorated with @afterClass.

Failures/exceptions produced inside these functions are treated the same way as failures in their respective decorated functions.
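Putting the four bullets together, the overall call order for a suite with two tests can be traced in plain Python. This is a stand-in sketch of the documented ordering, not Test Junkie internals:

```python
def execution_order(tests):
    # Order in which Rules functions and decorated functions run,
    # per the priorities listed above.
    calls = ["before_class (rule)", "@beforeClass"]
    for name in tests:
        calls += ["before_test (rule)", "@beforeTest",
                  name, "@afterTest", "after_test (rule)"]
    calls += ["@afterClass", "after_class (rule)"]
    return calls


for step in execution_order(["a_test", "b_test"]):
    print(step)
```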

Tags

Test Junkie allows you to tag your test scenarios. You can use the tags to run or skip matching test cases when you run your tests. The following tag configurations are supported:

  • run_on_match_all - Will run test cases that match all of the tags in the list. Will trigger On Skip event for all of the tests that do not match the tags or do not have tags.
  • run_on_match_any - Will run test cases that match at least one tag in the list. Will trigger On Skip event for all of the tests that do not match the tags or do not have tags.
  • skip_on_match_all - Will skip test cases that match all of the tags in the list. Will trigger On Skip event.
  • skip_on_match_any - Will skip test cases that match at least one tag in the list. Will trigger On Skip event.

All of the configs can be used at the same time. However, this is the order that will be honored:

skip_on_match_all -> skip_on_match_any -> run_on_match_all -> run_on_match_any; whichever matches first determines whether the test is executed or skipped.
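The precedence above can be modeled as a plain function. This is a simplified sketch of the documented order, not Test Junkie's actual implementation:

```python
def decide(test_tags, tag_config):
    # Returns "skip" or "run" for a single test, honoring:
    # skip_on_match_all -> skip_on_match_any ->
    # run_on_match_all -> run_on_match_any
    tags = set(test_tags)
    cfg = {key: set(values) for key, values in tag_config.items()}

    if "skip_on_match_all" in cfg and cfg["skip_on_match_all"] <= tags:
        return "skip"
    if "skip_on_match_any" in cfg and cfg["skip_on_match_any"] & tags:
        return "skip"
    if "run_on_match_all" in cfg:
        return "run" if cfg["run_on_match_all"] <= tags else "skip"
    if "run_on_match_any" in cfg:
        return "run" if cfg["run_on_match_any"] & tags else "skip"
    return "run"
```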

See Using Runner with Tags for usage examples.

Examples

Test Suite

from random import randint
from test_junkie.decorators import test, Suite, beforeTest, beforeClass, afterTest, afterClass, meta
from example_package.example_listener import ExampleListener

# Listener here is optional as all of the other parameters
@Suite(listener=ExampleListener, retry=2, 
       meta=meta(suite_name="Demo Suite"))
class ExampleTestSuite(object):

    @beforeClass()
    def before_class(self):  # Functions are not restricted to any naming conventions
        print("BEFORE CLASS!")

    @beforeTest()
    def before_test(self):
        print("BEFORE TEST!")

    @afterTest()
    def after_test(self):
        print("AFTER TEST!")

    @afterClass()
    def after_class(self):
        print("AFTER CLASS!")

    # meta function is used for metadata, slightly cleaner than using a dict
    # all parameters are optional
    @test(parameters=[1, 2, 3, 4, 5], retry=2,
          meta=meta(name="Test 'A'",
                    test_id=344941,
                    known_bugs=[],
                    expected="Assertion must pass"), 
          tags=["component_a", "critical"])
    def a_test(self, parameter):  # Functions are not restricted to any naming conventions
        print("TEST 'A', param: ", parameter)
        assert randint(1, 5) == parameter, "your error message"

    # regular dict is used for metadata
    @test(meta={"name": "Test 'B'",
                "test_id": 344123,
                "known_bugs": [11111, 22222, 33333],
                "expected": "Assertion must pass"},
          tags=["component_a", "trivial", "known_failure"])
    def b_test(self):
        print("TEST 'B'")
        assert True is True

    @test(skip=True)
    def c_test(self):
        print("TEST 'C'")

Executing Test Suites

Use Runner from test_junkie.runner to run suites. The Runner constructor takes a list of suite classes; call run() to execute them.

from test_junkie.runner import Runner
from example_package.example_test_suite import ExampleTestSuite

runner = Runner([ExampleTestSuite])
runner.run()

Executing with Tags

Runner.run() supports the tag_config keyword, which defines the configuration you want to use for the tags. All of the supported configurations, as well as the honor priority, are defined in the Tags section.

runner.run(tag_config={"run_on_match_all": ["component_a", "critical"]})
runner.run(tag_config={"skip_on_match_any": ["trivial", "known_failure"]})

Using Parallel Test Execution

Will enable multithreading for suites and tests without any limitations:

runner = Runner([ExampleTestSuite, ExampleTestSuite2])
runner.run(suite_multithreading=True, test_multithreading=True)

Will enable multithreading for suites and tests but allows to run maximum of 5 suites and up to 2 tests per suite in parallel:

runner = Runner([ExampleTestSuite, ExampleTestSuite2])
runner.run(suite_multithreading=True, suite_multithreading_limit=5, 
           test_multithreading=True, test_multithreading_limit=2)

Note: the multithreading limit parameters are not supported yet.

For more info, see Parallel Test/Suite Execution.

Canceling Test Execution

If you are integrating Test Junkie into a bigger framework, it's possible that you would like to programmatically stop test execution. The good news is that Test Junkie lets you do just that, gracefully. If you call cancel() on the Runner object, the Runner will start marking tests and suites as canceled, which will trigger the respective event listeners:

Canceling execution does not abruptly stop the Runner: all of the suites will still "run", but in a manner similar to skipping, which lets suites & tests finish quickly, in their natural fashion, without locking up any resources on the machine where they run.

runner.cancel()
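cancel() is typically called from a different thread than the one blocked in run(). The call pattern can be sketched with a stand-in runner object (plain threading; FakeRunner is illustrative, not the Test Junkie Runner):

```python
import threading
import time


class FakeRunner:
    """Stand-in for the Runner API: run() stops gracefully after cancel()."""

    def __init__(self):
        self._cancelled = threading.Event()
        self.finished = []

    def run(self):
        for name in ["a_test", "b_test", "c_test"]:
            if self._cancelled.is_set():
                break  # remaining tests would be marked canceled
            self.finished.append(name)
            time.sleep(0.01)

    def cancel(self):
        self._cancelled.set()


runner = FakeRunner()
worker = threading.Thread(target=runner.run)
worker.start()
runner.cancel()  # request a graceful stop from another thread
worker.join()    # run() still returns normally, it is not killed
```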
