Advanced execution framework for test scenarios

Test Junkie is an advanced test runner for Python with built-in reporting and analytics. It is packed with configurable features.

Test Junkie is for:

  • QA Managers & Project Managers: the reports that Test Junkie generates will be extremely useful for identifying risks and poor performers on your team. This is especially useful if you have a large team.
  • QA engineers, SDETs, and automation architects! Test Junkie has so much built in that to create a framework with it, all you need to do is write a wrapper that passes settings and test suites to Test Junkie. You no longer need third party packages to run parameterized tests, or to implement threading yourself: it's all built in!

Test Junkie was designed with reporting in mind, and you can see its reporting capabilities just by looking at the demo console output or the demo HTML report.

Still in ALPHA: documentation may be incomplete, and the functionality of features is subject to change. If you find bugs, please report them.

Like this project? Support it by sharing it on your social media or donate through PayPal / back me on Patreon.

Installation

  • If you don't have Test Junkie installed: pip install test_junkie
  • If you have Test Junkie installed but want to install the latest: pip install test_junkie --upgrade

Features

Decorators

@Suite

Test Junkie enforces suite based test architecture. Thus, all tests must be defined within a class, and that class must be decorated with @Suite. See the example under Test Suites.

from test_junkie.decorators import Suite

@Suite()
class LoginFunctionality:
    ...

@Suite decorator supports the following decorator properties:

  • Meta: @Suite(meta=meta(name="Suite Name", known_bugs=[11111, 22222, 33333]))
  • Retry: @Suite(retry=2)
  • Skip: @Suite(skip=True)
  • Listeners: @Suite(listener=YourListener)
  • Rules: @Suite(rules=YourRules)
  • Parameters: @Suite(parameters=[{"fruits": ["apple", "peach"]}, None, "blue", [1, 2, 3]])
  • Parallel Restriction: @Suite(pr=[ATestSuite])
  • Parallelized: @Suite(parallelized=False)
  • Priority: @Suite(priority=1)
  • Feature: @Suite(feature="Login")
  • Owner: @Suite(owner="John Doe")

@beforeClass

This decorator prioritizes execution of the decorated function at the very beginning of a test suite. The decorated function is executed only once, before anything else in the suite. An unhandled exception raised in the decorated function is treated as a class failure, which marks all of the tests in the suite as ignored; the On Ignore event listener is then called for each of those tests.

from test_junkie.decorators import beforeClass

...
@beforeClass()
def a_function():
    ...

@beforeClass does not support any special decorator properties.

@beforeTest

This decorator prioritizes execution of the decorated function before every test case in the suite, so the decorated function is executed once per test case. An unhandled exception raised in the decorated function is treated as a test failure/error, and the respective On Fail or On Error event listener is called.

from test_junkie.decorators import beforeTest

...
@beforeTest()
def b_function():
    ...

@beforeTest does not support any special decorator properties.

@test

Test Junkie enforces suite based test architecture. Thus, all tests must be defined within a class and decorated with @test. See the example under Test Suites. An unhandled exception raised in the decorated function is treated as a test failure/error, and the respective On Fail or On Error event listener is called. A function decorated with @afterTest will not be executed if an exception is raised in the test case. The On Success event listener is called if the test passes.

from test_junkie.decorators import test

...

@test()
def a_test():
    ...

@test()
def b_test():
    ...

@test decorator supports the following decorator properties, each of which is covered elsewhere in this document: meta, retry, retry_on, no_retry_on, skip, parameters, parallelized_parameters, pr, parallelized, priority, tags, component, and owner.

@afterTest

This decorator de-prioritizes execution of the decorated function to the end of each test case in the suite, so the decorated function is executed once after every test case. An unhandled exception raised in the decorated function is treated as a test failure/error, and the respective On Fail or On Error event listener is called.

from test_junkie.decorators import afterTest

...
@afterTest()
def c_function():
    ...

@afterTest does not support any special decorator properties.

@afterClass

This decorator de-prioritizes execution of the decorated function to the very end of a test suite. The decorated function is executed only once, after everything else in the suite.

from test_junkie.decorators import afterClass

...
@afterClass()
def d_function():
    ...

@afterClass does not support any special decorator properties.

Skipping Tests/Suites

Test Junkie extends skipping functionality at the test level and at the suite level. You can use both at the same time or individually.

from test_junkie.decorators import Suite, test

@Suite()
class ExampleSuite:

    @test(skip=True)
    def a_test(self):

        assert True is False
  • Test level skip takes a boolean value; if True, the test will be skipped and the On Skip event listener will be called. Execution continues as usual if there are any remaining tests in the suite.
  • Test level skip can also take a function, passed either as my_function or my_function(). The former will be evaluated right before the test runs, while the latter is evaluated as soon as your suite is imported anywhere in your code.
    • If your function has a meta argument in its signature, Test Junkie will pass all of the test function's Meta information to it. This support exists to give you maximum flexibility to build custom business logic for skipping tests; see the sketch after this list.
    • The only requirement is that the function must return a boolean value when evaluation completes.
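
For instance, here is a minimal sketch of a function-based skip. The helper name skip_if_known_bugs is made up, and the assumption that the meta argument supports dict-style lookups follows how metadata is accessed in the Meta section below:

from test_junkie.decorators import Suite, test
from test_junkie.meta import meta

def skip_if_known_bugs(meta):  # Test Junkie passes the test's Meta info into this argument
    # Assumption: meta behaves like the dict shown in the Meta section,
    # so known_bugs can be looked up by key
    return bool(meta.get("known_bugs"))

@Suite()
class ExampleSuite:

    @test(skip=skip_if_known_bugs,  # evaluated right before the test runs
          meta=meta(known_bugs=[11111]))
    def a_test(self):

        assert True is False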
from test_junkie.decorators import Suite, test

@Suite(skip=True)
class ExampleSuite:

    @test()
    def a_test(self):

        assert True is False
  • Suite level skip takes a boolean value; if True, all of the decorated functions in the suite will be skipped. The On Skip event listener will NOT be called; instead, On Class Skip will fire.

Retrying Tests/Suites

Test Junkie extends retry functionality at the test level and at the suite level. You can use both at the same time or individually. The code below uses both test and suite level retries.

from test_junkie.decorators import Suite, test

@Suite(retry=2)
class ExampleSuite:

    @test(retry=2)
    def a_test(self):

        assert True is False
  • Test level retry will retry the test immediately after a failure, until the test passes or the retry limit is reached.
  • Suite level retry will kick in after all of the tests in the suite have been executed and at least one test is unsuccessful. Test level retries will be honored again during the suite retry. Only unsuccessful tests will be retried.

With that said, the test case above can be attempted up to 4 times in total: up to 2 attempts per suite run, and up to 2 suite runs.

Retry on Specific Exception

In addition to generic retries, Test Junkie supports retrying tests only on specific exception(s). The @test decorator accepts a retry_on property, which takes a list of exception types. If such a list is provided, the test will only be retried if it failed with one of those exception types.

...
    @test(retry=2, retry_on=[AssertionError])
    def a_test(self):
        # Will be retried because it will fail with AssertionError,
        # and it's configured to retry only on AssertionError
        assert True is False
...

No Retry on Specific Exception

Similar to Retry on Specific Exception, the @test decorator accepts a list of exception types via the no_retry_on property. If no_retry_on is provided, the test will be retried only if it failed with an exception type that is not in the no_retry_on list.

...
    @test(retry=2, no_retry_on=[AssertionError])
    def a_test(self):
        # Won't be retried because it will fail with AssertionError,
        # and it's configured not to retry on AssertionError
        assert True is False
...

Parameterized Tests

Test Junkie allows you to run parameterized test scenarios out of the box, and any data type can be used as a parameter.

Function objects can also be used to supply the parameters, as long as the function returns a list when executed. The advantage of a function object is that Test Junkie will run the function only when it is about to use the parameters. This is typically useful when the function takes a while to create the parameters: if you instead call the function in the decorator, it will be executed on import of the class, so even if you never run that particular class/suite, you will still waste time waiting for the function to generate the parameters.

...
def long_running_function():
    """
    Assume this function makes many API and/or DB calls to create dynamic config based on the output from the API/DB
    This config will be used for parameters
    Since its expensive, it makes sense to call this function only when we actually need to use the parameters
    """
    return ["some", "data", ...]
...

from test_junkie.decorators import Suite, test

@Suite()
class ExampleSuite:

    @test(parameters=[{"fruits": ["apple", "peach"]}, None, "blue", [1, 2, 3]])
    def a_test(self, parameter):

        print("Test parameter: {}".format(parameter))

    @test(parameters=long_running_function)  # This will be executed only when Test Junkie starts running this suite
    def b_test(self, parameter):

        print("Test parameter: {}".format(parameter))

    @test(parameters=long_running_function())  # This will be executed on import of this suite
    def c_test(self, parameter):

        print("Test parameter: {}".format(parameter)) 
  • Whenever a parameterized test is defined, the decorated function must accept parameter in its signature.
  • If parameterized test fails and retry is used, only the parameter(s) that test failed with will be retried.

Parameterized Suites

Suite level parameters work with a slightly different spin: they can apply to all of the decorated functions in the suite, and you control which special functions and tests use them. In any special function where you want to use suite level parameters, add suite_parameter to the function's signature.

In parameterized suites, tests that do not have suite_parameter in their signature will run only once, or as many times as needed to honor test level parameters and/or other test properties.

Similar to Test Level Parameters, you can use function objects to pass parameters to Test Junkie tests.

from test_junkie.decorators import Suite, test, beforeClass, beforeTest

@Suite(parameters=[{"fruits": ["apple", "peach"]}, None, "blue", [1, 2, 3]])
# or 
@Suite(parameters=long_running_function)  # will run when suite is executed by Test Junkie
# or
@Suite(parameters=long_running_function())  # will run on suite import
class ExampleSuite:

    @beforeClass()
    def before_class(self, suite_parameter):  # Setup functions can be parameterized in this way
        print("Before Class with suite parameter: {}".format(suite_parameter))

    @beforeTest()
    def before_test(self, suite_parameter):  # Setup functions can be parameterized in this way
        print("Before Test with suite parameter: {}".format(suite_parameter))

    @test()
    def a_test(self):  # Even though no params, this test will run once (there is no retry defined in this suite)

        pass

    @test(parameters=[1, 2, 3])
    def a_test(self, parameter):  # This test will honor only the test level parameters

        pass

    @test()
    def b_test(self, suite_parameter):

        print("Suite parameter: {}".format(suite_parameter))

    @test(parameters=[1, 2, 3])
    def c_test(self, parameter, suite_parameter):

        print("Test parameter: {}".format(parameter))
        print("Suite parameter: {}".format(suite_parameter))
  • Suite level parameters can be used at the same time with test level parameters.
  • If a parameterized test fails and retry is used, only the parameter(s) that the test failed with will be retried; yes, this applies to the suite level parameters as well.
  • If wrong parameters are passed in, the suite will be ignored, and retry won't be applicable in this case.

Parallel Test Execution

Test Junkie supports parallel execution out of the box. Two modes are available and both can be used at the same time:

  • Suite level multithreading: allows running N suites in parallel.
  • Test level multithreading: allows running N test cases in parallel (in total, not per suite).

N is the limit of threads that you want to use; it is defined via arguments passed to the run() function of the Runner instance.

  • suite_multithreading_limit: defines the max number of suites to run in parallel. The value has to be greater than 1 to enable multithreading.
  • test_multithreading_limit: defines the max number of tests to run in parallel. The value has to be greater than 1 to enable multithreading. This limit applies to parallelized parameterized tests as well.

See the example of kicking off threaded test execution.

Controlling Parallel Execution at Suite level

  • Let's say you have suites A, B, and C, and suite A may conflict with suite C if they run in parallel. Using the pr property (short for parallel restriction) of the @Suite decorator, which takes a list of class objects, you can tell Test Junkie that you don't want to run those suites in parallel.
    from my_suite.C import C
    from test_junkie.decorators import Suite
    
    @Suite(pr=[C])
    class A:
      ...
    
    Parallel restriction is bidirectional, meaning you only need to set it in A or C, not both (you can set it in both, but that will most likely lead to an import loop). Assuming you set it in A: when the time comes to run suite A, Test Junkie will check that suite C is not running. Similarly, when the time comes to run suite C, Test Junkie will check that suite A is not running, even though you did not set the restriction explicitly in suite C.
  • If you flat out don't want to run a suite in parallel with any other suites, you can also set the parallelized property of the @Suite decorator to False.

For usage examples see Using Parallel Execution.

Controlling Parallel Execution at Test level

  • If you have tests that may conflict with each other when run in parallel, you can tell Test Junkie, and those tests will never be executed at the same time. To do so, use the pr property of the @test decorator, which takes a list of test objects. pr stands for parallel restriction, and this restriction is bidirectional: if test A and test B cannot run in parallel, it is enough to set pr in one of them; you do not need to do it in both.

    ...
    @test(pr=[SecuritySuite.policy_change])
    def login():
      """
      Let's assume this test validates a simple login, which may be impacted by another test that validates security
      policy settings, which in turn may restrict login from certain IPs or revoke access to certain accounts. In that case
      we do not want to run the two in parallel, as that may produce a false negative, so we use pr to set a restriction. This will
      also prevent the "SecuritySuite.policy_change" test from running during the execution of this test.
      """
      ...
    
  • You may not want to run certain test cases in parallel with any other tests at all. In such cases, set the parallelized property of the @test decorator to False. Generally, non-parallelized tests get de-prioritized and will run at the very end, unless they have a Priority set.

  • Tests have an additional threaded mode; by default this mode is disabled, and it only applies to parameterized tests. If a test case is parameterized, you can choose to test those parameters in parallel. To do that, set the parallelized_parameters property of the @test decorator to True, as shown in the sketch after this list. test_multithreading_limit will still apply: each execution of the test with a parameter consumes a thread slot.
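
For instance, a minimal sketch of the threaded parameters mode (the test name is made up for illustration):

...
    @test(parameters=[1, 2, 3, 4, 5], parallelized_parameters=True)
    def data_test(self, parameter):
        # With test_multithreading_limit >= 2, several of these parameters
        # can be exercised at the same time, each consuming a thread slot
        print("Running with parameter: {}".format(parameter))
...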

For usage examples see Using Parallel Execution.

Test Listeners

Test Junkie allows you to define test listeners, which let you execute your own code on specific test events. Defining listeners is optional. This feature is typically useful when building large frameworks, as it allows for seamless integration of reporting, post-processing of errors, calculation of test metrics, alerts, artifact collection, etc.

Listeners that you want to use are defined at the suite level and are supported by the @Suite decorator. This gives you the flexibility to pair different types of tests with an appropriate listener object, without adding complexity to one single listener that has to support all types of tests. For example, if you are doing UI testing, you may have a listener that takes screenshots on specific events; if you are doing API testing, you can use a different listener object that has no screenshot logic. This helps avoid the complexity that nested if statements would otherwise add.

In order to create a test listener, create a new class that inherits from Listener. After that, you can override the functions that you wish to support.

Every function that you override must take **kwargs in its signature; take a look at the examples below.

The following test level functions can be overridden: on_success, on_failure, on_error, on_skip, on_ignore, and on_cancel.

The following class (suite) level functions can be overridden: on_class_in_progress, on_class_skip, on_class_cancel, on_class_ignore, on_before_class_failure, on_before_class_error, on_after_class_failure, on_after_class_error, and on_class_complete.

from test_junkie.listener import Listener

class MyTestListener(Listener):

    def __init__(self, **kwargs):

        Listener.__init__(self, **kwargs)
    ...

On Success

The On Success event is triggered after a test has executed successfully, meaning the functions decorated with @beforeTest (if any), @test, and @afterTest (if any) have run without producing an exception.

...
    def on_success(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Fail

The On Fail event is triggered after a test has produced an AssertionError. The AssertionError must be unhandled and thrown during code execution in functions decorated with @beforeTest (if any), @test, or @afterTest (if any).

Worth noting that this event will provide:

  • exception as part of the kwargs, this is the actual exception that was thrown during the test.
  • trace as part of the kwargs, this is the actual full traceback for the exception (aka traceback.format_exc()).
...
    def on_failure(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Error

The On Error event is triggered after a test has produced any exception other than AssertionError. The exception must be unhandled and thrown during code execution in functions decorated with @beforeTest (if any), @test, or @afterTest (if any).

Worth noting that this event will provide:

  • exception as part of the kwargs, this is the actual exception that was thrown during the test.
  • trace as part of the kwargs, this is the actual full traceback for the exception (aka traceback.format_exc()).
...
    def on_error(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Ignore

The On Ignore event is triggered when a function decorated with @beforeClass produces an exception, or when incorrect arguments are passed to the @test decorator.

Worth noting that this event will provide:

  • exception as part of the kwargs, this is the actual exception that was thrown during the test.
  • trace as part of the kwargs, this is the actual full traceback for the exception (aka traceback.format_exc()).
...
    def on_ignore(self, **kwargs):
        # Write your own code here
        print(kwargs)
    ...

On Cancel

The On Cancel event is triggered sometime after cancel() is called on the active Runner object; see Canceling Test Execution for more info. It is not guaranteed that this event will be called for every test, however. Assuming cancel() was called while the Runner is in the middle of processing a test suite, the event will be called for all of the remaining tests in that suite that have not yet been executed; previously executed tests won't be affected. Tests in the following suites won't be marked canceled either; those suites will effectively be skipped, but On Class Cancel will be called for each of them.

...
    def on_cancel(self, **kwargs):
        # Write your own code here
        print(kwargs)
    ...

On Skip

The On Skip event is triggered, well, when tests are skipped. Skip is supported by the @test & @Suite decorators; see Skipping Tests/Suites for examples. The Skip event can also be triggered when Using Runner with tags.

...
    def on_skip(self, **kwargs):
        # Write your own code here
        print(kwargs)
    ...

On Class In Progress

The On Class In Progress event is triggered when Test Junkie starts running the class (suite). If the class was skipped or canceled, this event won't fire. This event can only fire once per suite, regardless of the number of suite parameters or suite retries.

...
    def on_class_in_progress(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Class Skip

The On Class Skip event is triggered when test suites are skipped. Skip is supported by the @test & @Suite decorators; see Skipping Tests/Suites for examples.

...
    def on_class_skip(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Class Cancel

The On Class Cancel event is triggered sometime after cancel() is called on the active Runner object. See Canceling Test Execution for more info.

The event applies only to those suites that are executed in the scope of that Runner object; see Running Test Suite(s) for more info.

...
    def on_class_cancel(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Class Ignore

The On Class Ignore event is triggered when Test Junkie detects bad arguments used for @Suite properties. For example, if you pass in an empty parameters list, it does not make sense to run any tests in the suite: it is assumed that either the setup functions or the tests rely on those parameters, and since they are empty, the test scenarios would not be accurate, so Test Junkie ignores the suite.

Worth noting that this event will provide:

  • exception as part of the kwargs, this is the actual exception that was thrown during the test.
  • trace as part of the kwargs, this is the actual full traceback for the exception (aka traceback.format_exc()).
...
    def on_class_ignore(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Before Class Failure

The On Before Class Failure event is triggered only when a function decorated with @beforeClass produces an AssertionError. On Ignore will also fire.

Worth noting that this event will provide:

  • exception as part of the kwargs, this is the actual exception that was thrown during the test.
  • trace as part of the kwargs, this is the actual full traceback for the exception (aka traceback.format_exc()).
...
    def on_before_class_failure(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Before Class Error

The On Before Class Error event is triggered only when a function decorated with @beforeClass produces an exception other than AssertionError. On Ignore will also fire.

Worth noting that this event will provide:

  • exception as part of the kwargs, this is the actual exception that was thrown during the test.
  • trace as part of the kwargs, this is the actual full traceback for the exception (aka traceback.format_exc()).
...
    def on_before_class_error(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On After Class Failure

The On After Class Failure event is triggered only when a function decorated with @afterClass produces an AssertionError. No test level event listeners will be fired.

Worth noting that this event will provide:

  • exception as part of the kwargs, this is the actual exception that was thrown during the test.
  • trace as part of the kwargs, this is the actual full traceback for the exception (aka traceback.format_exc()).
...
    def on_after_class_failure(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On After Class Error

The On After Class Error event is triggered only when a function decorated with @afterClass produces an exception other than AssertionError. No test level event listeners will be fired.

Worth noting that this event will provide:

  • exception as part of the kwargs, this is the actual exception that was thrown during the test.
  • trace as part of the kwargs, this is the actual full traceback for the exception (aka traceback.format_exc()).
...
    def on_after_class_error(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

On Class Complete

The On Class Complete event is triggered when Test Junkie is done running all of the tests within the class (suite). If the class was skipped or canceled, this event won't fire. This event can only fire once per suite, regardless of the number of suite parameters or suite retries.

...
    def on_class_complete(self, **kwargs):
        # Write your own code here
        print(kwargs) 
    ...

Meta

Functions of classes that are assigned as listeners (and inherit from Listener) have access to the test's and suite's meta information, if such was passed to the @Suite or @test decorator. Metadata can be of any data type and has no effect on how Test Junkie runs the tests. Metadata is simply there for you, so you can associate additional details with a test case and use them in your testing process if necessary.

You can use meta to set properties such as:

  • Test name, suite name, description, expected results etc - anything that can be useful in reporting
  • Test case IDs - if you have a test management system, leverage it to link test scripts directly to the test cases and further integrations can be implemented from there
  • Bug ticket IDs - if you have a bug tracking system, leverage it to link your test case with issues that are already known and allow you to process failures in a different manner and/or allow for other integrations with the tracking system
from test_junkie.decorators import Suite, test
from test_junkie.meta import meta


@Suite(listener=MyTestListener, 
       meta=meta(name="Your suite name", id=123444))
class ExampleSuite:

    @test(meta=meta(name="Your test name", 
                    id=344123, 
                    known_bugs=[11111, 22222, 33333],
                    expected="Assertion must pass"))
    def a_test(self):

        assert True is True

Metadata that was set in the code above can be accessed in any of the Test Listeners like so:

from test_junkie.listener import Listener


class MyTestListener(Listener):

    def __init__(self, **kwargs):

        Listener.__init__(self, **kwargs)

    def on_success(self, **kwargs):

        class_meta = kwargs.get("properties").get("class_meta")
        test_meta = kwargs.get("properties").get("test_meta")

        print("Suite name: {name}".format(name=class_meta["name"]))
        print("Suite ID: {id}".format(id=class_meta["id"]))
        print("Test name: {name}".format(name=test_meta["name"]))
        print("Test ID: {id}".format(id=test_meta["id"]))
        print("Expected result: {expected}".format(expected=test_meta["expected"]))
        print("Known bugs: {bugs}".format(bugs=test_meta["known_bugs"]))

Meta information can be updated and/or added from within your test cases using the Meta.update() function. Keep in mind that only test level meta can be updated; suite level meta should never change. Meta.update() takes up to 3 positional arguments, which are used to locate the correct TestObject:

  • self, the class instance of the current test.
  • parameter: (optional) this is the current parameter that the test is running with. If test case is not parameterized, do not pass anything.
  • suite_parameter: (optional) this is the current suite parameter that the test is running with. If test case is not parameterized with suite level parameters, do not pass anything.

Any other arguments passed to the Meta.update() function will be pushed to the test's meta definition and will be available in the Test Listeners, as shown in the example above. If you call update with a key that already exists in the meta definition, the value for that key will be overwritten.

from test_junkie.decorators import test
from test_junkie.meta import Meta
...
@test()
def a_test(self):
    # this test does not have any parameters, thus you only have to pass self to the Meta.update() function
    ...
    Meta.update(self, name="new test name", expected="updated expectation")
    ...

@test(parameters=[1, 2, 3])
def b_test(self, parameter):
    # this test is running with test parameters, thus you have to pass it to the Meta.update() function
    ...
    Meta.update(self, parameter=parameter, name="new test name", expected="updated expectation")
    ...

@test(parameters=[1, 2, 3])
def c_test(self, parameter, suite_parameter):
    # this test is running with test and suite parameters, thus you have to pass those to the Meta.update() function
    ...
    Meta.update(self, parameter=parameter, suite_parameter=suite_parameter,
                name="new test name", expected="updated expectation") 
    ...

Suite & Test Assignees

If you have a large QA team that uses the same framework for test automation, you will be pleased to know that Test Junkie supports test assignees or test owners.

Owners of tests can be defined in two ways:

  • At the suite level, using the owner property supported by the @Suite decorator, which takes a string. This will apply the owner to all of the underlying tests in that suite.
  • At the test level, using the owner property supported by the @test decorator, which takes a string. This will overwrite the @Suite owner for that particular test if one was set.

Test Junkie can do specialized Reporting based on test assignees/owners.
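
Here is a minimal sketch of both ways to set owners (the suite, test, and owner names are made up for illustration):

from test_junkie.decorators import Suite, test

@Suite(owner="John Doe")  # applies to every test in this suite
class CheckoutSuite:

    @test()
    def a_test(self):  # owned by "John Doe" via the suite level setting
        pass

    @test(owner="Jane Doe")  # overrides the suite level owner for this test
    def b_test(self):
        pass

You could then run only Jane's tests with runner.run(owners=["Jane Doe"]).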

Rules

You may have a situation where you find yourself copy-pasting code from one suite's @beforeClass or @beforeTest function(s) into another. Test Junkie lets you define Rules for such cases. Rule definitions are reusable, similar to the Listeners, and are also supported by the @Suite decorator.

In order to create Rules, create a new class that inherits from Rules. After that, you can override the functions that you wish to use.

from test_junkie.rules import Rules

class MyRules(Rules):

    def __init__(self, **kwargs):

        Rules.__init__(self, **kwargs)

    def before_class(self):
        # write your code here
        pass

    def before_test(self):
        # write your code here
        pass

    def after_test(self):
        # write your code here
        pass

    def after_class(self):
        # write your code here
        pass

To use the Rules you just created, reference them in the suite definition:

from test_junkie.decorators import Suite


@Suite(rules=MyRules)
class ExampleSuite:
...

Execution priority vs the Decorators:

  • before_class() will run right before the function decorated with @beforeClass.
  • before_test() will run right before the function decorated with @beforeTest.
  • after_test() will run right after the function decorated with @afterTest.
  • after_class() will run right after the function decorated with @afterClass.

Failures/exceptions produced inside these functions will be treated similarly to those from their respective decorators.

Priority

Test Junkie has a priority system that is optimized for performance. It is possible to influence priority for tests and suites. Priority starts at 1 and goes up from there, with 1 being the highest priority. To set priority, use the priority property of the @Suite or @test decorator; see the sketch after this list.

  • Suites & tests without a priority and with parallelized execution disabled get de-prioritized the most
  • Suites & tests without a priority and with parallelized execution enabled get prioritized higher
  • Suites & tests with a priority get prioritized according to the priority that was set; they are always prioritized above those that do not have any priority
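
A minimal sketch of setting priorities (the suite and test names are made up for illustration):

from test_junkie.decorators import Suite, test

@Suite(priority=1)  # scheduled ahead of suites that have no priority
class SmokeSuite:

    @test(priority=1)  # highest priority within this suite
    def critical_path(self):
        pass

    @test(priority=2)  # runs after priority 1 tests
    def important_check(self):
        pass

    @test()  # no priority: scheduled after all prioritized tests
    def secondary_check(self):
        pass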

Features & Components

Execution of tests can be initiated based on features and/or components, similar to Tags.

  • Suites can be labeled with a feature property
  • Tests can be labeled with a component property

Labeling suites and tests affects the way Reports are presented to you. The report will be broken down by features and components and their health status, based on the test results.

It is highly recommended to leverage this feature from the very beginning and structure your regression coverage so that, when there is an update to the codebase for an existing feature, you can kick off the subset of tests that covers the regression for the feature, or the component of the feature, that was updated.

For example, let's say there is a Login feature. Within that feature, there may be components for regular Authentication, OAuth, Two Factor Authentication, Logout, etc. Now, let's say there is an update to the Login feature that touches only the code path for OAuth: you can have Test Junkie run only the tests labeled with component="OAuth".
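
Here is a minimal sketch of that labeling (the suite, test, and label names are made up for illustration):

from test_junkie.decorators import Suite, test

@Suite(feature="Login")
class LoginSuite:

    @test(component="OAuth")
    def oauth_redirect(self):
        pass

    @test(component="2FactorAuth")
    def token_expiry(self):
        pass

With this labeling in place, runner.run(features=["Login"], components=["OAuth"]) would run only oauth_redirect.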

Tags

Test Junkie allows you to tag test scenarios at the @test decorator level. You can use the tags to run, or skip, the test cases that match them when you run your tests. The following tag configurations are supported by the Runner's run() function:

  • run_on_match_all - Will run test cases that match all of the tags in the list. Will trigger the On Skip event for all of the tests that do not match the tags or have no tags.
  • run_on_match_any - Will run test cases that match at least one tag in the list. Will trigger the On Skip event for all of the tests that do not match the tags or have no tags.
  • skip_on_match_all - Will skip test cases that match all of the tags in the list. Will trigger the On Skip event.
  • skip_on_match_any - Will skip test cases that match at least one tag in the list. Will trigger the On Skip event.

All of the configs can be used at the same time. However, they are honored in this order:

skip_on_match_all -> skip_on_match_any -> run_on_match_all -> run_on_match_any; whichever matches first determines whether a test is executed or skipped. A short sketch of decorator level tagging follows.
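
For instance, a minimal sketch of tagging at the decorator level (the test and tag names are made up for illustration):

...
    @test(tags=["critical", "pre_deploy"])
    def payment_flow(self):
        pass

    @test(tags=["trivial"])
    def tooltip_text(self):
        pass
...

With these tags, runner.run(tag_config={"run_on_match_any": ["critical"]}) would run payment_flow and trigger On Skip for tooltip_text.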

See Using Runner with Tags for usage examples.

Reporting

Test Junkie's HTML Report

Live Demo

Test Junkie is tracking a number of metrics during test execution:

  • Absolute KPIs (# of tests executed, % of passed tests, total runtime, average time per test etc)
  • Local resource trends for CPU and MEM (you'll need to set monitor_resources=True when initializing the Runner object to get this data)
  • Test results by Features
  • Test results by Tags
  • Test results by Assignees
  • More coming soon...

Test Junkie HTML Report Graphics

You can ask Test Junkie to visualize those metrics in the form of an HTML report. To do that, provide a path to the file where you want to save the report when initializing the Runner. Once the tests are done running, the report will be produced at the provided location.

For example:

from test_junkie.runner import Runner
runner = Runner(suites=[...], html_report="/path/to/file/report.html")

Big thanks to Charts JS! Without their charts, visualization of data would not be possible.

JSON Reports

JSON reports are used under the hood for all of the other reports produced by Test Junkie. You can use JSON reports to slice the data in whatever way is meaningful to you.

JSON reports can be extracted from a number of objects, all of which become accessible after the tests have finished executing:

from test_junkie.runner import Runner
runner = Runner(suites=[...])
aggregator = runner.run()
  • From the SuiteObject & TestObject classes:
    suite_objects = runner.get_executed_suites()
    for suite in suite_objects:
        test_objects = suite.get_test_objects()
        print(suite.metrics.get_metrics())
        for test in test_objects:
            print(test.metrics.get_metrics())
    
  • From Aggregator object:
    print(aggregator.get_report_by_tags())
    print(aggregator.get_report_by_features())
    print(aggregator.get_basic_report())
    print(aggregator.get_report_by_owner())
    

Jenkins XML Report

Test Junkie can also produce basic XML reports. Similar to the HTML Report, you'll need to initialize the Runner object with the appropriate arguments to get the XML file. Once the tests are done running, the report will be produced at the provided location.

For example:

from test_junkie.runner import Runner
runner = Runner(suites=[...], xml_report="/path/to/file/report.xml")

It is advised not to use this file for analyzing test results, as it is very generic. This feature exists primarily to support Jenkins' JUnit Plugin, which can visualize this data in a trend graph of builds vs test results over time.

In reporting, parameterized tests are treated as standalone tests, so you may see multiple entries for the same test name; this is OK if the test is parameterized. For example, test a below is parameterized, so the following is OK:

<root>
    <testsuite failures="0" name="ExampleSuite" passed="4" tests="4">
        <testcase name="a" status="success" />
        <testcase name="a" status="success" />
        <testcase name="a" status="success" />
        <testcase name="b" status="success" />
    </testsuite>
</root>

Runner Object

The Runner object is what you will use to run the test suites. At this point the Runner supports enough different configurations that it deserves its own section. As shown in the Examples section, in order to run tests you need to create an instance of the Runner and then call run() on it. The Runner's constructor accepts the following properties:

  • Suites: runner = Runner(suites=[Login, AuthMicroServices, Registration]), this is the only required property, because Test Junkie needs to know what you want to run. Make sure you are supplying a list of class objects that were decorated with @Suite decorator.
  • Monitor Resources: runner = Runner(suites=[...], monitor_resources=True), this enables Test Junkie to track memory and CPU usage on the machine, while it is running tests (disabled by default). This data can be used by the HTML Reporting.
  • HTML Report: runner = Runner(suites=[...], html_report="path/to/report.html"), this enables Test Junkie HTML report. Once the tests complete, Test Junkie will process all of the available data (if you enabled monitor_resources, memory and cpu information will be included into the HTML report) and save the report to the location that you provided.
  • XML report: runner = Runner(suites=[...], xml_report="path/to/report.xml"), similar to the HTML report, but instead creates very basic, Jenkins friendly, XML report.

Exposed methods

Runner instance has 3 exposed methods:

  • run(): This method is special. Not only does it initiate the actual test cycle, as the name suggests, but it also lets you define more configurations for running your tests, such as:
    • Features: runner.run(features=["Login"]), you can tag suites based on features that they are testing, and you can choose to run tests only for those features.
    • Components: runner.run(components=["2FactorAuth"]), you can tag tests based on components of a feature that they are testing, and you can choose to run tests only for those components.
    • Owners: runner.run(owners=["John Cena"]), you can tag tests based on who owns the feature/component, and you can choose to run only the tests that belong to particular member(s) of your team.
    • Tag Config: runner.run(tag_config={"run_on_match_all": ["pre_deploy", "critical"]}), a single test can have many tags, you can use tag_config to run tests only for the tag or collection of tags that you care about at the moment. You can also use it in order to skip tests with certain tags.
    • Tests: runner.run(tests=[LoginSuite.positive_login, LoginSuite.negative_login]), and of course you can just pass the test objects that you want to run.
    • Suite Multi-threading: runner.run(suite_multithreading_limit=5), enables multi-threading for suites.
    • Test Multi-threading: runner.run(test_multithreading_limit=5), enables multi-threading for tests.
  • cancel(): This will trigger a graceful exit for the currently active test cycle. Read more about it here.
  • get_executed_suites(): This will return a list of test_junkie.objects.SuiteObject instances. A SuiteObject can be used to analyze anything from test results to the performance of tests in great detail.

Examples

Test Suite

from random import randint
from test_junkie.decorators import test, Suite, beforeTest, beforeClass, afterTest, afterClass
from test_junkie.meta import meta
from example_package.example_listener import ExampleListener

# The listener is optional, as are all of the other parameters
@Suite(listener=ExampleListener, retry=2, 
       meta=meta(suite_name="Demo Suite"))
class ExampleTestSuite(object):

    @beforeClass()
    def before_class(self):  # Functions are not restricted to any naming conventions
        print("BEFORE CLASS!")

    @beforeTest()
    def before_test(self):
        print("BEFORE TEST!")

    @afterTest()
    def after_test(self):
        print("AFTER TEST!")

    @afterClass()
    def after_class(self):
        print("AFTER CLASS!")

    # the meta function is used for metadata; slightly cleaner than using a dict
    # all parameters are optional
    @test(parameters=[1, 2, 3, 4, 5], retry=2,
          meta=meta(name="Test 'A'",
                    test_id=344941,
                    known_bugs=[],
                    expected="Assertion must pass"), 
          tags=["component_a", "critical"])
    def a_test(self, parameter):  # Functions are not restricted to any naming conventions
        print("TEST 'A', param: ", parameter)
        assert randint(1, 5) == parameter, "your error message"

    # regular dict is used for metadata
    @test(meta={"name": "Test 'B'",
                "test_id": 344123,
                "known_bugs": [11111, 22222, 33333],
                "expected": "Assertion must pass"},
          tags=["component_a", "trivial", "known_failure"])
    def b_test(self):
        print("TEST 'B'")
        assert True is True

    @test(skip=True)
    def c_test(self):
        print("TEST 'C'")

Executing Test Suites

Use the run() function of a Runner instance to start running tests. run() supports a number of properties, demonstrated in the sections below:

from test_junkie.runner import Runner
from example_package.example_test_suite import ExampleTestSuite

runner = Runner([ExampleTestSuite])
runner.run()

Run tests for certain Features

You can request Test Junkie's Runner to run() regression for specific features:

runner.run(features=["Login"])

Run tests for certain Components

You can request Test Junkie's Runner to run() regression for specific components:

runner.run(components=["OAuth"])

or for specific components of a feature:

runner.run(features=["Login"], components=["OAuth"])

Executing with Tags

Runner.run() supports the tag_config keyword, which defines the configuration you want to use for the tags. All of the supported configurations, and the order in which they are honored, are defined in the Tags section.

runner.run(tag_config={"run_on_match_all": ["component_a", "critical"]})
runner.run(tag_config={"skip_on_match_any": ["trivial", "known_failure"]})

Run tests assigned to specific owners

If you want to run test cases that are assigned to specific team members:

runner.run(owners=["John Doe", "Jane Doe"])

Run specific test cases

If you want to run specific test cases:

runner.run(tests=[ExampleSuiteA.test_a, ExampleSuiteB.test_b])

Using Parallel Test Execution

runner = Runner([ExampleTestSuite, ExampleTestSuite2])
runner.run(suite_multithreading_limit=5, test_multithreading_limit=5)

For more info, see Parallel Test/Suite Execution.

Canceling Test Execution

If you are integrating Test Junkie into a bigger framework, it's possible that you will want to stop test execution programmatically. The good news is that Test Junkie allows you to do just that, gracefully. If you call cancel() on the Runner object, the Runner will start marking tests and suites as canceled, which will trigger the respective event listeners:

Canceling execution does not abruptly stop the Runner: all of the suites will still "run", but in a manner similar to skipping, which allows suites & tests to finish quickly, in their natural fashion, without locking up any resources on the machine where they run.

runner.cancel()
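
For instance, here is a minimal sketch that cancels a run after a time budget; the watchdog thread and the 300 second budget are illustrative assumptions, not part of Test Junkie's API:

import threading
from test_junkie.runner import Runner
from example_package.example_test_suite import ExampleTestSuite

runner = Runner([ExampleTestSuite])

# Cancel the test cycle if it is still going after 5 minutes
watchdog = threading.Timer(300, runner.cancel)
watchdog.start()

runner.run()
watchdog.cancel()  # the run finished in time; stop the watchdog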

Bug report

If you found an issue with Test Junkie, please file a bug report.

All bug reports must have:

  1. Python version python --version
  2. Test Junkie version pip show test_junkie
  3. Command used, if running via terminal
  4. Smallest code snippet that can reproduce the issue
  5. Expected behaviour
  6. Actual behaviour
