Toolkit for building automated integration checks

Questions Three

A Library for Serious Software Interrogators (and silly ones too)

Stop! Who would cross the Bridge of Death must answer me these questions three, ere the other side he see.

-- The Keeper

Why you want this

The vast majority of support for automated software checking falls into two main groups: low-level unit checking tools that guide design and maintain control over code as it is being written, and high-level system checking tools that reduce the workload of human testers after the units have been integrated.

The first group is optimized for precision and speed. A good unit check proves exactly one point in milliseconds. The second group is optimized for efficient use of human resources, enabling testers to repeat tedious actions without demanding much (or any) coding effort.

Engineering is all about trade-offs. We can reduce the coding effort, but only if we are willing to sacrifice control. This makes the existing high-level automation tools distinctly unsatisfactory to testers who would prefer the opposite trade: more control in exchange for the need to approach automation as a bona-fide software development project.

If you want complete control over your system-level automation and are willing to get some coding dirt under your fingernails in exchange, then Questions Three could be your best friend. As a flexible library rather than an opinionated framework, it will support you without dictating structures or rules. Its features were designed to work together, but you can use them separately or even integrate them into the third-party or homegrown framework of your choice.

A note on heretical terminology

The vast majority of software professionals refer to inspection work done by machines as "automated testing." James Bach and Michael Bolton make a strong case that this is a dangerous abuse of the word "testing" and suggest that we use "checking" instead when we talk about executing a procedure with a well-defined expected outcome.

Questions Three tries to maintain neutrality in this debate. Where practical, it lets you choose whether you want to say "test" or "check." Internally, it uses "test" for consistency with third-party libraries. As the public face of the project, this documentation falls on the "check" side. It says "check suite" where most testers would say "test suite."

Orientation Resources

  • Article: "Waiter, There's a Database in My Unit Test!" explains the differences between unit, integration, and system testing and the role of each.

  • Video: "Industrial Strength Automation" presentation from STARWEST 2019 makes the cases for and against building a serious automation program. It concludes with an extended discussion on the history, purpose, and design of Questions Three.

What's in the Box

Optional Packages

Quick Start

Install questions-three

pip install questions-three

Write the suite

from questions_three.scaffolds.check_script import check, check_suite

with check_suite('ExampleSuite'):

    with check('A passing check'):
        assert True, 'That was easy'

Run the suite

No special executors are required. Just run the script as you would any other Python program (substitute whatever filename you saved the suite under for

python
Review the results

The console output should look like this:

2018-08-13 14:52:55,725 INFO from questions_three.reporters.event_logger.event_logger: Suite "ExampleSuite" started
2018-08-13 14:52:55,726 INFO from questions_three.reporters.event_logger.event_logger: Check "A passing check" started
2018-08-13 14:52:55,726 INFO from questions_three.reporters.event_logger.event_logger: Check "A passing check" ended
2018-08-13 14:52:55,729 INFO from questions_three.reporters.event_logger.event_logger: Suite "ExampleSuite" ended

There should also be a reports directory containing the run's reports:

> ls reports
ExampleSuite.xml    jenkins_status

ExampleSuite.xml is a report in the JUnit XML format that can be consumed by many report parsers, including Jenkins CI. It gets produced by the junit_reporter module.

jenkins_status is a plain text file that aggregates the results of all test suites from a batch into a single result which Jenkins can display. It gets produced by the jenkins_build_status module.


Scaffolds

Scaffolds provide a basic structure for your checks. Their most important function is to publish events as your checks start, end, and fail.

The top-to-bottom script scaffold

from questions_three.scaffolds.check_script import check, check_suite

with check_suite('ExampleSuite'):

    with check('A passing check'):
        assert True, 'That was easy'

    with check('A failing check'):
        assert False, 'Oops'

If you don't like saying "check," you can say "test" instead:

from questions_three.scaffolds.test_script import test, test_suite

with test_suite('ExampleSuite'):

    with test('A passing check'):
        assert True, 'That was easy'

    with test('A failing check'):
        assert False, 'Oops'

This code is an ordinary executable Python script, so you can run it directly.


The xUnit style scaffold

As its name suggests, the xUnit scaffold implements the well-worn xUnit pattern.

from questions_three.scaffolds.xunit import TestSuite, skip

class MyXunitSuite(TestSuite):

    def setup_suite(self):
        # Perform setup that affects all tests here.
        # Changes to "self" will affect all tests.
        print('This runs once at the start of the suite')

    def teardown_suite(self):
        print('This runs once at the end of the suite')

    def setup(self):
        # Perform setup for each test here.
        # Changes to "self" will affect the current test only.
        print('This runs before each test')

    def teardown(self):
        print('This runs after each test')

    def test_that_passes(self):
        # The method name is xUnit magic:
        # methods named "test..." get treated as test cases.
        print('This test passes')

    def test_that_fails(self):
        print('This test fails')
        assert False, 'I failed'

    def test_that_errs(self):
        print('This test errs')
        raise RuntimeError('I tried to think but nothing happened')

    def test_that_skips(self):
        print('This test skips')
        skip("Don't do that")

The most important advantage of the xUnit scaffold over the script one is that it automatically repeats the same set-up and tear-down routines between test_... functions. Its main disadvantage is that the suites aren't as beautiful to read.

Thanks to some metaclass hocus-pocus, which you're free to gawk at in the source code, this too is an ordinary Python file that you can execute directly.


The Test Table Scaffold

The Test Table scaffold was designed to support two use cases:

  1. You would like to repeat the same procedure with different sets of arguments.
  2. You would like to execute the same procedure multiple times to measure performance.

Example test table that varies arguments

from expects import expect, equal
from questions_three.scaffolds.test_table import execute_test_table

TABLE = (
    ('x', 'y', 'expect sum', 'expect exception'),
    (2, 2, 4, None),
    (1, 0, 1, None),
    (0, 1, 0, None),
    (0.1, 0.1, 0.2, None),
    (1, 'banana', None, TypeError),
    (1, '1', None, TypeError),
    (2, 2, 5, None))


def test_add(*, x, y, expect_sum):
    expect(x + y).to(equal(expect_sum))


execute_test_table(
    suite_name='TestAddTwoThings', table=TABLE, func=test_add)

Example test table that measures performance

from questions_three.scaffolds.test_table import execute_test_table

TABLE = (
    ('operation', 'sample size'),
    ('1 + 1', 30),
    ('1 * 1', 60),
    ('1 / 1', 42))


def calculate(operation):
    # Illustrative body: evaluate the operation under measurement
    eval(operation)


execute_test_table(table=TABLE, func=calculate, randomize_order=True)

The optional randomize_order argument instructs the scaffold to execute the rows in a random order (to mitigate systematic bias that could affect measurements).
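
The effect of randomize_order can be sketched in plain Python. This is a generic illustration, not the scaffold's internals: the header row stays put while the data rows are shuffled.

```python
import random

TABLE = (
    ('operation', 'sample size'),
    ('1 + 1', 30),
    ('1 * 1', 60),
    ('1 / 1', 42))

header, *rows = TABLE
random.shuffle(rows)          # execute data rows in random order
randomized = (header, *rows)  # the header still describes the columns

assert randomized[0] == ('operation', 'sample size')
assert sorted(randomized[1:]) == sorted(TABLE[1:])
```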

For each row that exits cleanly (no assertion failures or other exceptions), the scaffold publishes a SAMPLE_MEASURED event that a reporter can collect. For example, the built-in EventLogger logs each of these events, including the row execution time.

Like the other built-in scaffolds, the Test Table produces plain old Python executable scripts.

Building your own scaffold

Nothing stops you from building your own scaffold. The test_script scaffold makes a good example of the services your scaffold should provide. The xUnit scaffold is much more difficult to understand (but more fun if you're into that sort of thing).

The key to understanding scaffold design is to understand the event-driven nature of Questions Three. Scaffolds are responsible for handling exceptions and publishing the test lifecycle events: suite started, erred, and ended; test started, failed, erred, skipped, and ended.

Reporters

In Questions Three, "reporter" is a broad term for an object that listens for an event, converts it to a message useful to someone or something, and sends the message. Built-in reporters do relatively dull things like sending events to the system log and producing the Junit XML report, but there is no reason you couldn't build a more interesting reporter that launches a Styrofoam missile at the developer who broke the build.

Built-in reporters

Name              Events it subscribes to            What it does
Artifact Saver    ARTIFACT_CREATED, REPORT_CREATED   Saves artifacts to the local filesystem.
Event Logger      All test lifecycle events          Sends messages to the system log.
Junit Reporter    All test lifecycle events          Builds Junit XML reports and publishes them as REPORT_CREATED events.
Result Compiler   All test lifecycle events          Reports how many tests ran, failed, etc., and how long they took. Publishes SUITE_RESULTS_COMPILED after SUITE_ENDED.

Custom reporters

A reporter can do anything you dream up and express as Python code. That includes interacting with external services and physical objects. Think "when this occurs during a test run, I want that to happen." For example, "When the suite results are compiled and contain a failure, I want a Slack message sent to the channel where the developers hang out."

Building a custom reporter

Result Compiler provides a simple example to follow. You don't have to copy the pattern it establishes, but it's an easy way to start. The ResultCompiler class has one method for each event to which it subscribes. Each method is named after the event (e.g. on_suite_started). These method names are magic. The imported subscribe_event_handlers function recognizes the names and subscribes each method to its respective event. The activate method is mandatory. The scaffold calls it before the suite starts. activate performs any initialization, most importantly subscribing to the events.
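
The magic-name convention can be sketched in a few lines. Everything here (DemoBroker, the subscribe function, the reporter class) is a toy stand-in for illustration, not the library's actual code:

```python
class DemoBroker:
    """Toy stand-in for the Event Broker."""

    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, event, handler):
        self.subscriptions.setdefault(event, []).append(handler)


def subscribe_by_method_name(obj, broker):
    # Sketch of the convention described above: each method named
    # on_<event> gets subscribed to the event <EVENT>.
    for name in dir(obj):
        if name.startswith('on_'):
            event = name[3:].upper()  # on_suite_started -> SUITE_STARTED
            broker.subscribe(event, getattr(obj, name))


class DemoReporter:
    def __init__(self):
        self.seen = []

    def activate(self, broker):
        # Initialization, most importantly subscribing to events
        subscribe_by_method_name(self, broker)

    def on_suite_started(self, **kwargs):
        self.seen.append('suite started')


broker = DemoBroker()
reporter = DemoReporter()
reporter.activate(broker)
for handler in broker.subscriptions['SUITE_STARTED']:
    handler()
assert reporter.seen == ['suite started']
```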

Installing a custom reporter

  1. Ensure that the package containing your reporter is installed.
  2. Create a text file that contains the name of the reporter class, including its module (e.g. my_awesome_reporters.information_radiators.LavaLamp). This file can contain as many reporters as you would like, one per line.
  3. Set the environment variable CUSTOM_REPORTERS_FILE to the full path and filename of your text file.

Event Broker

The Event Broker is Questions Three's beating heart. It's how the components communicate with one another. If you're not in the business of building custom components and plugging them in, you won't need to think about the Event Broker. If you are, it's all you'll need to think about.

The Event Broker is little more than a simple implementation of the Publish/Subscribe Pattern. Component A subscribes to an event by registering a function with the Event Broker. Component B publishes the event with an optional dictionary of arbitrary properties. The Event Broker calls the subscriber function, passing it the dictionary as keyword arguments.
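
The flow can be sketched in plain Python. This is a minimal generic pub/sub illustration, not the actual EventBroker class:

```python
from collections import defaultdict

class SimpleBroker:
    """Minimal publish/subscribe sketch (illustration only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event, func):
        self._subscribers[event].append(func)

    def publish(self, event, **properties):
        # Subscribers receive the property dictionary as keyword arguments
        for func in self._subscribers[event]:
            func(**properties)

broker = SimpleBroker()
received = []
broker.subscribe('TEST_FAILED', lambda **props: received.append(props))
broker.publish('TEST_FAILED', test_name='demo', exception=None)

assert received == [{'test_name': 'demo', 'exception': None}]
```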

An event can be any object. Internally, Questions Three limits itself to members of an enum called TestEvent. It's defined in questions_three.constants.

An event property can also be any object. Property names are restricted to valid Python variable names so the Event Broker can send them as keyword arguments.

HTTP Client

The HTTP client is a wrapper around the widely-used requests module, so it can serve as a drop-in replacement. Its job in life is to integrate requests into the event-driven world of Questions Three, doing things like publishing an HTTP transcript when a check fails. It also adds a few features that you can use. Nearly all of the documentation for requests applies to HttpClient as well. There are two deviations, one significant and one somewhat obscure.

Deviation 1: HTTP Client raises an exception when it encounters an exceptional status code

When the HTTP server returns an exceptional status code (anything in the 400 - 599 range), requests simply places the status code in the response as it always does and expects you to detect it. HTTP Client, by contrast, detects the status for you and raises an HttpError. There is an HttpError subclass for each exceptional status code defined by RFC 7231 (plus one from RFC 2324 just for fun), so you can be very specific with your except blocks. For example:

from questions_three.exceptions.http_error import HttpImATeapot, HttpNotFound, HttpServerError
from questions_three.http_client import HttpClient

try:
    response = HttpClient().get('http://example.com/')  # URL is illustrative
except HttpImATeapot as e:
    # This will catch a 418 only
    # e.response is the requests.Response object returned by the request
    pass
except HttpNotFound:
    # This will catch a 404 only
    pass
except HttpServerError:
    # This will catch anything in the 500-599 range
    pass

Deviation 2: json is not allowed as a keyword argument

requests allows you to write this:

requests.post('', json=['spam', 'eggs', 'sausage', 'spam'])

HTTP Client does not support this syntax because it interferes with transcript generation. Instead, write this:

HttpClient().post('', data=json.dumps(['spam', 'eggs', 'sausage', 'spam']))

New feature: simplified cookie management

Instead of creating a requests.Session, you can simply do this:

client = HttpClient()
client.enable_cookies()

The client will now save cookies sent to it by the server and return them to the server with each request.

New feature: persistent request headers

This is particularly useful for maintaining an authenticated session:

client = HttpClient()
client.set_persistent_headers(session_id='some fake id', secret_username='bob')

The client will now send the specified headers to the server with each request.

New feature: callbacks for exceptional HTTP responses

Instead of putting each request into its own try/except block, you can install a generic exception handler as a callback:

def on_not_found(exception):
    mother_ship.beam_up(exception, exception.response.text)

client = HttpClient()
client.set_exceptional_response_callback(exception_class=HttpNotFound, callback=on_not_found)
client.get('http://example.com/not/a/real/path')  # hypothetical URL that yields a 404

In the example above, the server will respond to the GET request with an HTTP 404 (Not Found) response. The client will notice that it has a callback for the HttpNotFound exception, so will call on_not_found with the HttpNotFound exception as the exception keyword argument.

If a callback returns None (as in the example above), the client will re-raise the exception after it processes the callback. If the callback returns anything else, the client will return whatever the callback returns. Please observe the Principle of Least Astonishment and have your callback return either None or else an HttpResponse object.
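
The dispatch rule boils down to a few lines, sketched here with a fake exception class. This is an illustration of the described semantics, not the client's actual code:

```python
def dispatch_callback(exception, callback):
    # None means re-raise after processing; any other return value is
    # handed back to the caller in place of a response.
    result = callback(exception)
    if result is None:
        raise exception
    return result


class FakeHttpNotFound(Exception):
    pass


err = FakeHttpNotFound('404')

reraised = False
try:
    dispatch_callback(err, lambda e: None)
except FakeHttpNotFound:
    reraised = True

assert reraised
assert dispatch_callback(err, lambda e: 'substitute response') == 'substitute response'
```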

Installed callbacks will apply to child exception classes as well, so a callback for HttpClientError will be called if the server returns an HttpNotFound response (because HttpClientError is the set of all 4xx responses and HttpNotFound is 404).

You can install as many callbacks as you would like, with one important restriction. You may not install a parent class or a child class of an exception that already has an associated callback. For example, you may install both HttpNotFound and HttpUnauthorized, but you may not install both HttpNotFound and HttpClientError because HttpClientError is a parent class of HttpNotFound.

See questions_three/exceptions/ for complete details of the HttpError class hierarchy. It follows the classification scheme specified in RFC 7231.
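
The parent/child behavior follows directly from ordinary Python exception inheritance. The classes below are redefined locally for illustration (they mirror the naming pattern above but are not imports from the library):

```python
class HttpError(Exception):
    pass

class HttpClientError(HttpError):      # any 4xx status
    pass

class HttpNotFound(HttpClientError):   # 404 specifically
    pass

caught_by = None
try:
    raise HttpNotFound('resource missing')
except HttpClientError:                # the parent class catches the child
    caught_by = 'HttpClientError handler'

assert caught_by == 'HttpClientError handler'
assert issubclass(HttpNotFound, HttpClientError)
```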

Tuning with environment variables

HTTP_PROXY This is a well-established environment variable. Set it to the URL of your proxy for plain HTTP requests.

HTTPS_PROXY As above. Set this to the URL of your proxy for secure HTTP requests.

HTTPS_VERIFY_CERTS Set this to "false" to disable verification of X.509 certificates.

HTTP_CLIENT_SOCKET_TIMEOUT Stop waiting for an HTTP response after this number of seconds.

GraphQL Client

The GraphQL Client is a wrapper around the HTTP Client that provides a simple way of making and handling requests against a GraphQL endpoint. Since the HTTP Client does all the heavy lifting, the GraphQL Client adds only a few custom behaviors.

Using the GraphQL Client

# The import paths here are assumptions; check the package for exact locations.
from questions_three.graphql import GraphqlClient
from questions_three.http_client import HttpClient

client = HttpClient()  # This is where you would authenticate, if needed
graphql_client = GraphqlClient(http_client=client, url='')

your_important_query = """
    query {
        someField
    }
"""
response = graphql_client.execute(your_important_query)
# "someField" is a placeholder; substitute a field from your own schema
execute is a neutral method that makes POST requests against your GraphQL endpoint for either queries or mutations. The first argument of execute is always the operation that you are trying to perform, and any keyword arguments afterward become your query variables.

your_important_query = """
    query ($id: String!) {
        user(id: $id) { name }    # "user" and "name" are placeholder fields
    }
"""
graphql_client.execute(your_important_query, id='1234')

When a request completes without an HTTP error, it returns a GraphqlResponse object; if the response contains GraphQL errors, an OperationFailed exception is raised instead.

  • GraphqlResponse objects have the following:
    • .http_response property: The requests.Response object returned by the HTTP Client
    • .data property: The JSON representation of your response
    • .data_as_structure property: The Structure object representation of your response
  • OperationFailed exceptions have the following:
    • .http_response property: The requests.Response object returned by the HTTP Client
    • .data property: The JSON representation of your (successful parts of the) response
    • .errors property: The JSON representation of the errors included in your response
    • .operation property: The query or mutation sent in your request
    • .operation_variables property: The variables sent in your request
    • When raised, the exception message will include the error strings, or the entire error collection

Logging Subsystem

Questions Three extends Python's logging system to do various things internally that won't matter to most users. However, there's one feature that may be of interest. You can customize how verbose/noisy any given module will be. Most common targets are event_broker when you want to see all the events passing through and http_client when you want excruciating detail about every request and response.


Verbosity control works with any Questions Three module and any log level defined in the Fine Python Manual.

You can make it work with your custom components too:

from questions_three.logging import logger_for_module

log = logger_for_module(__name__)'I feel happy!')

Vanilla Functions

You'll find these in questions_three.vanilla. The unit tests under tests/vanilla provide examples of their use.

b16encode() Base 16 encode a string. This is basically a hex dump.

call_with_exception_tolerance() Execute a function. If it raises a specific exception, wait for a given number of seconds and try again, up to a given timeout.

format_exception() Convert an exception to a human-friendly message and stack trace.

path_to_entry_script() Return the full path and filename of the script that was called from the command line.

random_base36_string() Return a random string of a given length. Useful for generating bogus test data.

string_of_sequential_characters() Return a string of letters and numbers in alphabetical order. Useful for generating bogus but deterministic test data.

url_append() Replacement for urljoin that does not eliminate characters when slashes are present but does join an arbitrary number of parts.

wait_for() Pauses until the given function returns a truthy value and returns the value. Includes a throttle and a timeout.
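
A wait_for-style helper can be sketched as follows. This is an illustrative implementation of the described behavior (poll, throttle, timeout), not the library's own code, and the parameter names are assumptions:

```python
import time

def wait_for_sketch(func, timeout=5, throttle=0.1):
    # Poll func until it returns a truthy value, pausing `throttle`
    # seconds between attempts and giving up after `timeout` seconds.
    deadline = time.monotonic() + timeout
    while True:
        value = func()
        if value:
            return value
        if time.monotonic() >= deadline:
            raise TimeoutError('Condition not met within %s seconds' % timeout)
        time.sleep(throttle)

counter = {'n': 0}

def ready():
    counter['n'] += 1
    return counter['n'] >= 3 and 'done'

assert wait_for_sketch(ready, timeout=2, throttle=0.01) == 'done'
```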

Bulk Suite Runner

To run all suites in any directory below ./my_checks:

python -m my_checks

Controlling execution with environment variables

MAX_PARALLEL_SUITES Run up to this number of suites in parallel. Default is 1 (serial execution).

REPORTS_PATH Put reports and other artifacts in this directory. Default: ./reports

RUN_ALL_TIMEOUT After this number of seconds, terminate all running suites and return a non-zero exit code.

TEST_RUN_ID Attach this arbitrary string as a property to all events. This allows reporters to discriminate one test run from another.

Understanding events and reports

(or the philosophy of errors, failures, and warnings)

Questions Three follows the convention set by JUnit and draws an important distinction between error events and failure events. This distinction flows from the scaffolds to the Event Broker to the reports.

A failure event occurs when a check makes a false assertion. The simplest way to trigger one is assert False which Python converts to an AssertionError which the scaffold converts to a TEST_FAILED event. The intent of the system is to produce a failure event only when there is high confidence that there is a fault in the system under test.

An error event occurs when some other exception gets raised (or, for whatever batty reason, something other than a check raises an AssertionError). Depending on the context from which the exception was raised, the scaffold will convert it into a SUITE_ERRED or a TEST_ERRED event. In theory, an error event should indicate a fault in the check. In practice, the fault could be anywhere, especially if the system under test behaves in unanticipated ways.

Because of the expectation that failure events indicate faults in the system under test and error events indicate faults in the checks, the Event Logger reports failure events as warnings and error events as errors. The warning indicates that the check did its job perfectly and the fault was somewhere else. The error indicates that the fault is in the check. Of course, real life is not so clean.

Because the distinction originated from the JUnit world, Junit XML Reporter has no need to perform any interpretation. It reports failure events as failures and error events as errors.
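
At its core, the classification described above is a single isinstance check, sketched here for illustration (this is not the scaffolds' actual code):

```python
def classify_exception_sketch(exc):
    # AssertionError -> failure event (likely a fault in the system under test);
    # anything else  -> error event (likely a fault in the check or its environment)
    if isinstance(exc, AssertionError):
        return 'TEST_FAILED'
    return 'TEST_ERRED'

assert classify_exception_sketch(AssertionError('expected 4, got 5')) == 'TEST_FAILED'
assert classify_exception_sketch(RuntimeError('connection refused')) == 'TEST_ERRED'
```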
