
Zope testing framework, including the testrunner script.


************
zope.testing
************

.. contents::

This package provides a number of testing frameworks. It includes a
flexible test runner, and supports both doctest and unittest.

cleanup.py
   Provides a mixin class for cleaning up after tests that
   make global changes.

doctest.py
   Enhanced version of Python's standard doctest.py.
   Better test count (one per block instead of one per docstring).
   See doctest.txt.

   (We need to merge this with the standard doctest module.)

doctestunit.py
   Provides a pprint function that always sorts dictionary entries
   (pprint.pprint from the standard library doesn't sort very short ones,
   sometimes causing test failures when the internal order changes).

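The stable-output idea behind doctestunit's pprint can be sketched with the standard library; ``stable_format`` below is an illustrative stand-in, not the package's actual code:

```python
import pprint

def stable_format(obj):
    """Format mappings with sorted keys so doctest output does not
    depend on the dictionary's internal ordering (an illustrative
    stand-in for a sorting pprint; not doctestunit's real code)."""
    if isinstance(obj, dict):
        items = ', '.join('%r: %r' % (k, obj[k]) for k in sorted(obj))
        return '{%s}' % items
    return pprint.pformat(obj)

# Both orderings render identically, so doctest comparisons stay stable:
print(stable_format({'b': 2, 'a': 1}))
print(stable_format({'a': 1, 'b': 2}))
```
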
formparser.py
   An HTML parser that extracts form information.

   This is intended to support functional tests that need to extract
   information from HTML forms returned by the publisher.

   See formparser.txt.

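The core idea - walking the HTML and collecting each form field's name and value - can be sketched with the standard library's html.parser; ``FormFieldParser`` is illustrative, not formparser's actual API:

```python
from html.parser import HTMLParser

class FormFieldParser(HTMLParser):
    """Collect name/value pairs from <input> elements inside <form> tags
    (an illustrative sketch, not zope.testing.formparser's real API)."""

    def __init__(self):
        super().__init__()
        self.in_form = False
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'form':
            self.in_form = True
        elif tag == 'input' and self.in_form:
            name = attrs.get('name')
            if name is not None:
                self.fields[name] = attrs.get('value', '')

    def handle_endtag(self, tag):
        if tag == 'form':
            self.in_form = False

parser = FormFieldParser()
parser.feed('<form action="/search">'
            '<input name="q" value="zope" />'
            '<input name="page" value="2" />'
            '</form>')
```

After feeding the document, ``parser.fields`` maps each field name to its value.
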
loggingsupport.py
   Support for testing logging code.

   If you want to test that your code generates proper log output, you
   can create and install a handler that collects output.

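This is what zope.testing.loggingsupport's InstalledHandler does (it appears again in the testrunner documentation below). The mechanism can be sketched with the stdlib alone; ``CollectingHandler`` and the ``myapp`` logger name are illustrative:

```python
import logging

class CollectingHandler(logging.Handler):
    """Install on a named logger and collect every LogRecord
    (an illustrative stand-in for loggingsupport.InstalledHandler)."""

    def __init__(self, name):
        super().__init__()
        self.records = []
        self.logger = logging.getLogger(name)
        self.logger.addHandler(self)
        self.logger.setLevel(logging.DEBUG)

    def emit(self, record):
        # Keep the raw record so tests can inspect level, message, etc.
        self.records.append(record)

    def uninstall(self):
        self.logger.removeHandler(self)

handler = CollectingHandler('myapp')
logging.getLogger('myapp').warning('disk almost full')
messages = [r.getMessage() for r in handler.records]
handler.uninstall()
```

A test can then assert on ``messages`` instead of parsing captured stderr.
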
loghandler.py
   Logging handler for tests that check logging output.

module.py
   Lets a doctest pretend to be a Python module.

   See module.txt.

renormalizing.py
   An output checker that normalizes output using regular expression
   patterns. Useful for doctests.

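The mechanism can be sketched as follows. ``Normalizer`` is an illustrative stand-in for renormalizing.RENormalizing, which takes (compiled regex, replacement) pairs and applies them to both expected and actual output before comparing; the ``+`` combination mirrors the behavior noted in the 3.8.4 changelog entry below:

```python
import re

class Normalizer:
    """Apply (pattern, replacement) pairs to both sides of a comparison
    (an illustrative stand-in for renormalizing.RENormalizing)."""

    def __init__(self, transforms):
        self.transforms = list(transforms)

    def __add__(self, other):
        # Combining two checkers concatenates their transformations.
        return Normalizer(self.transforms + other.transforms)

    def normalize(self, text):
        for pattern, repl in self.transforms:
            text = pattern.sub(repl, text)
        return text

    def check_output(self, want, got):
        return self.normalize(want) == self.normalize(got)

# Hide timings and memory addresses, the classic doctest troublemakers:
times = Normalizer([(re.compile(r'\d+\.\d+ seconds'), 'N.NNN seconds')])
addresses = Normalizer([(re.compile(r'0x[0-9a-fA-F]+'), '0xNNN')])
checker = times + addresses
```

With this, ``checker.check_output('Set up in N.NNN seconds.', 'Set up in 0.017 seconds.')`` is true even though the literal strings differ.
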
server.py
   Provides a simple HTTP server compatible with the zope.app.testing
   functional testing API. Lets you interactively play with the system
   under test. Helpful in debugging functional doctest failures.

setupstack.py
   A simple framework for automating doctest set-up and tear-down.
   See setupstack.txt.

testrunner
   The test runner package. This is typically wrapped by a test.py script
   that sets up options to run a particular set of tests.


Getting started
***************

zope.testing uses buildout. To start, run ``python bootstrap.py``. It will
create a number of directories and the ``bin/buildout`` script. Next, run
``bin/buildout``. It will create a test script for you. Now, run ``bin/test``
to run the zope.testing test suite.

zope.testing Changelog
**********************

3.8.7 (2010-01-26)
==================

- Downgraded the zope.testing.doctest deprecation warning into a
PendingDeprecationWarning.


3.8.6 (2009-12-23)
==================

- Added MANIFEST.in and reuploaded to fix broken 3.8.5 release on PyPI.


3.8.5 (2009-12-23)
==================

- Added DocFileSuite, DocTestSuite, debug_src and debug BBB imports back
into zope.testing.doctestunit; apparently many packages still import
them from there!

- Made zope.testing.doctest and zope.testing.doctestunit emit deprecation
warnings: use the stdlib doctest instead.


3.8.4 (2009-12-18)
==================

- Fixed missing imports and undefined variables reported by pyflakes,
adding tests to exercise the blind spots.

- Cleaned up unused imports reported by pyflakes.

- Added two new options to generate randomly ordered list of tests and to
select a specific order of tests.

- RENormalizing checkers can be combined via ``+`` now:
``checker1 + checker2`` creates a checker with the transformations of both
checkers.

- Test fixes for Python 2.7.

3.8.3 (2009-09-21)
==================

- Avoid a split() call that caused test failures when running from a
directory with spaces in it.

- Fix testrunner behavior on Windows for -j2 (or greater) combined with -v
(or greater).

3.8.2 (2009-09-15)
==================

- Removed the hotshot profiler when using Python 2.6, making zope.testing
compatible with Python 2.6.


3.8.1 (2009-08-12)
==================

- Avoid hardcoding sys.argv[0] as script;
allow, for instance, Zope 2's `bin/instance test` (LP#407916).

- Produce a clear error message when a subprocess doesn't follow the
zope.testing.testrunner protocol (LP#407916).

- Do not unnecessarily squelch verbose output in a subprocess when there are
not multiple subprocesses.

- Do not unnecessarily batch subprocess output, which can stymie automated and
human processes for identifying hung tests.

- Include incremental output when there are multiple subprocesses and a
verbosity of -vv or greater is requested. This again is not batched,
supporting automated processes and humans looking for hung tests.


3.8.0 (2009-07-24)
==================

- Testrunner automatically picks up descendants of unittest.TestCase in test
modules, so you don't have to provide a test_suite() anymore.


3.7.7 (2009-07-15)
==================

- Clean up support for displaying tracebacks with supplements by turning it
into an always-enabled feature and making the dependency on zope.exceptions
explicit.

- Fix #251759: Test runner descended into directories that aren't Python
packages.

- Code cleanups.


3.7.6 (2009-07-02)
==================

- Add zope-testrunner console_scripts entry point. This exposes a
zope-testrunner script in default installs, allowing the testrunner to
be run from the command line.

3.7.5 (2009-06-08)
==================

- Fix bug when running subprocesses on Windows.

- The option REPORT_ONLY_FIRST_FAILURE (command line option "-1") is now
respected even when a doctest declares its own REPORTING_FLAGS, such as
REPORT_NDIFF.

- Fixed bug that broke readline with pdb when using doctest
(see http://bugs.python.org/issue5727).

- Made tests pass on Windows and Linux at the same time.


3.7.4 (2009-05-01)
==================

- Filenames of doctest examples now contain the line number and not
only the example number. So a stack trace in pdb tells the exact
line number of the current example. This fixes
https://bugs.launchpad.net/bugs/339813

- Colorization of doctest output correctly handles blank lines.


3.7.3 (2009-04-22)
==================

- Better deal with rogue threads by always exiting with a status code, so
even spinning daemon threads won't block the runner from exiting. This
deprecates the ``--with-exit-status`` option.


3.7.2 (2009-04-13)
==================

- Fixed a test failure on Python 2.4 caused by a slight difference in the
way coverage is reported (__init__ files with only a single comment line
are now not reported).
- Fixed a bug that caused the test runner to hang when running subprocesses
(as a result, Python 2.3 is no longer supported).
- There is apparently a bug in Python 2.6 (related to
http://bugs.python.org/issue1303673) that causes the profile tests to fail.
- Added explanatory notes to buildout.cfg about how to run the tests with
multiple versions of Python.


3.7.1 (2008-10-17)
==================

- The setupstack temporary-directory support now properly handles
read-only files by making them writable before removing them.


3.7.0 (2008-09-22)
==================

- Added alternate setuptools / distutils commands for running all tests
using our testrunner. See 'zope.testing.testrunner.eggsupport:ftest'.

- Added a setuptools-compatible test loader which skips tests with layers:
the testrunner used by 'setup.py test' doesn't know about them, and those
tests then fail. See 'zope.testing.testrunner.eggsupport:SkipLayers'.

- Added support for Jython, when a garbage collector call is sent.

- Added support to bootstrap on Jython.

- Fixed NameError in StartUpFailure.

- Open doctest files in universal newlines mode, so that packages released
on Windows can be tested on Linux, for example.


3.6.0 (2008/07/10)
==================

- Added -j option to run tests in parallel in subprocesses.

- RENormalizer accepts plain Python callables.

- Added --slow-test option.

- Added --no-progress and --auto-progress options.

- Complete refactoring of the test runner into multiple code files and a more
modular (pipeline-like) architecture.

- Unified unit tests with the layer support by introducing a real unit test
layer.

- Added a doctest for ``zope.testing.module``. There were several bugs
that were fixed:

* ``README.txt`` was a really bad default argument for the module
name, as it is not a proper dotted name. The code would
immediately fail as it would look for the ``txt`` module in the
``README`` package. The default is now ``__main__``.

* The tearDown function did not clean up the ``__name__`` entry in the
global dictionary.

- Fix a bug that caused a SubprocessError to be generated if a subprocess
sent any output to stderr.

- Fix a bug that caused the unit tests to be skipped if run in a subprocess.


3.5.1 (2007/08/14)
==================

Bugs Fixed:
-----------

- Post-mortem debugging wasn't invoked for layer-setup failures.

3.5.0 (2007/07/19)
==================

New Features
------------

- The test runner now works on Python 2.5.

- Added support for cProfile.

- Added output colorizing (-c option).

- Added --hide-secondary-failures and --show-secondary-failures options
(https://bugs.launchpad.net/zope3/+bug/115454).

Bugs Fixed:
-----------

- Fix some problems with Unicode in doctests.

- Fix "Error reading from subprocess" errors on Unix-like systems.

3.4 (2007/03/29)
================

New Features
------------

- Added exit-with-status support (supports use with buildbot and
zc.recipe.testing)

- Added a small framework for automating set up and tear down of
doctest tests. See setupstack.txt.

Bugs Fixed:
-----------

- Fix testrunner-wo-source.txt and testrunner-errors.txt to run with a
read-only source tree.

3.0 (2006/09/20)
================

- Updated the doctest copy with text-file encoding support.

- Added logging-level support to the loggingsupport module.

- At verbosity-level 1, dots are now output continuously, without any
line breaks.

- Improved output when the inability to tear down a layer causes tests
to be run in a subprocess.

- Made zope.exception required only if the zope_tracebacks extra is
requested.

2.x.y (???)
===========

- Fix the test coverage. If a module, for example `interfaces`, was in an
ignored directory/package, then if a module of the same name existed in a
covered directory/package, then it was also ignored there, because the
ignore cache stored the result by module name and not the filename of the
module.

2.0 (2006/01/05)
================

- Corresponds to the version of the zope.testing package shipped as part of
the Zope 3.2.0 release.

Detailed Documentation
**********************

Test Runner
===========

The testrunner module is used to run automated tests defined using the
unittest framework. Its primary feature is that it *finds* tests by
searching directory trees. It doesn't require the manual
concatenation of specific test suites. It is highly customizable and
should be usable with any project. In addition to finding and running
tests, it provides the following additional features:

- Test filtering using specifications of:

o test packages within a larger tree

o regular expression patterns for test modules

o regular expression patterns for individual tests

- Organization of tests into levels and layers

Sometimes, tests take so long to run that you don't want to run them
on every run of the test runner. Tests can be defined at different
levels. The test runner can be configured to only run tests at a
specific level or below by default. Command-line options can be
used to specify a minimum level to use for a specific run, or to run
all tests. Individual tests or test suites can specify their level
via a 'level' attribute, where levels are integers increasing from 1.

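A minimal sketch of how a test opts into a higher level - the test case simply grows a class attribute (the class names and level value here are illustrative):

```python
import unittest

class FastTests(unittest.TestCase):
    # No ``level`` attribute: treated as level 1 and run on every
    # invocation of the test runner.
    def test_quick(self):
        self.assertEqual(1 + 1, 2)

class SlowIntegrationTests(unittest.TestCase):
    # Only run when the runner is asked for level 3 or higher,
    # or when it is asked to run all tests.
    level = 3
    def test_expensive(self):
        self.assertEqual(sum(range(100)), 4950)
```
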
Most tests are unit tests. They either don't depend on other facilities,
or they set up whatever dependencies they have. For larger applications,
it's useful to specify common facilities that a large number of
tests share. Making each test set up and tear down these
facilities is both inefficient and inconvenient. For this reason,
we've introduced the concept of layers, based on the idea of layered
application architectures. Software built for a layer should be
able to depend on the facilities of lower layers already being set
up. For example, Zope defines a component architecture. Much Zope
software depends on that architecture. We should be able to treat
the component architecture as a layer that we set up once and reuse.
Similarly, Zope application software should be able to depend on the
Zope application server without having to set it up in each test.

The test runner introduces test layers, which are objects that can
set up environments for tests within the layers to use. A layer is
set up before running the tests in it. Individual tests or test
suites can define a layer by defining a `layer` attribute, which is
a test layer.

- Reporting

- progress meter

- summaries of tests run

- Analysis of test execution

- post-mortem debugging of test failures

- memory leaks

- code coverage

- source analysis using pychecker

- memory errors

- execution times

- profiling

Simple Usage
============

The test runner consists of an importable module. It is used by
providing scripts that import and invoke the `run` method from the
module. The `testrunner` module is controlled via command-line
options. Test scripts establish base and default options by supplying a
list of default command-line options that are processed before the
user-supplied command-line options.

Typically, a test script does two things:

- Adds the directory containing the zope package to the Python
path.

- Calls the test runner with default arguments and arguments supplied
to the script.

Normally, it just passes default/setup arguments. The test runner
uses `sys.argv` to get the user's input.

The testrunner-ex subdirectory contains a number of sample packages
with tests. Let's run the tests found here. First though, we'll set
up our default options:

>>> import os.path
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]

The default options are used by a script to customize the test runner
for a particular application. In this case, we use two options:

path
   Set the path where the test runner should look for tests. This path
   is also added to the Python path.

tests-pattern
   Tell the test runner how to recognize modules or packages containing
   tests.

Now, if we run the tests, without any other options:

>>> from zope.testing import testrunner
>>> import sys
>>> sys.argv = ['test']
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
Set up samplelayers.Layer1 in N.NNN seconds.
Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer11 tests:
Set up samplelayers.Layer11 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in N.NNN seconds.
Set up samplelayers.Layer111 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer112 tests:
Tear down samplelayers.Layer111 in N.NNN seconds.
Set up samplelayers.Layer112 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer12 tests:
Tear down samplelayers.Layer112 in N.NNN seconds.
Tear down samplelayers.Layerx in N.NNN seconds.
Tear down samplelayers.Layer11 in N.NNN seconds.
Set up samplelayers.Layer12 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer121 tests:
Set up samplelayers.Layer121 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer122 tests:
Tear down samplelayers.Layer121 in N.NNN seconds.
Set up samplelayers.Layer122 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
Tear down samplelayers.Layer122 in N.NNN seconds.
Tear down samplelayers.Layer12 in N.NNN seconds.
Tear down samplelayers.Layer1 in N.NNN seconds.
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
False

we see the normal testrunner output, which summarizes the tests run for
each layer. For each layer, we see what layers had to be torn down or
set up to run the layer and we see the number of tests run, with
results.

The test runner returns a boolean indicating whether there were
errors. In this example, there were no errors, so it returned False.

(Of course, the times shown in these examples are just examples.
Times will vary depending on system speed.)

Layers
======

A Layer is an object providing setup and teardown methods used to set up
and tear down the environment provided by the layer. It may also provide
setup and teardown methods used to reset the environment provided by the
layer between each test.

Layers are generally implemented as classes using class methods.

>>> class BaseLayer:
...     def setUp(cls):
...         log('BaseLayer.setUp')
...     setUp = classmethod(setUp)
...
...     def tearDown(cls):
...         log('BaseLayer.tearDown')
...     tearDown = classmethod(tearDown)
...
...     def testSetUp(cls):
...         log('BaseLayer.testSetUp')
...     testSetUp = classmethod(testSetUp)
...
...     def testTearDown(cls):
...         log('BaseLayer.testTearDown')
...     testTearDown = classmethod(testTearDown)
...

Layers can extend other layers. Note that they do not explicitly
invoke the setup and teardown methods of other layers - the test runner
does this for us in order to minimize the number of invocations.

>>> class TopLayer(BaseLayer):
...     def setUp(cls):
...         log('TopLayer.setUp')
...     setUp = classmethod(setUp)
...
...     def tearDown(cls):
...         log('TopLayer.tearDown')
...     tearDown = classmethod(tearDown)
...
...     def testSetUp(cls):
...         log('TopLayer.testSetUp')
...     testSetUp = classmethod(testSetUp)
...
...     def testTearDown(cls):
...         log('TopLayer.testTearDown')
...     testTearDown = classmethod(testTearDown)
...

Tests or test suites specify what layer they need by storing a reference
in the 'layer' attribute.

>>> import unittest
>>> class TestSpecifyingBaseLayer(unittest.TestCase):
...     'This TestCase explicitly specifies its layer'
...     layer = BaseLayer
...     name = 'TestSpecifyingBaseLayer' # For testing only
...
...     def setUp(self):
...         log('TestSpecifyingBaseLayer.setUp')
...
...     def tearDown(self):
...         log('TestSpecifyingBaseLayer.tearDown')
...
...     def test1(self):
...         log('TestSpecifyingBaseLayer.test1')
...
...     def test2(self):
...         log('TestSpecifyingBaseLayer.test2')
...
>>> class TestSpecifyingNoLayer(unittest.TestCase):
...     'This TestCase specifies no layer'
...     name = 'TestSpecifyingNoLayer' # For testing only
...     def setUp(self):
...         log('TestSpecifyingNoLayer.setUp')
...
...     def tearDown(self):
...         log('TestSpecifyingNoLayer.tearDown')
...
...     def test1(self):
...         log('TestSpecifyingNoLayer.test')
...
...     def test2(self):
...         log('TestSpecifyingNoLayer.test')
...

Create a TestSuite containing two test suites, one for each of
TestSpecifyingBaseLayer and TestSpecifyingNoLayer.

>>> umbrella_suite = unittest.TestSuite()
>>> umbrella_suite.addTest(unittest.makeSuite(TestSpecifyingBaseLayer))
>>> no_layer_suite = unittest.makeSuite(TestSpecifyingNoLayer)
>>> umbrella_suite.addTest(no_layer_suite)

Before we can run the tests, we need to set up some helpers.

>>> from zope.testing.testrunner import options
>>> from zope.testing.loggingsupport import InstalledHandler
>>> import logging
>>> log_handler = InstalledHandler('zope.testing.tests')
>>> def log(msg):
...     logging.getLogger('zope.testing.tests').info(msg)
>>> def fresh_options():
...     opts = options.get_options(['--test-filter', '.*'])
...     opts.resume_layer = None
...     opts.resume_number = 0
...     return opts

Now we run the tests. Note that the BaseLayer was not set up when
the TestSpecifyingNoLayer tests were run, and was set up and torn down
around the TestSpecifyingBaseLayer tests.

>>> from zope.testing.testrunner.runner import Runner
>>> runner = Runner(options=fresh_options(), args=[],
...                 found_suites=[umbrella_suite])
>>> succeeded = runner.run()
Running BaseLayer tests:
Set up BaseLayer in N.NNN seconds.
Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
Tear down BaseLayer in N.NNN seconds.
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Total: 4 tests, 0 failures, 0 errors in N.NNN seconds.


Now let's specify a layer in the suite containing TestSpecifyingNoLayer
and run the tests again. This demonstrates the other method of specifying
a layer. This is generally how you specify what layer doctests need.

>>> no_layer_suite.layer = BaseLayer
>>> runner = Runner(options=fresh_options(), args=[],
...                 found_suites=[umbrella_suite])
>>> succeeded = runner.run()
Running BaseLayer tests:
Set up BaseLayer in N.NNN seconds.
Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down BaseLayer in N.NNN seconds.

Clear our logged output, as we want to inspect it shortly.

>>> log_handler.clear()

Now let's also specify a layer in the TestSpecifyingNoLayer class and rerun
the tests. This demonstrates that the most specific layer is used. It also
shows the behavior of nested layers - because TopLayer extends BaseLayer,
both the BaseLayer and TopLayer environments are set up when the
TestSpecifyingNoLayer tests are run.

>>> TestSpecifyingNoLayer.layer = TopLayer
>>> runner = Runner(options=fresh_options(), args=[],
...                 found_suites=[umbrella_suite])
>>> succeeded = runner.run()
Running BaseLayer tests:
Set up BaseLayer in N.NNN seconds.
Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
Running TopLayer tests:
Set up TopLayer in N.NNN seconds.
Ran 2 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down TopLayer in N.NNN seconds.
Tear down BaseLayer in N.NNN seconds.
Total: 4 tests, 0 failures, 0 errors in N.NNN seconds.


If we inspect our trace of what methods got called in what order, we can
see that the layer setup and teardown methods only got called once. We can
also see that the layer's test setup and teardown methods got called for
each test using that layer in the right order.

>>> def report():
...     for record in log_handler.records:
...         print record.getMessage()
>>> report()
BaseLayer.setUp
BaseLayer.testSetUp
TestSpecifyingBaseLayer.setUp
TestSpecifyingBaseLayer.test1
TestSpecifyingBaseLayer.tearDown
BaseLayer.testTearDown
BaseLayer.testSetUp
TestSpecifyingBaseLayer.setUp
TestSpecifyingBaseLayer.test2
TestSpecifyingBaseLayer.tearDown
BaseLayer.testTearDown
TopLayer.setUp
BaseLayer.testSetUp
TopLayer.testSetUp
TestSpecifyingNoLayer.setUp
TestSpecifyingNoLayer.test
TestSpecifyingNoLayer.tearDown
TopLayer.testTearDown
BaseLayer.testTearDown
BaseLayer.testSetUp
TopLayer.testSetUp
TestSpecifyingNoLayer.setUp
TestSpecifyingNoLayer.test
TestSpecifyingNoLayer.tearDown
TopLayer.testTearDown
BaseLayer.testTearDown
TopLayer.tearDown
BaseLayer.tearDown

Now let's stack a few more layers to ensure that our setUp and tearDown
methods are called in the correct order.

>>> from zope.testing.testrunner.find import name_from_layer
>>> class A(object):
...     def setUp(cls):
...         log('%s.setUp' % name_from_layer(cls))
...     setUp = classmethod(setUp)
...
...     def tearDown(cls):
...         log('%s.tearDown' % name_from_layer(cls))
...     tearDown = classmethod(tearDown)
...
...     def testSetUp(cls):
...         log('%s.testSetUp' % name_from_layer(cls))
...     testSetUp = classmethod(testSetUp)
...
...     def testTearDown(cls):
...         log('%s.testTearDown' % name_from_layer(cls))
...     testTearDown = classmethod(testTearDown)
...
>>> class B(A): pass
>>> class C(B): pass
>>> class D(A): pass
>>> class E(D): pass
>>> class F(C,E): pass

>>> class DeepTest(unittest.TestCase):
...     layer = F
...     def test(self):
...         pass
>>> suite = unittest.makeSuite(DeepTest)
>>> log_handler.clear()
>>> runner = Runner(options=fresh_options(), args=[], found_suites=[suite])
>>> succeeded = runner.run()
Running F tests:
Set up A in N.NNN seconds.
Set up B in N.NNN seconds.
Set up C in N.NNN seconds.
Set up D in N.NNN seconds.
Set up E in N.NNN seconds.
Set up F in N.NNN seconds.
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down F in N.NNN seconds.
Tear down E in N.NNN seconds.
Tear down D in N.NNN seconds.
Tear down C in N.NNN seconds.
Tear down B in N.NNN seconds.
Tear down A in N.NNN seconds.


>>> report()
A.setUp
B.setUp
C.setUp
D.setUp
E.setUp
F.setUp
A.testSetUp
B.testSetUp
C.testSetUp
D.testSetUp
E.testSetUp
F.testSetUp
F.testTearDown
E.testTearDown
D.testTearDown
C.testTearDown
B.testTearDown
A.testTearDown
F.tearDown
E.tearDown
D.tearDown
C.tearDown
B.tearDown
A.tearDown

Layer Selection
===============

We can select which layers to run using the --layer option:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
...     '--path', directory_with_tests,
...     '--tests-pattern', '^sampletestsf?$',
...     ]

>>> sys.argv = 'test --layer 112 --layer Unit'.split()
>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer112 tests:
Set up samplelayers.Layerx in N.NNN seconds.
Set up samplelayers.Layer1 in N.NNN seconds.
Set up samplelayers.Layer11 in N.NNN seconds.
Set up samplelayers.Layer112 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
Tear down samplelayers.Layer112 in N.NNN seconds.
Tear down samplelayers.Layerx in N.NNN seconds.
Tear down samplelayers.Layer11 in N.NNN seconds.
Tear down samplelayers.Layer1 in N.NNN seconds.
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Total: 226 tests, 0 failures, 0 errors in N.NNN seconds.
False


We can also specify that we want to run only the unit tests:

>>> sys.argv = 'test -u'.split()
>>> testrunner.run_internal(defaults)
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


Or that we want to run all of the tests except for the unit tests:

>>> sys.argv = 'test -f'.split()
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
Set up samplelayers.Layer1 in N.NNN seconds.
Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer11 tests:
Set up samplelayers.Layer11 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in N.NNN seconds.
Set up samplelayers.Layer111 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer112 tests:
Tear down samplelayers.Layer111 in N.NNN seconds.
Set up samplelayers.Layer112 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer12 tests:
Tear down samplelayers.Layer112 in N.NNN seconds.
Tear down samplelayers.Layerx in N.NNN seconds.
Tear down samplelayers.Layer11 in N.NNN seconds.
Set up samplelayers.Layer12 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer121 tests:
Set up samplelayers.Layer121 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer122 tests:
Tear down samplelayers.Layer121 in N.NNN seconds.
Set up samplelayers.Layer122 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in N.NNN seconds.
Tear down samplelayers.Layer12 in N.NNN seconds.
Tear down samplelayers.Layer1 in N.NNN seconds.
Total: 213 tests, 0 failures, 0 errors in N.NNN seconds.
False

Or we can explicitly say that we want both unit and non-unit tests.

>>> sys.argv = 'test -uf'.split()
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
Set up samplelayers.Layer1 in N.NNN seconds.
Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer11 tests:
Set up samplelayers.Layer11 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in N.NNN seconds.
Set up samplelayers.Layer111 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer112 tests:
Tear down samplelayers.Layer111 in N.NNN seconds.
Set up samplelayers.Layer112 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer12 tests:
Tear down samplelayers.Layer112 in N.NNN seconds.
Tear down samplelayers.Layerx in N.NNN seconds.
Tear down samplelayers.Layer11 in N.NNN seconds.
Set up samplelayers.Layer12 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer121 tests:
Set up samplelayers.Layer121 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer122 tests:
Tear down samplelayers.Layer121 in N.NNN seconds.
Set up samplelayers.Layer122 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
Tear down samplelayers.Layer122 in N.NNN seconds.
Tear down samplelayers.Layer12 in N.NNN seconds.
Tear down samplelayers.Layer1 in N.NNN seconds.
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
False

It is possible to force the layers to run in subprocesses and parallelize them.

>>> sys.argv = [testrunner_script, '-j2']
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
Set up samplelayers.Layer1 in N.NNN seconds.
Ran 9 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer11 tests:
Running in a subprocess.
Set up samplelayers.Layer1 in N.NNN seconds.
Set up samplelayers.Layer11 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer111 tests:
Running in a subprocess.
Set up samplelayers.Layerx in N.NNN seconds.
Set up samplelayers.Layer1 in N.NNN seconds.
Set up samplelayers.Layer11 in N.NNN seconds.
Set up samplelayers.Layer111 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer112 tests:
Running in a subprocess.
Set up samplelayers.Layerx in N.NNN seconds.
Set up samplelayers.Layer1 in N.NNN seconds.
Set up samplelayers.Layer11 in N.NNN seconds.
Set up samplelayers.Layer112 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer12 tests:
Running in a subprocess.
Set up samplelayers.Layer1 in N.NNN seconds.
Set up samplelayers.Layer12 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer121 tests:
Running in a subprocess.
Set up samplelayers.Layer1 in N.NNN seconds.
Set up samplelayers.Layer12 in N.NNN seconds.
Set up samplelayers.Layer121 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running samplelayers.Layer122 tests:
Running in a subprocess.
Set up samplelayers.Layer1 in N.NNN seconds.
Set up samplelayers.Layer12 in N.NNN seconds.
Set up samplelayers.Layer122 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
Running in a subprocess.
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
Total: 405 tests, 0 failures, 0 errors in N.NNN seconds.
False

Passing arguments explicitly
============================

In most of the examples here, we set up `sys.argv`. In normal usage,
the testrunner just uses `sys.argv`. It is possible to pass arguments
explicitly.

>>> import os.path
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]
>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults, 'test --layer 111'.split())
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in N.NNN seconds.
Set up samplelayers.Layer1 in N.NNN seconds.
Set up samplelayers.Layer11 in N.NNN seconds.
Set up samplelayers.Layer111 in N.NNN seconds.
Ran 34 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down samplelayers.Layer111 in N.NNN seconds.
Tear down samplelayers.Layerx in N.NNN seconds.
Tear down samplelayers.Layer11 in N.NNN seconds.
Tear down samplelayers.Layer1 in N.NNN seconds.
False

If an option already has a default value, passing a different default
overrides it.

For example, --list-tests defaults to being turned off, but if we pass in a
different default, that one takes effect.

>>> defaults = [
... '--list-tests',
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]
>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults, 'test --layer 111'.split())
Listing samplelayers.Layer111 tests:
test_x1 (sample1.sampletests.test111.TestA)
test_y0 (sample1.sampletests.test111.TestA)
test_z0 (sample1.sampletests.test111.TestA)
test_x0 (sample1.sampletests.test111.TestB)
test_y1 (sample1.sampletests.test111.TestB)
test_z0 (sample1.sampletests.test111.TestB)
test_1 (sample1.sampletests.test111.TestNotMuch)
test_2 (sample1.sampletests.test111.TestNotMuch)
test_3 (sample1.sampletests.test111.TestNotMuch)
test_x0 (sample1.sampletests.test111)
test_y0 (sample1.sampletests.test111)
test_z1 (sample1.sampletests.test111)

/home/benji/workspace/zope.testing/1/src/zope/testing/testrunner/testrunner-
ex/sample1/sampletests/../../sampletestsl.txt
test_x1 (sampletests.test111.TestA)
test_y0 (sampletests.test111.TestA)
test_z0 (sampletests.test111.TestA)
test_x0 (sampletests.test111.TestB)
test_y1 (sampletests.test111.TestB)
test_z0 (sampletests.test111.TestB)
test_1 (sampletests.test111.TestNotMuch)
test_2 (sampletests.test111.TestNotMuch)
test_3 (sampletests.test111.TestNotMuch)
test_x0 (sampletests.test111)
test_y0 (sampletests.test111)
test_z1 (sampletests.test111)

/home/benji/workspace/zope.testing/1/src/zope/testing/testrunner/testrunner-
ex/sampletests/../sampletestsl.txt
False

Verbose Output
==============

Normally, we just get a summary. We can use the -v option to get
progressively more detail.

If we use a single --verbose (-v) option, we get a dot printed for each
test:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]
>>> sys.argv = 'test --layer 122 -v'.split()
>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
..................................
Ran 34 tests with 0 failures and 0 errors in 0.007 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

If there are more than 50 tests, the dots are printed in groups of
50:

>>> sys.argv = 'test -uv'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:

................................................................................
................................................................................
................................
Ran 192 tests with 0 failures and 0 errors in 0.035 seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False
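
The wrapping above is simple to reproduce. The following is only an
illustration of the output format (with the group size as a parameter),
not the runner's own code:

```python
def dot_lines(count, per_line=50):
    """Return one string of dots per output line, wrapped at per_line."""
    return ['.' * min(per_line, count - start)
            for start in range(0, count, per_line)]

# Print a dot for each of 192 tests, wrapped into groups.
for line in dot_lines(192):
    print(line)
```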

If the --verbose (-v) option is used twice, then the name and location of
each test is printed as it is run:

>>> sys.argv = 'test --layer 122 -vv'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
test_x1 (sample1.sampletests.test122.TestA)
test_y0 (sample1.sampletests.test122.TestA)
test_z0 (sample1.sampletests.test122.TestA)
test_x0 (sample1.sampletests.test122.TestB)
test_y1 (sample1.sampletests.test122.TestB)
test_z0 (sample1.sampletests.test122.TestB)
test_1 (sample1.sampletests.test122.TestNotMuch)
test_2 (sample1.sampletests.test122.TestNotMuch)
test_3 (sample1.sampletests.test122.TestNotMuch)
test_x0 (sample1.sampletests.test122)
test_y0 (sample1.sampletests.test122)
test_z1 (sample1.sampletests.test122)
testrunner-ex/sample1/sampletests/../../sampletestsl.txt
test_x1 (sampletests.test122.TestA)
test_y0 (sampletests.test122.TestA)
test_z0 (sampletests.test122.TestA)
test_x0 (sampletests.test122.TestB)
test_y1 (sampletests.test122.TestB)
test_z0 (sampletests.test122.TestB)
test_1 (sampletests.test122.TestNotMuch)
test_2 (sampletests.test122.TestNotMuch)
test_3 (sampletests.test122.TestNotMuch)
test_x0 (sampletests.test122)
test_y0 (sampletests.test122)
test_z1 (sampletests.test122)
testrunner-ex/sampletests/../sampletestsl.txt
Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

If the --verbose (-v) option is used three times, then individual
test-execution times are printed:

>>> sys.argv = 'test --layer 122 -vvv'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
test_x1 (sample1.sampletests.test122.TestA) (0.000 s)
test_y0 (sample1.sampletests.test122.TestA) (0.000 s)
test_z0 (sample1.sampletests.test122.TestA) (0.000 s)
test_x0 (sample1.sampletests.test122.TestB) (0.000 s)
test_y1 (sample1.sampletests.test122.TestB) (0.000 s)
test_z0 (sample1.sampletests.test122.TestB) (0.000 s)
test_1 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
test_2 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
test_3 (sample1.sampletests.test122.TestNotMuch) (0.000 s)
test_x0 (sample1.sampletests.test122) (0.001 s)
test_y0 (sample1.sampletests.test122) (0.001 s)
test_z1 (sample1.sampletests.test122) (0.001 s)
testrunner-ex/sample1/sampletests/../../sampletestsl.txt (0.001 s)
test_x1 (sampletests.test122.TestA) (0.000 s)
test_y0 (sampletests.test122.TestA) (0.000 s)
test_z0 (sampletests.test122.TestA) (0.000 s)
test_x0 (sampletests.test122.TestB) (0.000 s)
test_y1 (sampletests.test122.TestB) (0.000 s)
test_z0 (sampletests.test122.TestB) (0.000 s)
test_1 (sampletests.test122.TestNotMuch) (0.000 s)
test_2 (sampletests.test122.TestNotMuch) (0.000 s)
test_3 (sampletests.test122.TestNotMuch) (0.000 s)
test_x0 (sampletests.test122) (0.001 s)
test_y0 (sampletests.test122) (0.001 s)
test_z1 (sampletests.test122) (0.001 s)
testrunner-ex/sampletests/../sampletestsl.txt (0.001 s)
Ran 34 tests with 0 failures and 0 errors in 0.009 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

Quiet output
------------

The --quiet (-q) option cancels all verbose options. It's useful when
the default verbosity is non-zero:

>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... '-v'
... ]
>>> sys.argv = 'test -q -u'.split()
>>> testrunner.run_internal(defaults)
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 192 tests with 0 failures and 0 errors in 0.034 seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False

Test Selection
==============

We've already seen that we can select tests by layer. There are three
other ways we can select tests. We can select tests by package:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]

>>> sys.argv = 'test --layer 122 -ssample1 -vv'.split()
>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
test_x1 (sample1.sampletests.test122.TestA)
test_y0 (sample1.sampletests.test122.TestA)
test_z0 (sample1.sampletests.test122.TestA)
test_x0 (sample1.sampletests.test122.TestB)
test_y1 (sample1.sampletests.test122.TestB)
test_z0 (sample1.sampletests.test122.TestB)
test_1 (sample1.sampletests.test122.TestNotMuch)
test_2 (sample1.sampletests.test122.TestNotMuch)
test_3 (sample1.sampletests.test122.TestNotMuch)
test_x0 (sample1.sampletests.test122)
test_y0 (sample1.sampletests.test122)
test_z1 (sample1.sampletests.test122)
testrunner-ex/sample1/sampletests/../../sampletestsl.txt
Ran 17 tests with 0 failures and 0 errors in 0.005 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

You can specify multiple packages:

>>> sys.argv = 'test -u -vv -ssample1 -ssample2'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_x1 (sample1.sampletestsf.TestA)
test_y0 (sample1.sampletestsf.TestA)
test_z0 (sample1.sampletestsf.TestA)
test_x0 (sample1.sampletestsf.TestB)
test_y1 (sample1.sampletestsf.TestB)
test_z0 (sample1.sampletestsf.TestB)
test_1 (sample1.sampletestsf.TestNotMuch)
test_2 (sample1.sampletestsf.TestNotMuch)
test_3 (sample1.sampletestsf.TestNotMuch)
test_x0 (sample1.sampletestsf)
test_y0 (sample1.sampletestsf)
test_z1 (sample1.sampletestsf)
testrunner-ex/sample1/../sampletests.txt
test_x1 (sample1.sample11.sampletests.TestA)
test_y0 (sample1.sample11.sampletests.TestA)
test_z0 (sample1.sample11.sampletests.TestA)
test_x0 (sample1.sample11.sampletests.TestB)
test_y1 (sample1.sample11.sampletests.TestB)
test_z0 (sample1.sample11.sampletests.TestB)
test_1 (sample1.sample11.sampletests.TestNotMuch)
test_2 (sample1.sample11.sampletests.TestNotMuch)
test_3 (sample1.sample11.sampletests.TestNotMuch)
test_x0 (sample1.sample11.sampletests)
test_y0 (sample1.sample11.sampletests)
test_z1 (sample1.sample11.sampletests)
testrunner-ex/sample1/sample11/../../sampletests.txt
test_x1 (sample1.sample13.sampletests.TestA)
test_y0 (sample1.sample13.sampletests.TestA)
test_z0 (sample1.sample13.sampletests.TestA)
test_x0 (sample1.sample13.sampletests.TestB)
test_y1 (sample1.sample13.sampletests.TestB)
test_z0 (sample1.sample13.sampletests.TestB)
test_1 (sample1.sample13.sampletests.TestNotMuch)
test_2 (sample1.sample13.sampletests.TestNotMuch)
test_3 (sample1.sample13.sampletests.TestNotMuch)
test_x0 (sample1.sample13.sampletests)
test_y0 (sample1.sample13.sampletests)
test_z1 (sample1.sample13.sampletests)
testrunner-ex/sample1/sample13/../../sampletests.txt
test_x1 (sample1.sampletests.test1.TestA)
test_y0 (sample1.sampletests.test1.TestA)
test_z0 (sample1.sampletests.test1.TestA)
test_x0 (sample1.sampletests.test1.TestB)
test_y1 (sample1.sampletests.test1.TestB)
test_z0 (sample1.sampletests.test1.TestB)
test_1 (sample1.sampletests.test1.TestNotMuch)
test_2 (sample1.sampletests.test1.TestNotMuch)
test_3 (sample1.sampletests.test1.TestNotMuch)
test_x0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test1)
test_z1 (sample1.sampletests.test1)
testrunner-ex/sample1/sampletests/../../sampletests.txt
test_x1 (sample1.sampletests.test_one.TestA)
test_y0 (sample1.sampletests.test_one.TestA)
test_z0 (sample1.sampletests.test_one.TestA)
test_x0 (sample1.sampletests.test_one.TestB)
test_y1 (sample1.sampletests.test_one.TestB)
test_z0 (sample1.sampletests.test_one.TestB)
test_1 (sample1.sampletests.test_one.TestNotMuch)
test_2 (sample1.sampletests.test_one.TestNotMuch)
test_3 (sample1.sampletests.test_one.TestNotMuch)
test_x0 (sample1.sampletests.test_one)
test_y0 (sample1.sampletests.test_one)
test_z1 (sample1.sampletests.test_one)
testrunner-ex/sample1/sampletests/../../sampletests.txt
test_x1 (sample2.sample21.sampletests.TestA)
test_y0 (sample2.sample21.sampletests.TestA)
test_z0 (sample2.sample21.sampletests.TestA)
test_x0 (sample2.sample21.sampletests.TestB)
test_y1 (sample2.sample21.sampletests.TestB)
test_z0 (sample2.sample21.sampletests.TestB)
test_1 (sample2.sample21.sampletests.TestNotMuch)
test_2 (sample2.sample21.sampletests.TestNotMuch)
test_3 (sample2.sample21.sampletests.TestNotMuch)
test_x0 (sample2.sample21.sampletests)
test_y0 (sample2.sample21.sampletests)
test_z1 (sample2.sample21.sampletests)
testrunner-ex/sample2/sample21/../../sampletests.txt
test_x1 (sample2.sampletests.test_1.TestA)
test_y0 (sample2.sampletests.test_1.TestA)
test_z0 (sample2.sampletests.test_1.TestA)
test_x0 (sample2.sampletests.test_1.TestB)
test_y1 (sample2.sampletests.test_1.TestB)
test_z0 (sample2.sampletests.test_1.TestB)
test_1 (sample2.sampletests.test_1.TestNotMuch)
test_2 (sample2.sampletests.test_1.TestNotMuch)
test_3 (sample2.sampletests.test_1.TestNotMuch)
test_x0 (sample2.sampletests.test_1)
test_y0 (sample2.sampletests.test_1)
test_z1 (sample2.sampletests.test_1)
testrunner-ex/sample2/sampletests/../../sampletests.txt
test_x1 (sample2.sampletests.testone.TestA)
test_y0 (sample2.sampletests.testone.TestA)
test_z0 (sample2.sampletests.testone.TestA)
test_x0 (sample2.sampletests.testone.TestB)
test_y1 (sample2.sampletests.testone.TestB)
test_z0 (sample2.sampletests.testone.TestB)
test_1 (sample2.sampletests.testone.TestNotMuch)
test_2 (sample2.sampletests.testone.TestNotMuch)
test_3 (sample2.sampletests.testone.TestNotMuch)
test_x0 (sample2.sampletests.testone)
test_y0 (sample2.sampletests.testone)
test_z1 (sample2.sampletests.testone)
testrunner-ex/sample2/sampletests/../../sampletests.txt
Ran 128 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False

You can specify directory names instead of packages (useful for
tab-completion):

>>> subdir = os.path.join(directory_with_tests, 'sample1')
>>> sys.argv = ['test', '--layer', '122', '-s', subdir, '-vv']
>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
test_x1 (sample1.sampletests.test122.TestA)
test_y0 (sample1.sampletests.test122.TestA)
test_z0 (sample1.sampletests.test122.TestA)
test_x0 (sample1.sampletests.test122.TestB)
test_y1 (sample1.sampletests.test122.TestB)
test_z0 (sample1.sampletests.test122.TestB)
test_1 (sample1.sampletests.test122.TestNotMuch)
test_2 (sample1.sampletests.test122.TestNotMuch)
test_3 (sample1.sampletests.test122.TestNotMuch)
test_x0 (sample1.sampletests.test122)
test_y0 (sample1.sampletests.test122)
test_z1 (sample1.sampletests.test122)
testrunner-ex/sample1/sampletests/../../sampletestsl.txt
Ran 17 tests with 0 failures and 0 errors in 0.005 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

We can select by test module name using the --module (-m) option:

>>> sys.argv = 'test -u -vv -ssample1 -m_one -mtest1'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_x1 (sample1.sampletests.test1.TestA)
test_y0 (sample1.sampletests.test1.TestA)
test_z0 (sample1.sampletests.test1.TestA)
test_x0 (sample1.sampletests.test1.TestB)
test_y1 (sample1.sampletests.test1.TestB)
test_z0 (sample1.sampletests.test1.TestB)
test_1 (sample1.sampletests.test1.TestNotMuch)
test_2 (sample1.sampletests.test1.TestNotMuch)
test_3 (sample1.sampletests.test1.TestNotMuch)
test_x0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test1)
test_z1 (sample1.sampletests.test1)
testrunner-ex/sample1/sampletests/../../sampletests.txt
test_x1 (sample1.sampletests.test_one.TestA)
test_y0 (sample1.sampletests.test_one.TestA)
test_z0 (sample1.sampletests.test_one.TestA)
test_x0 (sample1.sampletests.test_one.TestB)
test_y1 (sample1.sampletests.test_one.TestB)
test_z0 (sample1.sampletests.test_one.TestB)
test_1 (sample1.sampletests.test_one.TestNotMuch)
test_2 (sample1.sampletests.test_one.TestNotMuch)
test_3 (sample1.sampletests.test_one.TestNotMuch)
test_x0 (sample1.sampletests.test_one)
test_y0 (sample1.sampletests.test_one)
test_z1 (sample1.sampletests.test_one)
testrunner-ex/sample1/sampletests/../../sampletests.txt
Ran 32 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


and by test within the module using the --test (-t) option:

>>> sys.argv = 'test -u -vv -ssample1 -m_one -mtest1 -tx0 -ty0'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_y0 (sample1.sampletests.test1.TestA)
test_x0 (sample1.sampletests.test1.TestB)
test_x0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test_one.TestA)
test_x0 (sample1.sampletests.test_one.TestB)
test_x0 (sample1.sampletests.test_one)
test_y0 (sample1.sampletests.test_one)
Ran 8 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


>>> sys.argv = 'test -u -vv -ssample1 -ttxt'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
testrunner-ex/sample1/../sampletests.txt
testrunner-ex/sample1/sample11/../../sampletests.txt
testrunner-ex/sample1/sample13/../../sampletests.txt
testrunner-ex/sample1/sampletests/../../sampletests.txt
testrunner-ex/sample1/sampletests/../../sampletests.txt
Ran 20 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


The --module and --test options take regular expressions. If a
pattern begins with '!', then tests that don't match the rest of the
pattern are selected:

>>> sys.argv = 'test -u -vv -ssample1 -m!sample1[.]sample1'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_x1 (sample1.sampletestsf.TestA)
test_y0 (sample1.sampletestsf.TestA)
test_z0 (sample1.sampletestsf.TestA)
test_x0 (sample1.sampletestsf.TestB)
test_y1 (sample1.sampletestsf.TestB)
test_z0 (sample1.sampletestsf.TestB)
test_1 (sample1.sampletestsf.TestNotMuch)
test_2 (sample1.sampletestsf.TestNotMuch)
test_3 (sample1.sampletestsf.TestNotMuch)
test_x0 (sample1.sampletestsf)
test_y0 (sample1.sampletestsf)
test_z1 (sample1.sampletestsf)
testrunner-ex/sample1/../sampletests.txt
test_x1 (sample1.sampletests.test1.TestA)
test_y0 (sample1.sampletests.test1.TestA)
test_z0 (sample1.sampletests.test1.TestA)
test_x0 (sample1.sampletests.test1.TestB)
test_y1 (sample1.sampletests.test1.TestB)
test_z0 (sample1.sampletests.test1.TestB)
test_1 (sample1.sampletests.test1.TestNotMuch)
test_2 (sample1.sampletests.test1.TestNotMuch)
test_3 (sample1.sampletests.test1.TestNotMuch)
test_x0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test1)
test_z1 (sample1.sampletests.test1)
testrunner-ex/sample1/sampletests/../../sampletests.txt
test_x1 (sample1.sampletests.test_one.TestA)
test_y0 (sample1.sampletests.test_one.TestA)
test_z0 (sample1.sampletests.test_one.TestA)
test_x0 (sample1.sampletests.test_one.TestB)
test_y1 (sample1.sampletests.test_one.TestB)
test_z0 (sample1.sampletests.test_one.TestB)
test_1 (sample1.sampletests.test_one.TestNotMuch)
test_2 (sample1.sampletests.test_one.TestNotMuch)
test_3 (sample1.sampletests.test_one.TestNotMuch)
test_x0 (sample1.sampletests.test_one)
test_y0 (sample1.sampletests.test_one)
test_z1 (sample1.sampletests.test_one)
testrunner-ex/sample1/sampletests/../../sampletests.txt
Ran 48 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False
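
The selection semantics demonstrated above can be sketched in a few
lines. This is an illustrative re-implementation, not the runner's own
code: each pattern is applied as an unanchored regular-expression
search, and a leading '!' inverts the match.

```python
import re

def matches(patterns, name):
    """True if ``name`` is selected by the given filter patterns.

    Each pattern is searched (not anchored) against ``name``; a
    pattern with a leading '!' selects names that do *not* match
    the remainder. No patterns at all selects everything.
    """
    if not patterns:
        return True
    for pattern in patterns:
        if pattern.startswith('!'):
            if not re.search(pattern[1:], name):
                return True
        elif re.search(pattern, name):
            return True
    return False

modules = ['sample1.sampletestsf',
           'sample1.sample11.sampletests',
           'sample1.sampletests.test1']
# The negated pattern drops sample1.sample11 but keeps the rest.
print([m for m in modules if matches(['!sample1[.]sample1'], m)])
```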


Module and test filters can also be given as positional arguments:


>>> sys.argv = 'test -u -vv -ssample1 !sample1[.]sample1'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_x1 (sample1.sampletestsf.TestA)
test_y0 (sample1.sampletestsf.TestA)
test_z0 (sample1.sampletestsf.TestA)
test_x0 (sample1.sampletestsf.TestB)
test_y1 (sample1.sampletestsf.TestB)
test_z0 (sample1.sampletestsf.TestB)
test_1 (sample1.sampletestsf.TestNotMuch)
test_2 (sample1.sampletestsf.TestNotMuch)
test_3 (sample1.sampletestsf.TestNotMuch)
test_x0 (sample1.sampletestsf)
test_y0 (sample1.sampletestsf)
test_z1 (sample1.sampletestsf)
testrunner-ex/sample1/../sampletests.txt
test_x1 (sample1.sampletests.test1.TestA)
test_y0 (sample1.sampletests.test1.TestA)
test_z0 (sample1.sampletests.test1.TestA)
test_x0 (sample1.sampletests.test1.TestB)
test_y1 (sample1.sampletests.test1.TestB)
test_z0 (sample1.sampletests.test1.TestB)
test_1 (sample1.sampletests.test1.TestNotMuch)
test_2 (sample1.sampletests.test1.TestNotMuch)
test_3 (sample1.sampletests.test1.TestNotMuch)
test_x0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test1)
test_z1 (sample1.sampletests.test1)
testrunner-ex/sample1/sampletests/../../sampletests.txt
test_x1 (sample1.sampletests.test_one.TestA)
test_y0 (sample1.sampletests.test_one.TestA)
test_z0 (sample1.sampletests.test_one.TestA)
test_x0 (sample1.sampletests.test_one.TestB)
test_y1 (sample1.sampletests.test_one.TestB)
test_z0 (sample1.sampletests.test_one.TestB)
test_1 (sample1.sampletests.test_one.TestNotMuch)
test_2 (sample1.sampletests.test_one.TestNotMuch)
test_3 (sample1.sampletests.test_one.TestNotMuch)
test_x0 (sample1.sampletests.test_one)
test_y0 (sample1.sampletests.test_one)
test_z1 (sample1.sampletests.test_one)
testrunner-ex/sample1/sampletests/../../sampletests.txt
Ran 48 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


>>> sys.argv = 'test -u -vv -ssample1 . txt'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
testrunner-ex/sample1/../sampletests.txt
testrunner-ex/sample1/sample11/../../sampletests.txt
testrunner-ex/sample1/sample13/../../sampletests.txt
testrunner-ex/sample1/sampletests/../../sampletests.txt
testrunner-ex/sample1/sampletests/../../sampletests.txt
Ran 20 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False

Sometimes there are tests that you don't want to run by default.
For example, you might have tests that take a long time. Tests can
have a level attribute. If no level is specified, a level of 1 is
assumed and, by default, only tests at level 1 are run. To run
tests at a higher level, use the --at-level (-a) option to specify a
higher level. For example, with the following options:


>>> sys.argv = 'test -u -vv -t test_y1 -t test_y0'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_y0 (sampletestsf.TestA)
test_y1 (sampletestsf.TestB)
test_y0 (sampletestsf)
test_y0 (sample1.sampletestsf.TestA)
test_y1 (sample1.sampletestsf.TestB)
test_y0 (sample1.sampletestsf)
test_y0 (sample1.sample11.sampletests.TestA)
test_y1 (sample1.sample11.sampletests.TestB)
test_y0 (sample1.sample11.sampletests)
test_y0 (sample1.sample13.sampletests.TestA)
test_y1 (sample1.sample13.sampletests.TestB)
test_y0 (sample1.sample13.sampletests)
test_y0 (sample1.sampletests.test1.TestA)
test_y1 (sample1.sampletests.test1.TestB)
test_y0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test_one.TestA)
test_y1 (sample1.sampletests.test_one.TestB)
test_y0 (sample1.sampletests.test_one)
test_y0 (sample2.sample21.sampletests.TestA)
test_y1 (sample2.sample21.sampletests.TestB)
test_y0 (sample2.sample21.sampletests)
test_y0 (sample2.sampletests.test_1.TestA)
test_y1 (sample2.sampletests.test_1.TestB)
test_y0 (sample2.sampletests.test_1)
test_y0 (sample2.sampletests.testone.TestA)
test_y1 (sample2.sampletests.testone.TestB)
test_y0 (sample2.sampletests.testone)
test_y0 (sample3.sampletests.TestA)
test_y1 (sample3.sampletests.TestB)
test_y0 (sample3.sampletests)
test_y0 (sampletests.test1.TestA)
test_y1 (sampletests.test1.TestB)
test_y0 (sampletests.test1)
test_y0 (sampletests.test_one.TestA)
test_y1 (sampletests.test_one.TestB)
test_y0 (sampletests.test_one)
Ran 36 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


We ran 36 tests. If we specify a level of 2, we get some
additional tests:

>>> sys.argv = 'test -u -vv -a 2 -t test_y1 -t test_y0'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 2
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_y0 (sampletestsf.TestA)
test_y0 (sampletestsf.TestA2)
test_y1 (sampletestsf.TestB)
test_y0 (sampletestsf)
test_y0 (sample1.sampletestsf.TestA)
test_y1 (sample1.sampletestsf.TestB)
test_y0 (sample1.sampletestsf)
test_y0 (sample1.sample11.sampletests.TestA)
test_y1 (sample1.sample11.sampletests.TestB)
test_y1 (sample1.sample11.sampletests.TestB2)
test_y0 (sample1.sample11.sampletests)
test_y0 (sample1.sample13.sampletests.TestA)
test_y1 (sample1.sample13.sampletests.TestB)
test_y0 (sample1.sample13.sampletests)
test_y0 (sample1.sampletests.test1.TestA)
test_y1 (sample1.sampletests.test1.TestB)
test_y0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test_one.TestA)
test_y1 (sample1.sampletests.test_one.TestB)
test_y0 (sample1.sampletests.test_one)
test_y0 (sample2.sample21.sampletests.TestA)
test_y1 (sample2.sample21.sampletests.TestB)
test_y0 (sample2.sample21.sampletests)
test_y0 (sample2.sampletests.test_1.TestA)
test_y1 (sample2.sampletests.test_1.TestB)
test_y0 (sample2.sampletests.test_1)
test_y0 (sample2.sampletests.testone.TestA)
test_y1 (sample2.sampletests.testone.TestB)
test_y0 (sample2.sampletests.testone)
test_y0 (sample3.sampletests.TestA)
test_y1 (sample3.sampletests.TestB)
test_y0 (sample3.sampletests)
test_y0 (sampletests.test1.TestA)
test_y1 (sampletests.test1.TestB)
test_y0 (sampletests.test1)
test_y0 (sampletests.test_one.TestA)
test_y1 (sampletests.test_one.TestB)
test_y0 (sampletests.test_one)
Ran 38 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


We can use the --all option to run tests at all levels:

>>> sys.argv = 'test -u -vv --all -t test_y1 -t test_y0'.split()
>>> testrunner.run_internal(defaults)
Running tests at all levels
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test_y0 (sampletestsf.TestA)
test_y0 (sampletestsf.TestA2)
test_y1 (sampletestsf.TestB)
test_y0 (sampletestsf)
test_y0 (sample1.sampletestsf.TestA)
test_y1 (sample1.sampletestsf.TestB)
test_y0 (sample1.sampletestsf)
test_y0 (sample1.sample11.sampletests.TestA)
test_y0 (sample1.sample11.sampletests.TestA3)
test_y1 (sample1.sample11.sampletests.TestB)
test_y1 (sample1.sample11.sampletests.TestB2)
test_y0 (sample1.sample11.sampletests)
test_y0 (sample1.sample13.sampletests.TestA)
test_y1 (sample1.sample13.sampletests.TestB)
test_y0 (sample1.sample13.sampletests)
test_y0 (sample1.sampletests.test1.TestA)
test_y1 (sample1.sampletests.test1.TestB)
test_y0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test_one.TestA)
test_y1 (sample1.sampletests.test_one.TestB)
test_y0 (sample1.sampletests.test_one)
test_y0 (sample2.sample21.sampletests.TestA)
test_y1 (sample2.sample21.sampletests.TestB)
test_y0 (sample2.sample21.sampletests)
test_y0 (sample2.sampletests.test_1.TestA)
test_y1 (sample2.sampletests.test_1.TestB)
test_y0 (sample2.sampletests.test_1)
test_y0 (sample2.sampletests.testone.TestA)
test_y1 (sample2.sampletests.testone.TestB)
test_y0 (sample2.sampletests.testone)
test_y0 (sample3.sampletests.TestA)
test_y1 (sample3.sampletests.TestB)
test_y0 (sample3.sampletests)
test_y0 (sampletests.test1.TestA)
test_y1 (sampletests.test1.TestB)
test_y0 (sampletests.test1)
test_y0 (sampletests.test_one.TestA)
test_y1 (sampletests.test_one.TestB)
test_y0 (sampletests.test_one)
Ran 39 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False
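
Marking a test as higher-level is just a matter of setting a ``level``
attribute on the test case class. A minimal sketch (the class and test
names here are invented for illustration):

```python
import unittest

class SlowTests(unittest.TestCase):
    # Only collected when the runner's level is >= 2
    # (i.e. with --at-level 2 or --all).
    level = 2

    def test_expensive_computation(self):
        self.assertEqual(sum(range(1000)), 499500)
```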


Listing Selected Tests
----------------------

When you're trying to figure out why the test you want is not matched by the
pattern you specified, it is convenient to see which tests match your
specifications.

>>> sys.argv = 'test --all -m sample1 -t test_y0 --list-tests'.split()
>>> testrunner.run_internal(defaults)
Listing samplelayers.Layer11 tests:
test_y0 (sample1.sampletests.test11.TestA)
test_y0 (sample1.sampletests.test11)
Listing samplelayers.Layer111 tests:
test_y0 (sample1.sampletests.test111.TestA)
test_y0 (sample1.sampletests.test111)
Listing samplelayers.Layer112 tests:
test_y0 (sample1.sampletests.test112.TestA)
test_y0 (sample1.sampletests.test112)
Listing samplelayers.Layer12 tests:
test_y0 (sample1.sampletests.test12.TestA)
test_y0 (sample1.sampletests.test12)
Listing samplelayers.Layer121 tests:
test_y0 (sample1.sampletests.test121.TestA)
test_y0 (sample1.sampletests.test121)
Listing samplelayers.Layer122 tests:
test_y0 (sample1.sampletests.test122.TestA)
test_y0 (sample1.sampletests.test122)
Listing zope.testing.testrunner.layer.UnitTests tests:
test_y0 (sample1.sampletestsf.TestA)
test_y0 (sample1.sampletestsf)
test_y0 (sample1.sample11.sampletests.TestA)
test_y0 (sample1.sample11.sampletests.TestA3)
test_y0 (sample1.sample11.sampletests)
test_y0 (sample1.sample13.sampletests.TestA)
test_y0 (sample1.sample13.sampletests)
test_y0 (sample1.sampletests.test1.TestA)
test_y0 (sample1.sampletests.test1)
test_y0 (sample1.sampletests.test_one.TestA)
test_y0 (sample1.sampletests.test_one)
False

Test Progress
=============

If the --progress (-p) option is used, progress information is
printed, and a carriage return (rather than a newline) is printed
between detail lines. Let's look at the effect of --progress (-p) at
different levels of verbosity.

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]

>>> sys.argv = 'test --layer 122 -p'.split()
>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
1/34 (2.9%)##r##
##r##
2/34 (5.9%)##r##
##r##
3/34 (8.8%)##r##
##r##
4/34 (11.8%)##r##
##r##
5/34 (14.7%)##r##
##r##
6/34 (17.6%)##r##
##r##
7/34 (20.6%)##r##
##r##
8/34 (23.5%)##r##
##r##
9/34 (26.5%)##r##
##r##
10/34 (29.4%)##r##
##r##
11/34 (32.4%)##r##
##r##
12/34 (35.3%)##r##
##r##
17/34 (50.0%)##r##
##r##
18/34 (52.9%)##r##
##r##
19/34 (55.9%)##r##
##r##
20/34 (58.8%)##r##
##r##
21/34 (61.8%)##r##
##r##
22/34 (64.7%)##r##
##r##
23/34 (67.6%)##r##
##r##
24/34 (70.6%)##r##
##r##
25/34 (73.5%)##r##
##r##
26/34 (76.5%)##r##
##r##
27/34 (79.4%)##r##
##r##
28/34 (82.4%)##r##
##r##
29/34 (85.3%)##r##
##r##
34/34 (100.0%)##r##
##r##
Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

(Note that, in the examples above and below, we show "##r##" followed by
new lines where carriage returns would appear in actual output.)
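The overwrite-in-place behavior can be sketched with a small
hypothetical helper (``show_progress`` is not part of the test runner;
it just illustrates the carriage-return trick):

```python
import io
import sys

def show_progress(n, total, stream=sys.stdout):
    # Hypothetical illustration of the runner's -p output style: the
    # trailing '\r' moves the cursor back to the start of the line, so
    # the next update overwrites this one in place.
    stream.write('%d/%d (%.1f%%)\r' % (n, total, 100.0 * n / total))
    stream.flush()

buf = io.StringIO()
show_progress(1, 34, stream=buf)
print(repr(buf.getvalue()))  # prints '1/34 (2.9%)\r'
```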

Using a single level of verbosity causes test descriptions to be
output, but only if they fit in the terminal width. The default
width, when the terminal width can't be determined, is 80:

>>> sys.argv = 'test --layer 122 -pv'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
1/34 (2.9%) test_x1 (sample1.sampletests.test122.TestA)##r##
##r##
2/34 (5.9%) test_y0 (sample1.sampletests.test122.TestA)##r##
##r##
3/34 (8.8%) test_z0 (sample1.sampletests.test122.TestA)##r##
##r##
4/34 (11.8%) test_x0 (sample1.sampletests.test122.TestB)##r##
##r##
5/34 (14.7%) test_y1 (sample1.sampletests.test122.TestB)##r##
##r##
6/34 (17.6%) test_z0 (sample1.sampletests.test122.TestB)##r##
##r##
7/34 (20.6%) test_1 (sample1.sampletests.test122.TestNotMuch)##r##
##r##
8/34 (23.5%) test_2 (sample1.sampletests.test122.TestNotMuch)##r##
##r##
9/34 (26.5%) test_3 (sample1.sampletests.test122.TestNotMuch)##r##
##r##
10/34 (29.4%) test_x0 (sample1.sampletests.test122)##r##
##r##
11/34 (32.4%) test_y0 (sample1.sampletests.test122)##r##
##r##
12/34 (35.3%) test_z1 (sample1.sampletests.test122)##r##
##r##
17/34 (50.0%) ... /testrunner-
ex/sample1/sampletests/../../sampletestsl.txt##r##

##r##
18/34 (52.9%) test_x1 (sampletests.test122.TestA)##r##
##r##
19/34 (55.9%) test_y0 (sampletests.test122.TestA)##r##
##r##
20/34 (58.8%) test_z0 (sampletests.test122.TestA)##r##
##r##
21/34 (61.8%) test_x0 (sampletests.test122.TestB)##r##
##r##
22/34 (64.7%) test_y1 (sampletests.test122.TestB)##r##
##r##
23/34 (67.6%) test_z0 (sampletests.test122.TestB)##r##
##r##
24/34 (70.6%) test_1 (sampletests.test122.TestNotMuch)##r##
##r##
25/34 (73.5%) test_2 (sampletests.test122.TestNotMuch)##r##
##r##
26/34 (76.5%) test_3 (sampletests.test122.TestNotMuch)##r##
##r##
27/34 (79.4%) test_x0 (sampletests.test122)##r##
##r##
28/34 (82.4%) test_y0 (sampletests.test122)##r##
##r##
29/34 (85.3%) test_z1 (sampletests.test122)##r##
##r##
34/34 (100.0%) ... pe/testing/testrunner-
ex/sampletests/../sampletestsl.txt##r##

##r##
Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

The terminal width is determined using the curses module. To see
that, we'll provide a fake curses module:

>>> class FakeCurses:
... def setupterm(self):
... pass
... def tigetnum(self, ignored):
... return 60
>>> old_curses = sys.modules.get('curses')
>>> sys.modules['curses'] = FakeCurses()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
1/34 (2.9%) test_x1 (sample1.sampletests.test122.TestA)##r##
##r##
2/34 (5.9%) test_y0 (sample1.sampletests.test122.TestA)##r##
##r##
3/34 (8.8%) test_z0 (sample1.sampletests.test122.TestA)##r##
##r##
4/34 (11.8%) test_x0 (...le1.sampletests.test122.TestB)##r##
##r##
5/34 (14.7%) test_y1 (...le1.sampletests.test122.TestB)##r##
##r##
6/34 (17.6%) test_z0 (...le1.sampletests.test122.TestB)##r##
##r##
7/34 (20.6%) test_1 (...ampletests.test122.TestNotMuch)##r##
##r##
8/34 (23.5%) test_2 (...ampletests.test122.TestNotMuch)##r##
##r##
9/34 (26.5%) test_3 (...ampletests.test122.TestNotMuch)##r##
##r##
10/34 (29.4%) test_x0 (sample1.sampletests.test122)##r##
##r##
11/34 (32.4%) test_y0 (sample1.sampletests.test122)##r##
##r##
12/34 (35.3%) test_z1 (sample1.sampletests.test122)##r##
##r##
17/34 (50.0%) ... e1/sampletests/../../sampletestsl.txt##r##
##r##
18/34 (52.9%) test_x1 (sampletests.test122.TestA)##r##
##r##
19/34 (55.9%) test_y0 (sampletests.test122.TestA)##r##
##r##
20/34 (58.8%) test_z0 (sampletests.test122.TestA)##r##
##r##
21/34 (61.8%) test_x0 (sampletests.test122.TestB)##r##
##r##
22/34 (64.7%) test_y1 (sampletests.test122.TestB)##r##
##r##
23/34 (67.6%) test_z0 (sampletests.test122.TestB)##r##
##r##
24/34 (70.6%) test_1 (sampletests.test122.TestNotMuch)##r##
##r##
25/34 (73.5%) test_2 (sampletests.test122.TestNotMuch)##r##
##r##
26/34 (76.5%) test_3 (sampletests.test122.TestNotMuch)##r##
##r##
27/34 (79.4%) test_x0 (sampletests.test122)##r##
##r##
28/34 (82.4%) test_y0 (sampletests.test122)##r##
##r##
29/34 (85.3%) test_z1 (sampletests.test122)##r##
##r##
34/34 (100.0%) ... r-ex/sampletests/../sampletestsl.txt##r##
##r##
Ran 34 tests with 0 failures and 0 errors in 0.008 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

>>> sys.modules['curses'] = old_curses

If a second or third level of verbosity is added, we get additional
information.

>>> sys.argv = 'test --layer 122 -pvv -t !txt'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
1/24 (4.2%) test_x1 (sample1.sampletests.test122.TestA)##r##
##r##
2/24 (8.3%) test_y0 (sample1.sampletests.test122.TestA)##r##
##r##
3/24 (12.5%) test_z0 (sample1.sampletests.test122.TestA)##r##
##r##
4/24 (16.7%) test_x0 (sample1.sampletests.test122.TestB)##r##
##r##
5/24 (20.8%) test_y1 (sample1.sampletests.test122.TestB)##r##
##r##
6/24 (25.0%) test_z0 (sample1.sampletests.test122.TestB)##r##
##r##
7/24 (29.2%) test_1 (sample1.sampletests.test122.TestNotMuch)##r##
##r##
8/24 (33.3%) test_2 (sample1.sampletests.test122.TestNotMuch)##r##
##r##
9/24 (37.5%) test_3 (sample1.sampletests.test122.TestNotMuch)##r##
##r##
10/24 (41.7%) test_x0 (sample1.sampletests.test122)##r##
##r##
11/24 (45.8%) test_y0 (sample1.sampletests.test122)##r##
##r##
12/24 (50.0%) test_z1 (sample1.sampletests.test122)##r##
##r##
13/24 (54.2%) test_x1 (sampletests.test122.TestA)##r##
##r##
14/24 (58.3%) test_y0 (sampletests.test122.TestA)##r##
##r##
15/24 (62.5%) test_z0 (sampletests.test122.TestA)##r##
##r##
16/24 (66.7%) test_x0 (sampletests.test122.TestB)##r##
##r##
17/24 (70.8%) test_y1 (sampletests.test122.TestB)##r##
##r##
18/24 (75.0%) test_z0 (sampletests.test122.TestB)##r##
##r##
19/24 (79.2%) test_1 (sampletests.test122.TestNotMuch)##r##
##r##
20/24 (83.3%) test_2 (sampletests.test122.TestNotMuch)##r##
##r##
21/24 (87.5%) test_3 (sampletests.test122.TestNotMuch)##r##
##r##
22/24 (91.7%) test_x0 (sampletests.test122)##r##
##r##
23/24 (95.8%) test_y0 (sampletests.test122)##r##
##r##
24/24 (100.0%) test_z1 (sampletests.test122)##r##
##r##
Ran 24 tests with 0 failures and 0 errors in 0.006 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

Note that, in this example, we used a test-selection pattern starting
with '!' to exclude tests containing the string "txt".
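The effect of the ``!`` prefix can be sketched with a small
hypothetical matcher (the runner's real option handling is more
involved):

```python
import re

def make_matcher(pattern):
    # A leading '!' negates the match, mirroring the -t option
    # behavior described above (hypothetical sketch, not the runner's
    # actual code).
    if pattern.startswith('!'):
        inner = re.compile(pattern[1:])
        return lambda name: inner.search(name) is None
    regex = re.compile(pattern)
    return lambda name: regex.search(name) is not None

matcher = make_matcher('!txt')
print(matcher('sampletestsl.txt'))               # False
print(matcher('test_y0 (sampletests.test122)'))  # True
```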

>>> sys.argv = 'test --layer 122 -pvvv -t!(txt|NotMuch)'.split()
>>> testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer122 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Running:
1/18 (5.6%) test_x1 (sample1.sampletests.test122.TestA) (0.000 s)##r##
##r##
2/18 (11.1%) test_y0 (sample1.sampletests.test122.TestA) (0.000 s)##r##
##r##
3/18 (16.7%) test_z0 (sample1.sampletests.test122.TestA) (0.000 s)##r##
##r##
4/18 (22.2%) test_x0 (sample1.sampletests.test122.TestB) (0.000 s)##r##
##r##
5/18 (27.8%) test_y1 (sample1.sampletests.test122.TestB) (0.000 s)##r##
##r##
6/18 (33.3%) test_z0 (sample1.sampletests.test122.TestB) (0.000 s)##r##
##r##
7/18 (38.9%) test_x0 (sample1.sampletests.test122) (0.001 s)##r##
##r##
8/18 (44.4%) test_y0 (sample1.sampletests.test122) (0.001 s)##r##
##r##
9/18 (50.0%) test_z1 (sample1.sampletests.test122) (0.001 s)##r##
##r##
10/18 (55.6%) test_x1 (sampletests.test122.TestA) (0.000 s)##r##
##r##
11/18 (61.1%) test_y0 (sampletests.test122.TestA) (0.000 s)##r##
##r##
12/18 (66.7%) test_z0 (sampletests.test122.TestA) (0.000 s)##r##
##r##
13/18 (72.2%) test_x0 (sampletests.test122.TestB) (0.000 s)##r##
##r##
14/18 (77.8%) test_y1 (sampletests.test122.TestB) (0.000 s)##r##
##r##
15/18 (83.3%) test_z0 (sampletests.test122.TestB) (0.000 s)##r##
##r##
16/18 (88.9%) test_x0 (sampletests.test122) (0.001 s)##r##
##r##
17/18 (94.4%) test_y0 (sampletests.test122) (0.001 s)##r##
##r##
18/18 (100.0%) test_z1 (sampletests.test122) (0.001 s)##r##
##r##
Ran 18 tests with 0 failures and 0 errors in 0.006 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
False

In this example, we also excluded tests with "NotMuch" in their names.

Unfortunately, the time data above doesn't buy us much because, in
practice, the line is cleared before there is time to see the
times. :/


Autodetecting progress
----------------------

The --auto-progress option determines whether stdout is a terminal,
and enables progress output only if it is.
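The decision can be sketched as follows (a hypothetical helper, not
the runner's actual implementation):

```python
def progress_enabled(stream, auto_progress):
    # Enable progress output only when the stream is attached to a
    # terminal (hypothetical sketch of the --auto-progress decision).
    return bool(auto_progress and stream.isatty())

class FakeTTY(object):
    def isatty(self):
        return True

class FakePipe(object):
    def isatty(self):
        return False

print(progress_enabled(FakeTTY(), True))   # True
print(progress_enabled(FakePipe(), True))  # False
```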

Let's pretend we have a terminal:

>>> class Terminal(object):
... def __init__(self, stream):
... self._stream = stream
... def __getattr__(self, attr):
... return getattr(self._stream, attr)
... def isatty(self):
... return True
>>> real_stdout = sys.stdout
>>> sys.stdout = Terminal(sys.stdout)

>>> sys.argv = 'test -u -t test_one.TestNotMuch --auto-progress'.split()
>>> testrunner.run_internal(defaults)
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
1/6 (16.7%)##r##
##r##
2/6 (33.3%)##r##
##r##
3/6 (50.0%)##r##
##r##
4/6 (66.7%)##r##
##r##
5/6 (83.3%)##r##
##r##
6/6 (100.0%)##r##
##r##
Ran 6 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


Let's stop pretending:

>>> sys.stdout = real_stdout

>>> sys.argv = 'test -u -t test_one.TestNotMuch --auto-progress'.split()
>>> testrunner.run_internal(defaults)
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 6 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


Disabling progress indication
-----------------------------

If -p or --progress has been provided earlier on the command line
(perhaps by a wrapper script) but you do not want progress indication,
you can switch it off with --no-progress:

>>> sys.argv = 'test -u -t test_one.TestNotMuch -p --no-progress'.split()
>>> testrunner.run_internal(defaults)
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Ran 6 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False

Debugging
=========

The testrunner module supports post-mortem debugging and debugging
using `pdb.set_trace`. Let's look first at using `pdb.set_trace`.
To demonstrate this, we'll provide input via helper Input objects:

>>> class Input:
... def __init__(self, src):
... self.lines = src.split('\n')
... def readline(self):
... line = self.lines.pop(0)
... print line
... return line+'\n'

If a test or code called by a test calls pdb.set_trace, then the
runner will enter pdb at that point:

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> from zope.testing import testrunner
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]

>>> real_stdin = sys.stdin
>>> if sys.version_info[:2] == (2, 3):
... sys.stdin = Input('n\np x\nc')
... else:
... sys.stdin = Input('p x\nc')

>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
... ' -t set_trace1').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +ELLIPSIS
Running zope.testing.testrunner.layer.UnitTests tests:
...
> testrunner-ex/sample3/sampletests_d.py(27)test_set_trace1()
-> y = x
(Pdb) p x
1
(Pdb) c
Ran 1 tests with 0 failures and 0 errors in 0.001 seconds.
...
False

Note that, prior to Python 2.4, calling pdb.set_trace caused pdb to
break in the pdb.set_trace function. It was necessary to use 'next'
or 'up' to get to the application code that called pdb.set_trace. In
Python 2.4, pdb.set_trace causes pdb to stop right after the call to
pdb.set_trace.

You can also do post-mortem debugging, using the --post-mortem (-D)
option:

>>> sys.stdin = Input('p x\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
... ' -t post_mortem1 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
Running zope.testing.testrunner.layer.UnitTests tests:
...
Error in test test_post_mortem1 (sample3.sampletests_d.TestSomething)
Traceback (most recent call last):
File "testrunner-ex/sample3/sampletests_d.py",
line 34, in test_post_mortem1
raise ValueError
ValueError
<BLANKLINE>
exceptions.ValueError:
<BLANKLINE>
> testrunner-ex/sample3/sampletests_d.py(34)test_post_mortem1()
-> raise ValueError
(Pdb) p x
1
(Pdb) c
True

Note that the test runner exits after post-mortem debugging.

In the example above, we debugged an error. Failures are actually
converted to errors and can be debugged the same way:

>>> sys.stdin = Input('up\np x\np y\nc')
>>> sys.argv = ('test -ssample3 --tests-pattern ^sampletests_d$'
... ' -t post_mortem_failure1 -D').split()
>>> try: testrunner.run_internal(defaults)
... finally: sys.stdin = real_stdin
... # doctest: +NORMALIZE_WHITESPACE +REPORT_NDIFF
Running zope.testing.testrunner.layer.UnitTests tests:
...
Error in test test_post_mortem_failure1
(sample3.sampletests_d.TestSomething)
Traceback (most recent call last):
File ".../unittest.py", line 252, in debug
getattr(self, self.__testMethodName)()
File "testrunner-ex/sample3/sampletests_d.py",
line 42, in test_post_mortem_failure1
self.assertEqual(x, y)
File ".../unittest.py", line 302, in failUnlessEqual
raise self.failureException, \
AssertionError: 1 != 2
<BLANKLINE>
exceptions.AssertionError:
1 != 2
> .../unittest.py(302)failUnlessEqual()
-> ...
(Pdb) up
> testrunner-ex/sample3/sampletests_d.py(42)test_post_mortem_failure1()
-> self.assertEqual(x, y)
(Pdb) p x
1
(Pdb) p y
2
(Pdb) c
True

Layers that can't be torn down
==============================

A layer can have a tearDown method that raises NotImplementedError.
If this is the case and there are no remaining tests to run, the test
runner will just note that the tear down couldn't be done:
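A minimal layer of this kind might look like this sketch
(``UntearableLayer`` is a hypothetical name, resembling the
``sampletests_ntd`` layers used below):

```python
import unittest

class UntearableLayer(object):
    # A layer whose global state can't be undone: the test runner
    # interprets NotImplementedError raised from tearDown as
    # "tear down not supported".
    @classmethod
    def setUp(cls):
        pass  # acquire expensive global state here

    @classmethod
    def tearDown(cls):
        raise NotImplementedError

class TestSomething(unittest.TestCase):
    layer = UntearableLayer

    def test_something(self):
        self.assertTrue(True)
```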

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> from zope.testing import testrunner
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]

>>> sys.argv = 'test -ssample2 --tests-pattern sampletests_ntd$'.split()
>>> testrunner.run_internal(defaults)
Running sample2.sampletests_ntd.Layer tests:
Set up sample2.sampletests_ntd.Layer in 0.000 seconds.
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Tearing down left over layers:
Tear down sample2.sampletests_ntd.Layer ... not supported
False

If the tearDown method raises NotImplementedError and there are remaining
layers to run, the test runner will restart itself as a new process,
resuming tests where it left off:

>>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$']
>>> testrunner.run_internal(defaults)
Running sample1.sampletests_ntd.Layer tests:
Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Running sample2.sampletests_ntd.Layer tests:
Tear down sample1.sampletests_ntd.Layer ... not supported
Running in a subprocess.
Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tear down sample2.sampletests_ntd.Layer ... not supported
Running sample3.sampletests_ntd.Layer tests:
Running in a subprocess.
Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
raise TypeError("Can we see errors")
TypeError: Can we see errors
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
raise TypeError("I hope so")
TypeError: I hope so
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Failure in test test_fail1 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
self.assertEqual(1, 2)
AssertionError: 1 != 2
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Failure in test test_fail2 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
self.assertEqual(1, 3)
AssertionError: 1 != 3
<BLANKLINE>
Ran 6 tests with 2 failures and 2 errors in N.NNN seconds.
Tear down sample3.sampletests_ntd.Layer ... not supported
Total: 8 tests, 2 failures, 2 errors in N.NNN seconds.
True

In the example above, some of the tests run in a subprocess had
errors and failures. They were displayed as usual, and the failure and
error statistics were updated as usual.

Note that debugging doesn't work when running tests in a subprocess:

>>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntd$',
... '-D', ]
>>> testrunner.run_internal(defaults)
Running sample1.sampletests_ntd.Layer tests:
Set up sample1.sampletests_ntd.Layer in N.NNN seconds.
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Running sample2.sampletests_ntd.Layer tests:
Tear down sample1.sampletests_ntd.Layer ... not supported
Running in a subprocess.
Set up sample2.sampletests_ntd.Layer in N.NNN seconds.
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tear down sample2.sampletests_ntd.Layer ... not supported
Running sample3.sampletests_ntd.Layer tests:
Running in a subprocess.
Set up sample3.sampletests_ntd.Layer in N.NNN seconds.
<BLANKLINE>
<BLANKLINE>
Error in test test_error1 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error1
raise TypeError("Can we see errors")
TypeError: Can we see errors
<BLANKLINE>
<BLANKLINE>
**********************************************************************
Can't post-mortem debug when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test_error2 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_error2
raise TypeError("I hope so")
TypeError: I hope so
<BLANKLINE>
<BLANKLINE>
**********************************************************************
Can't post-mortem debug when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test_fail1 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail1
self.assertEqual(1, 2)
AssertionError: 1 != 2
<BLANKLINE>
<BLANKLINE>
**********************************************************************
Can't post-mortem debug when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
Error in test test_fail2 (sample3.sampletests_ntd.TestSomething)
Traceback (most recent call last):
testrunner-ex/sample3/sampletests_ntd.py", Line NNN, in test_fail2
self.assertEqual(1, 3)
AssertionError: 1 != 3
<BLANKLINE>
<BLANKLINE>
**********************************************************************
Can't post-mortem debug when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
Ran 6 tests with 0 failures and 4 errors in N.NNN seconds.
Tear down sample3.sampletests_ntd.Layer ... not supported
Total: 8 tests, 0 failures, 4 errors in N.NNN seconds.
True

Similarly, pdb.set_trace doesn't work when running tests in a layer
that is run as a subprocess:

>>> sys.argv = [testrunner_script, '--tests-pattern', 'sampletests_ntds']
>>> testrunner.run_internal(defaults)
Running sample1.sampletests_ntds.Layer tests:
Set up sample1.sampletests_ntds.Layer in 0.000 seconds.
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Running sample2.sampletests_ntds.Layer tests:
Tear down sample1.sampletests_ntds.Layer ... not supported
Running in a subprocess.
Set up sample2.sampletests_ntds.Layer in 0.000 seconds.
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(37)test_something()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(40)test_something2()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(43)test_something3()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(46)test_something4()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> testrunner-ex/sample2/sampletests_ntds.py(52)f()->None
-> import pdb; pdb.set_trace()
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> doctest.py(351)set_trace()->None
-> pdb.Pdb.set_trace(self)
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
--Return--
> doctest.py(351)set_trace()->None
-> pdb.Pdb.set_trace(self)
(Pdb) c
<BLANKLINE>
**********************************************************************
Can't use pdb.set_trace when running a layer as a subprocess!
**********************************************************************
<BLANKLINE>
Ran 7 tests with 0 failures and 0 errors in 0.008 seconds.
Tear down sample2.sampletests_ntds.Layer ... not supported
Total: 8 tests, 0 failures, 0 errors in N.NNN seconds.
False

If you want to use pdb from a test in a layer that is run as a
subprocess, then rerun the test runner selecting *just* that layer so
that it's not run as a subprocess.


If a test is run in a subprocess and it generates output on stderr (as
stderrtest does), the output is ignored (but it doesn't cause a SubprocessError
like it once did).

>>> sys.argv = [testrunner_script, '-s', 'sample2', '--tests-pattern',
... '(sampletests_ntd$|stderrtest)']
>>> testrunner.run_internal(defaults)
Running sample2.sampletests_ntd.Layer tests:
Set up sample2.sampletests_ntd.Layer in 0.000 seconds.
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Running sample2.stderrtest.Layer tests:
Tear down sample2.sampletests_ntd.Layer ... not supported
Running in a subprocess.
Set up sample2.stderrtest.Layer in 0.000 seconds.
Ran 1 tests with 0 failures and 0 errors in 0.002 seconds.
Tear down sample2.stderrtest.Layer in 0.000 seconds.
Total: 2 tests, 0 failures, 0 errors in 0.197 seconds.
False

Code Coverage
=============

If the --coverage option is used, test coverage reports will be generated. The
directory name given as the parameter will be used to hold the reports.


>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]

>>> sys.argv = 'test --coverage=coverage_dir'.split()

>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Ran 9 tests with 0 failures and 0 errors in 0.000 seconds.
Running samplelayers.Layer11 tests:
Set up samplelayers.Layer11 in 0.000 seconds.
Ran 34 tests with 0 failures and 0 errors in 0.125 seconds.
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in 0.000 seconds.
Set up samplelayers.Layer111 in 0.000 seconds.
Ran 34 tests with 0 failures and 0 errors in 0.125 seconds.
Running samplelayers.Layer112 tests:
Tear down samplelayers.Layer111 in 0.000 seconds.
Set up samplelayers.Layer112 in 0.000 seconds.
Ran 34 tests with 0 failures and 0 errors in 0.125 seconds.
Running samplelayers.Layer12 tests:
Tear down samplelayers.Layer112 in 0.000 seconds.
Tear down samplelayers.Layerx in 0.000 seconds.
Tear down samplelayers.Layer11 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Ran 34 tests with 0 failures and 0 errors in 0.140 seconds.
Running samplelayers.Layer121 tests:
Set up samplelayers.Layer121 in 0.000 seconds.
Ran 34 tests with 0 failures and 0 errors in 0.125 seconds.
Running samplelayers.Layer122 tests:
Tear down samplelayers.Layer121 in 0.000 seconds.
Set up samplelayers.Layer122 in 0.000 seconds.
Ran 34 tests with 0 failures and 0 errors in 0.125 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
Tear down samplelayers.Layer122 in 0.000 seconds.
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Ran 192 tests with 0 failures and 0 errors in 0.687 seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
lines cov% module (path)
...
48 100% sampletests.test1 (testrunner-ex/sampletests/test1.py)
74 100% sampletests.test11 (testrunner-ex/sampletests/test11.py)
74 100% sampletests.test111 (testrunner-ex/sampletests/test111.py)
76 100% sampletests.test112 (testrunner-ex/sampletests/test112.py)
74 100% sampletests.test12 (testrunner-ex/sampletests/test12.py)
74 100% sampletests.test121 (testrunner-ex/sampletests/test121.py)
74 100% sampletests.test122 (testrunner-ex/sampletests/test122.py)
48 100% sampletests.test_one (testrunner-
ex/sampletests/test_one.py)
112 95% sampletestsf (testrunner-ex/sampletestsf.py)
Total: 405 tests, 0 failures, 0 errors in 0.630 seconds.
False

The directory specified with the --coverage option will have been created and
will hold the coverage reports.

>>> os.path.exists('coverage_dir')
True
>>> os.listdir('coverage_dir')
[...]

(We should clean up after ourselves.)

>>> import shutil
>>> shutil.rmtree('coverage_dir')


Ignoring Tests
--------------

The ``trace`` module supports ignoring directories and modules based
on the test selection. Only directories selected for testing should
report coverage. The test runner provides a custom implementation of
the relevant API.

The ``TestIgnore`` class, the class managing the ignoring, is initialized by
passing the command line options. It uses the options to determine the
directories that should be covered.

>>> class FauxOptions(object):
... package = None
... test_path = [('/myproject/src/blah/foo', ''),
... ('/myproject/src/blah/bar', '')]
>>> from zope.testing.testrunner import coverage
>>> from zope.testing.testrunner.find import test_dirs
>>> ignore = coverage.TestIgnore(test_dirs(FauxOptions(), {}))
>>> ignore._test_dirs
['/myproject/src/blah/foo/', '/myproject/src/blah/bar/']

We can now ask whether a particular module should be ignored:

>>> ignore.names('/myproject/src/blah/foo/baz.py', 'baz')
False
>>> ignore.names('/myproject/src/blah/bar/mine.py', 'mine')
False
>>> ignore.names('/myproject/src/blah/foo/__init__.py', 'foo')
False
>>> ignore.names('/myproject/src/blah/hello.py', 'hello')
True

When running the test runner, modules are sometimes created from text
strings. Those should *always* be ignored:

>>> ignore.names('/myproject/src/blah/hello.txt', '<string>')
True

To make this check fast, the class implements a cache. An early
implementation cached the result by the module name, which was a
problem, since many modules share the same base name (not the Python
dotted name here!). So a module that has the same name as one in an
ignored directory is not necessarily ignored itself:

>>> ignore.names('/myproject/src/blah/module.py', 'module')
True
>>> ignore.names('/myproject/src/blah/foo/module.py', 'module')
False

Profiling
=========

The testrunner supports the hotshot and cProfile profilers. Hotshot
profiler support does not work with Python 2.6.

>>> import os.path, sys
>>> profiler = '--profile=hotshot'
>>> if sys.hexversion >= 0x02060000:
... profiler = '--profile=cProfile'

The testrunner includes the ability to profile test execution via the
--profile option (using hotshot on Python versions before 2.6).

>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> sys.path.append(directory_with_tests)

>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]

>>> sys.argv = [testrunner_script, profiler]

When the tests are run, we get profiling output.

>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer1 tests:
...
Running samplelayers.Layer11 tests:
...
Running zope.testing.testrunner.layer.UnitTests tests:
...
ncalls tottime percall cumtime percall filename:lineno(function)
...
Total: ... tests, 0 failures, 0 errors in ... seconds.
False

Profiling also works across layers.

>>> sys.argv = [testrunner_script, '-ssample2', profiler,
... '--tests-pattern', 'sampletests_ntd']
>>> testrunner.run_internal(defaults)
Running...
Tear down ... not supported...
ncalls tottime percall cumtime percall filename:lineno(function)...

The testrunner creates temporary files containing profiler
data:

>>> import glob
>>> files = list(glob.glob('tests_profile.*.prof'))
>>> files.sort()
>>> files
['tests_profile.cZj2jt.prof', 'tests_profile.yHD-so.prof']

The test runner deletes these files when it is rerun; here we'll
delete them ourselves:

>>> import os
>>> for f in files:
... os.unlink(f)

Running Without Source Code
===========================

The ``--usecompiled`` option allows running tests in a tree without .py
source code, provided compiled .pyc or .pyo files exist (without
``--usecompiled``, .py files are necessary).

We have a very simple directory tree, under ``usecompiled/``, to test
this. Because we're going to delete its .py files, we want to work
in a copy of that:

>>> import os.path, shutil, sys, tempfile
>>> directory_with_tests = tempfile.mkdtemp()

>>> NEWNAME = "unlikely_package_name"
>>> src = os.path.join(this_directory, 'testrunner-ex', 'usecompiled')
>>> os.path.isdir(src)
True
>>> dst = os.path.join(directory_with_tests, NEWNAME)
>>> os.path.isdir(dst)
False

We have to use our own copying code here, to avoid copying read-only
SVN files that couldn't be deleted later.

>>> n = len(src) + 1
>>> for root, dirs, files in os.walk(src):
... dirs[:] = [d for d in dirs if d == "package"] # prune cruft
... os.mkdir(os.path.join(dst, root[n:]))
... for f in files:
... shutil.copy(os.path.join(root, f),
... os.path.join(dst, root[n:], f))

Now run the tests in the copy:

>>> from zope.testing import testrunner

>>> mydefaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^compiletest$',
... '--package', NEWNAME,
... '-vv',
... ]
>>> sys.argv = ['test']
>>> testrunner.run_internal(mydefaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test1 (unlikely_package_name.compiletest.Test)
test2 (unlikely_package_name.compiletest.Test)
test1 (unlikely_package_name.package.compiletest.Test)
test2 (unlikely_package_name.package.compiletest.Test)
Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


If we delete the source files, it's normally a disaster: the test runner
doesn't believe any test files, or even packages, exist. Note that we pass
``--keepbytecode`` this time, because otherwise the test runner would
delete the compiled Python files too:

>>> for root, dirs, files in os.walk(dst):
... for f in files:
... if f.endswith(".py"):
... os.remove(os.path.join(root, f))
>>> testrunner.run_internal(mydefaults, ["test", "--keepbytecode"])
Running tests at level 1
Total: 0 tests, 0 failures, 0 errors in N.NNN seconds.
False

Finally, passing ``--usecompiled`` asks the test runner to treat .pyc
and .pyo files as adequate replacements for .py files. Note that the
output is the same as when running with .py source above. The absence
of "removing stale bytecode ..." messages shows that ``--usecompiled``
also implies ``--keepbytecode``:

>>> testrunner.run_internal(mydefaults, ["test", "--usecompiled"])
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
test1 (unlikely_package_name.compiletest.Test)
test2 (unlikely_package_name.compiletest.Test)
test1 (unlikely_package_name.package.compiletest.Test)
test2 (unlikely_package_name.package.compiletest.Test)
Ran 4 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
False


Remove the temporary directory:

>>> shutil.rmtree(directory_with_tests)

Repeating Tests
===============

The --repeat option can be used to repeat tests some number of times.
Repeating tests is useful to help make sure that tests clean up after
themselves.

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]

>>> sys.argv = 'test --layer 112 --layer UnitTests --repeat 3'.split()
>>> from zope.testing import testrunner
>>> testrunner.run_internal(defaults)
Running samplelayers.Layer112 tests:
Set up samplelayers.Layerx in 0.000 seconds.
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer11 in 0.000 seconds.
Set up samplelayers.Layer112 in 0.000 seconds.
Iteration 1
Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
Iteration 2
Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
Iteration 3
Ran 34 tests with 0 failures and 0 errors in 0.010 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
Tear down samplelayers.Layer112 in N.NNN seconds.
Tear down samplelayers.Layerx in N.NNN seconds.
Tear down samplelayers.Layer11 in N.NNN seconds.
Tear down samplelayers.Layer1 in N.NNN seconds.
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Iteration 1
Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
Iteration 2
Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
Iteration 3
Ran 192 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Total: 226 tests, 0 failures, 0 errors in N.NNN seconds.
False

The tests are repeated by layer. Layers are set up and torn down only
once.

Garbage Collection Control
==========================

When you have problems that seem to be caused by memory-management
errors, it can be helpful to adjust Python's cyclic garbage collector
or to get garbage collection statistics. The --gc option can be used
for this purpose.

If you think you are getting a test failure due to a garbage
collection problem, you can try disabling garbage collection by
using the --gc option with a value of zero.

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = ['--path', directory_with_tests]

>>> from zope.testing import testrunner

>>> sys.argv = 'test --tests-pattern ^gc0$ --gc 0 -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Cyclic garbage collection is disabled.
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
make_sure_gc_is_disabled (gc0)
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.


Alternatively, if you think you are having a garbage collection
related problem, you can cause garbage collection to happen more often
by providing a low threshold:

>>> sys.argv = 'test --tests-pattern ^gc1$ --gc 1 -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Cyclic garbage collection threshold set to: (1,)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
make_sure_gc_threshold_is_one (gc1)
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.

You can specify up to 3 --gc options to set each of the 3 gc threshold
values:


>>> sys.argv = ('test --tests-pattern ^gcset$ --gc 701 --gc 11 --gc 9 -vv'
... .split())
>>> _ = testrunner.run_internal(defaults)
Cyclic garbage collection threshold set to: (701, 11, 9)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
make_sure_gc_threshold_is_701_11_9 (gcset)
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
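
The --gc values end up in Python's standard ``gc`` module. A minimal
sketch (not the runner's actual code) of how a list of collected --gc
values maps onto the collector thresholds:

```python
import gc

# Remember the defaults so we can restore them afterwards.
old = gc.get_threshold()

# As if --gc 701 --gc 11 --gc 9 had been given on the command line.
thresholds = [701, 11, 9]
gc.set_threshold(*thresholds)
print(gc.get_threshold())  # -> (701, 11, 9)

# Restore the default thresholds.
gc.set_threshold(*old)
```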

Specifying more than 3 --gc options is not allowed:


>>> from StringIO import StringIO
>>> out = StringIO()
>>> stdout = sys.stdout
>>> sys.stdout = out

    >>> sys.argv = ('test --tests-pattern ^gcset$ --gc 701 --gc 42 --gc 11 --gc 9 -vv'
    ...     .split())
>>> _ = testrunner.run_internal(defaults)
Traceback (most recent call last):
...
SystemExit: 1

>>> sys.stdout = stdout

>>> print out.getvalue()
Too many --gc options

Garbage Collection Statistics
-----------------------------

You can enable gc debugging statistics using the --gc-options (-G)
option. You should provide names of one or more of the flags
described in the library documentation for the gc module.

The output statistics are written to standard error.

>>> from StringIO import StringIO
>>> err = StringIO()
>>> stderr = sys.stderr
>>> sys.stderr = err
>>> sys.argv = ('test --tests-pattern ^gcstats$ -G DEBUG_STATS'
... ' -G DEBUG_COLLECTABLE -vv'
... .split())
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:
Set up zope.testing.testrunner.layer.UnitTests in N.NNN seconds.
Running:
generate_some_gc_statistics (gcstats)
Ran 1 tests with 0 failures and 0 errors in N.NNN seconds.
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.

>>> sys.stderr = stderr

>>> print err.getvalue() # doctest: +ELLIPSIS
gc: collecting generation ...
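
The flag names given to -G correspond to the ``gc`` module's ``DEBUG_*``
constants. Roughly speaking (a sketch, not the runner's actual code),
the runner combines them into a single debug mask:

```python
import gc

# Turn a list of -G option names into a gc debug mask.
names = ['DEBUG_STATS', 'DEBUG_COLLECTABLE']
flags = 0
for name in names:
    flags |= getattr(gc, name)  # e.g. gc.DEBUG_STATS

gc.set_debug(flags)  # gc now reports statistics on stderr
gc.set_debug(0)      # switch debugging output back off
```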

Debugging Memory Leaks
======================

The --report-refcounts (-r) option can be used with the --repeat (-N)
option to detect and diagnose memory leaks. To use this option, you
must configure Python with the --with-pydebug option. (On Unix, pass
this option to configure and then build Python.)

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... ]

>>> from zope.testing import testrunner

>>> sys.argv = 'test --layer Layer11$ --layer Layer12$ -N4 -r'.split()
>>> _ = testrunner.run(defaults)
Running samplelayers.Layer11 tests:
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer11 in 0.000 seconds.
Iteration 1
Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
Iteration 2
Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
sys refcount=100401 change=0
Iteration 3
Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
sys refcount=100401 change=0
Iteration 4
Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
sys refcount=100401 change=0
Running samplelayers.Layer12 tests:
Tear down samplelayers.Layer11 in 0.000 seconds.
Set up samplelayers.Layer12 in 0.000 seconds.
Iteration 1
Ran 34 tests with 0 failures and 0 errors in 0.013 seconds.
Iteration 2
Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
sys refcount=100411 change=0
Iteration 3
Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
sys refcount=100411 change=0
Iteration 4
Ran 34 tests with 0 failures and 0 errors in 0.012 seconds.
sys refcount=100411 change=0
Tearing down left over layers:
Tear down samplelayers.Layer12 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.
Total: 68 tests, 0 failures, 0 errors in N.NNN seconds.

Each layer is repeated the requested number of times. For each
iteration after the first, the system refcount and the change in the
system refcount are shown. The system refcount is the total of all
refcounts in the system. When the refcount on any object changes, the
system refcount changes by the same amount. Tests that don't leak show
zero change in the system refcount.
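
The measurement itself is only available in a debug build. A sketch of
the idea (the helper name ``refcount_delta`` is ours, not the runner's):

```python
import sys

def refcount_delta(test_func):
    """Run test_func twice; report the change in the system refcount.

    sys.gettotalrefcount only exists in --with-pydebug builds; in a
    normal build we cannot measure, so we return None.
    """
    if not hasattr(sys, 'gettotalrefcount'):
        return None                   # normal (non-debug) build
    test_func()                       # warm-up, like Iteration 1 above
    before = sys.gettotalrefcount()
    test_func()
    return sys.gettotalrefcount() - before
```

A leaking test shows a stable positive delta from iteration to iteration.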

Let's look at an example test that leaks:

>>> sys.argv = 'test --tests-pattern leak -N4 -r'.split()
>>> _ = testrunner.run(defaults)
Running zope.testing.testrunner.layer.UnitTests tests:...
Iteration 1
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Iteration 2
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
sys refcount=92506 change=12
Iteration 3
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
sys refcount=92513 change=12
Iteration 4
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
sys refcount=92520 change=12
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.

Here we see that the system refcount is increasing. If we specify a
verbosity greater than one, we can get details broken out by object
type (or class):

>>> sys.argv = 'test --tests-pattern leak -N5 -r -v'.split()
>>> _ = testrunner.run(defaults)
Running tests at level 1
Running zope.testing.testrunner.layer.UnitTests tests:...
Iteration 1
Running:
.
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Iteration 2
Running:
.
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
sum detail refcount=95832 sys refcount=105668 change=16
Leak details, changes in instances and refcounts by type/class:
type/class insts refs
------------------------------------------------------- ----- ----
classobj 0 1
dict 2 2
float 1 1
int 2 2
leak.ClassicLeakable 1 1
leak.Leakable 1 1
str 0 4
tuple 1 1
type 0 3
------------------------------------------------------- ----- ----
total 8 16
Iteration 3
Running:
.
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
sum detail refcount=95844 sys refcount=105680 change=12
Leak details, changes in instances and refcounts by type/class:
type/class insts refs
------------------------------------------------------- ----- ----
classobj 0 1
dict 2 2
float 1 1
int -1 0
leak.ClassicLeakable 1 1
leak.Leakable 1 1
str 0 4
tuple 1 1
type 0 1
------------------------------------------------------- ----- ----
total 5 12
Iteration 4
Running:
.
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
sum detail refcount=95856 sys refcount=105692 change=12
Leak details, changes in instances and refcounts by type/class:
type/class insts refs
------------------------------------------------------- ----- ----
classobj 0 1
dict 2 2
float 1 1
leak.ClassicLeakable 1 1
leak.Leakable 1 1
str 0 4
tuple 1 1
type 0 1
------------------------------------------------------- ----- ----
total 6 12
Iteration 5
Running:
.
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
sum detail refcount=95868 sys refcount=105704 change=12
Leak details, changes in instances and refcounts by type/class:
type/class insts refs
------------------------------------------------------- ----- ----
classobj 0 1
dict 2 2
float 1 1
leak.ClassicLeakable 1 1
leak.Leakable 1 1
str 0 4
tuple 1 1
type 0 1
------------------------------------------------------- ----- ----
total 6 12
Tearing down left over layers:
Tear down zope.testing.testrunner.layer.UnitTests in N.NNN seconds.

It is instructive to analyze the results in some detail. The test
being run was designed to intentionally leak::

    class ClassicLeakable:
        def __init__(self):
            self.x = 'x'

    class Leakable(object):
        def __init__(self):
            self.x = 'x'

    leaked = []

    class TestSomething(unittest.TestCase):

        def testleak(self):
            leaked.append((ClassicLeakable(), Leakable(), time.time()))

Let's go through this by type.

float, leak.ClassicLeakable, leak.Leakable, and tuple
    We leak one of these every time. This is to be expected because
    we are adding one of these to the list every time.

str
    We don't leak any instances, but we leak 4 references. These are
    due to the instance attributes and values.

dict
    We leak 2 of these, one for each ClassicLeakable and Leakable
    instance.

classobj
    We increase the number of classobj references by one each time
    because each ClassicLeakable instance holds a reference to its
    class. Each new instance increases the references to its class,
    which increases the total number of references to classic classes
    (classobj instances).

type
    For most iterations, we increase the number of type references by
    one for the same reason we increase the number of classobj
    references by one. The increase of the number of type references
    by 3 in the second iteration is puzzling, but illustrates that
    this sort of data is often puzzling.

int
    The change in the number of int instances and references in this
    example is a side effect of the statistics being gathered: lots
    of integers are created to keep the memory statistics used here.

The summary statistics include the sum of the detail refcounts. (Note
that this sum is less than the system refcount. This is because the
detailed analysis doesn't inspect every object; not all objects in the
system are returned by sys.getobjects.)

Knitting in extra package directories
=====================================

Python packages have __path__ variables that can be manipulated to add
extra directories containing software used in the packages. The
testrunner needs to be given extra information about this sort of
situation.
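
As a reminder of the mechanism itself, here is a self-contained sketch
(all names below are made up for illustration) of a package knitting an
extra directory into its __path__:

```python
import os
import sys
import tempfile

# Build a throwaway package whose __init__ knits in an extra directory.
base = tempfile.mkdtemp()
pkg_dir = os.path.join(base, 'sample_pkg')
extra_dir = os.path.join(base, 'extra-products')
os.mkdir(pkg_dir)
os.mkdir(extra_dir)

# The package's __init__ extends its own __path__ at import time.
with open(os.path.join(pkg_dir, '__init__.py'), 'w') as f:
    f.write("__path__.append(%r)\n" % extra_dir)

# A module that lives only in the knitted-in directory.
with open(os.path.join(extra_dir, 'knitted.py'), 'w') as f:
    f.write("MARKER = 'found in extra directory'\n")

sys.path.insert(0, base)
from sample_pkg import knitted  # resolved via the extended __path__
print(knitted.MARKER)           # -> found in extra directory
```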

Let's look at an example. The testrunner-ex-knit-lib directory
is a directory that we want to add to the Python path, but that we
don't want to search for tests. It has a sample4 package and a
products subpackage. The products subpackage adds the
testrunner-ex-knit-products to its __path__. We want to run tests
from the testrunner-ex-knit-products directory. When we import these
tests, we need to import them from the sample4.products package. We
can't use the --path option to name testrunner-ex-knit-products.
It isn't enough to add the containing directory to the test path
because then we wouldn't be able to determine the package name
properly. We might be able to use the --package option to run the
tests from the sample4/products package, but we want to run tests in
testrunner-ex that aren't in this package.

We can use the --package-path option in this case. The --package-path
option is like the --test-path option in that it defines a path to be
searched for tests without affecting the Python path. It differs in
that it supplies a package name that is added as a prefix when
importing any modules found. The --package-path option takes *two*
arguments: a package name and a file path.

>>> import os.path, sys
>>> directory_with_tests = os.path.join(this_directory, 'testrunner-ex')
>>> sys.path.append(os.path.join(this_directory, 'testrunner-ex-pp-lib'))
>>> defaults = [
... '--path', directory_with_tests,
... '--tests-pattern', '^sampletestsf?$',
... '--package-path',
... os.path.join(this_directory, 'testrunner-ex-pp-products'),
... 'sample4.products',
... ]

>>> from zope.testing import testrunner

>>> sys.argv = 'test --layer Layer111 -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in 0.000 seconds.
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer11 in 0.000 seconds.
Set up samplelayers.Layer111 in 0.000 seconds.
Running:
test_x1 (sample1.sampletests.test111.TestA)
test_y0 (sample1.sampletests.test111.TestA)
...
test_y0 (sampletests.test111)
test_z1 (sampletests.test111)
testrunner-ex/sampletests/../sampletestsl.txt
test_extra_test_in_products (sample4.products.sampletests.Test)
test_another_test_in_products (sample4.products.more.sampletests.Test)
Ran 36 tests with 0 failures and 0 errors in 0.008 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer111 in 0.000 seconds.
Tear down samplelayers.Layerx in 0.000 seconds.
Tear down samplelayers.Layer11 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.

In the example, the last two tests came from the products directory.
As usual, we can select the knit-in packages or individual packages
within knit-in packages:

>>> sys.argv = 'test --package sample4.products -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in 0.000 seconds.
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer11 in 0.000 seconds.
Set up samplelayers.Layer111 in 0.000 seconds.
Running:
test_extra_test_in_products (sample4.products.sampletests.Test)
test_another_test_in_products (sample4.products.more.sampletests.Test)
Ran 2 tests with 0 failures and 0 errors in 0.000 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer111 in 0.000 seconds.
Tear down samplelayers.Layerx in 0.000 seconds.
Tear down samplelayers.Layer11 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.

>>> sys.argv = 'test --package sample4.products.more -vv'.split()
>>> _ = testrunner.run_internal(defaults)
Running tests at level 1
Running samplelayers.Layer111 tests:
Set up samplelayers.Layerx in 0.000 seconds.
Set up samplelayers.Layer1 in 0.000 seconds.
Set up samplelayers.Layer11 in 0.000 seconds.
Set up samplelayers.Layer111 in 0.000 seconds.
Running:
test_another_test_in_products (sample4.products.more.sampletests.Test)
Ran 1 tests with 0 failures and 0 errors in 0.000 seconds.
Tearing down left over layers:
Tear down samplelayers.Layer111 in 0.000 seconds.
Tear down samplelayers.Layerx in 0.000 seconds.
Tear down samplelayers.Layer11 in 0.000 seconds.
Tear down samplelayers.Layer1 in 0.000 seconds.

Parsing HTML Forms
==================

Sometimes in functional tests, information from a generated form must
be extracted in order to re-submit it as part of a subsequent request.
The `zope.testing.formparser` module can be used for this purpose.

The scanner is implemented using the `FormParser` class. The
constructor arguments are the page data containing the form and
(optionally) the URL from which the page was retrieved:

>>> import zope.testing.formparser

>>> page_text = '''\
... <html><body>
... <form name="form1" action="/cgi-bin/foobar.py" method="POST">
... <input type="hidden" name="f1" value="today" />
... <input type="submit" name="do-it-now" value="Go for it!" />
... <input type="IMAGE" name="not-really" value="Don't."
... src="dont.png" />
... <select name="pick-two" size="3" multiple>
... <option value="one" selected>First</option>
... <option value="two" label="Second">Another</option>
... <optgroup>
... <option value="three">Third</option>
... <option selected="selected">Fourth</option>
... </optgroup>
... </select>
... </form>
...
... Just for fun, a second form, after specifying a base:
... <base href="http://www.example.com/base/" />
... <form action = 'sproing/sprung.html' enctype="multipart/form">
... <textarea name="sometext" rows="5">Some text.</textarea>
... <input type="Image" name="action" value="Do something."
... src="else.png" />
... <input type="text" value="" name="multi" size="2" />
... <input type="text" value="" name="multi" size="3" />
... </form>
... </body></html>
... '''

>>> parser = zope.testing.formparser.FormParser(page_text)
>>> forms = parser.parse()

>>> len(forms)
2
>>> forms.form1 is forms[0]
True
>>> forms.form1 is forms[1]
False

More often, the `parse()` convenience function is all that's needed:

>>> forms = zope.testing.formparser.parse(
... page_text, "http://cgi.example.com/somewhere/form.html")

>>> len(forms)
2
>>> forms.form1 is forms[0]
True
>>> forms.form1 is forms[1]
False

Once we have the form we're interested in, we can check form
attributes and individual field values:

>>> form = forms.form1
>>> form.enctype
'application/x-www-form-urlencoded'
>>> form.method
'post'

>>> keys = form.keys()
>>> keys.sort()
>>> keys
['do-it-now', 'f1', 'not-really', 'pick-two']

>>> not_really = form["not-really"]
>>> not_really.type
'image'
>>> not_really.value
"Don't."
>>> not_really.readonly
False
>>> not_really.disabled
False

Note that relative URLs are converted to absolute URLs based on the
``<base>`` element (if present) or using the base passed in to the
constructor.

>>> form.action
'http://cgi.example.com/cgi-bin/foobar.py'
>>> not_really.src
'http://cgi.example.com/somewhere/dont.png'

>>> forms[1].action
'http://www.example.com/base/sproing/sprung.html'
>>> forms[1]["action"].src
'http://www.example.com/base/else.png'

Fields which are repeated are reported as lists of objects that
represent each instance of the field::

>>> field = forms[1]["multi"]
>>> type(field)
<type 'list'>
>>> [o.value for o in field]
['', '']
>>> [o.size for o in field]
[2, 3]

The ``<textarea>`` element provides some additional attributes:

>>> ta = forms[1]["sometext"]
>>> print ta.rows
5
>>> print ta.cols
None
>>> ta.value
'Some text.'

The ``<select>`` element provides access to the options as well:

>>> select = form["pick-two"]
>>> select.multiple
True
>>> select.size
3
>>> select.type
'select'
>>> select.value
['one', 'Fourth']

>>> options = select.options
>>> len(options)
4
>>> [opt.label for opt in options]
['First', 'Second', 'Third', 'Fourth']
>>> [opt.value for opt in options]
['one', 'two', 'three', 'Fourth']

Stack-based test doctest setUp and tearDown
============================================

Writing doctest setUp and tearDown functions can be a bit tedious,
especially when setUp/tearDown functions are combined.

The zope.testing.setupstack module provides a small framework for
automating test tear down. It provides a generic setUp function that
sets up a stack. Normal test setUp functions call this function to set
up the stack and then use the register function to register tear-down
functions.

To see how this works we'll create a faux test:

>>> class Test:
... def __init__(self):
... self.globs = {}
>>> test = Test()

We'll register some tearDown functions that just print something:

>>> import sys
>>> import zope.testing.setupstack
>>> zope.testing.setupstack.register(
... test, lambda : sys.stdout.write('td 1\n'))
>>> zope.testing.setupstack.register(
... test, lambda : sys.stdout.write('td 2\n'))

Now, when we call the tearDown function:

>>> zope.testing.setupstack.tearDown(test)
td 2
td 1

The registered tearDown functions are run. Note that they are run in
the reverse of the order in which they were registered.


Extra positional arguments can be passed to register:

>>> zope.testing.setupstack.register(
... test, lambda x, y, z: sys.stdout.write('%s %s %s\n' % (x, y, z)),
... 1, 2, z=9)
>>> zope.testing.setupstack.tearDown(test)
1 2 9


Temporary Test Directory
------------------------

Often, tests create files as they demonstrate functionality. They
need to arrange for the removal of these files when the test is
cleaned up.

The setUpDirectory function automates this. We'll get the current
directory first:

>>> import os
>>> here = os.getcwd()

We'll also create a new test:

>>> test = Test()

Now we'll call the setUpDirectory function:

>>> zope.testing.setupstack.setUpDirectory(test)

We don't have to call zope.testing.setupstack.setUp, because
setUpDirectory calls it for us.

Now the current working directory has changed:

>>> here == os.getcwd()
False

We can create files to our heart's content:

>>> open('Data.fs', 'w').write('xxx')
>>> os.path.exists('Data.fs')
True

We'll make the file read-only. This can cause problems on Windows, but
setupstack takes care of that by making files writable before trying
to remove them.

>>> import stat
>>> os.chmod('Data.fs', stat.S_IREAD)

When tearDown is called:

>>> zope.testing.setupstack.tearDown(test)

We'll be back where we started:

>>> here == os.getcwd()
True

and the files we created will be gone (along with the temporary
directory that was created):

>>> os.path.exists('Data.fs')
False
