b2btest - Light framework to set up back-to-back test scripts
This package is helpful to prepare and manage back-to-back tests, to get control over software you don't have proper tests for yet.
On one side, it includes back2back, a command line tool that performs back-to-back tests on any software that can be called from the command line and produces output files you want to be warned about when they change. You can define the set of commands and the outputs to watch in a simple YAML file.
On the other side, it includes comparison functions and unittest-like assertions you can use within your own test suites to perform similar workflows, keeping the reference outputs in a folder, named after the test name.
Both the command line tool and the Python library are extensible with plugins that provide informative diffs for several file formats: audio, XML, text, PDF...
Why back-to-back testing
A back-to-back test is a black-box test that just checks that, given an input, you get the same output every time. Unit testing and Test Driven Development are a far preferable strategy to test a piece of software. But often we need to change a piece of software which has been developed without proper testing. A quick way to get control over it is to set up a set of back-to-back tests and then proceed with the refactoring with a higher level of confidence than having no tests at all.
Note of warning: don't feel too confident just because you are backed by back2back tests. It is black-box testing, so it is hard to ensure full coverage. You may be changing a behaviour which is not exercised by the b2b tests without noticing.
Easing the workflow with b2btest
When b2b tests are hard to run and maintain, they tend to get old and useless. This tool automates the most tedious back2back-related tasks, such as setting up tests, verifying results, accepting changes, clearing data...
Features:
- You can put under back2back testing either:
  - the outcomes (files) of any shell command line (with pipes and so on), or
  - any serializable data in a unittest.TestCase method
- It is auto-checked, like most xUnit frameworks
- It automagically manages the expectation data
- On failure, it generates handy data to evaluate the changes, by providing diffs and keeping failed and expected data for reference
- Provides a handy command line to accept failed results as the proper ones
- When a test turns green or is accepted, all failure-related data gets cleared
- Comparators and diff generators can be added for your own file types
- You can set architecture-dependent outputs for the same test
How to install
Just use:
$ pip install b2btest
In order to properly diff audio files, you should consider installing the audio extension:
$ pip install b2btest_audio
Dependencies
For XML files, it requires the lxml module and both the libxml2 and libxslt libraries installed.
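On Debian or Ubuntu, for instance, that might look like the following (the package names are an assumption; adapt them to your distribution):
$ sudo apt-get install libxml2-dev libxslt1-dev   # system libraries; names may vary
$ pip install lxml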
Back2Back testing in your unittest
import sys
import unittest
import datetime
import b2btest # Not used directly, but it loads a new assertion method into TestCase

class MyTest(unittest.TestCase):
    def test_this(self):
        self.assertB2BEqual("data")

    def test_that_will_always_fail(self):
        self.assertB2BEqual(str(datetime.datetime.now()))

if __name__ == '__main__':
    # The acceptMode attribute makes the asserts accept the results
    # as the new expectation
    if '--accept' in sys.argv:
        sys.argv.remove('--accept')
        unittest.TestCase.acceptMode = True
    unittest.main()
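For instance, assuming you save the script above as test_back2back.py (the file name is just for illustration), a typical session could look like:
$ python test_back2back.py           # first run fails: there is no expectation yet
$ python test_back2back.py --accept  # accept the current results as the new expectation
$ python test_back2back.py           # test_this now passes against the stored expectation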
Back2Back testing of cli programs
When you are back-to-back testing the output of a command line program, you define a YAML file like this (name it b2bcases.yaml):
#!/usr/bin/env back2back
datapath: "b2bdata" # Point it to the directory containing your reference data
testcases:
  HelloWorld:
    command: echo Hello World > output.txt
    outputs:
    - output.txt
  AlwaysChanging:
    command: date > output.txt
    outputs:
    - output.txt
  Generate1KHzSine: # Requires the b2btest_audio package
    command: sox -n /tmp/sine.wav synth 1.0 sine 1000.0
    outputs:
    - /tmp/sine.wav
To list the available test cases:
back2back b2bcases.yaml --list
To run them:
back2back b2bcases.yaml
The first time you run a test case it will fail, as there is no expectation yet; you must check that the result is valid and accept it. Successive failures mean that the behaviour has changed. You can accept the new result if the behavioural change is expected.
To accept a concrete case:
back2back b2bcases.yaml --accept HelloWorld
To accept all failing tests:
back2back b2bcases.yaml --acceptall
If some output depends on the computer architecture or on the platform (Windows, Mac...), use the --arch or --platform options when accepting. This generates an independent expectation file for that architecture or platform.
back2back b2bcases.yaml --accept HelloWorld --arch
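Likewise, to keep a separate expectation for the current platform:
back2back b2bcases.yaml --accept HelloWorld --platform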
Old interface
If you want to generate the test cases programmatically, you can still use the old Python interface. Instead of a YAML file, write a Python script like this one:
#!/usr/bin/python
import sys
from b2btest import runBack2BackProgram

data_path = "path/to/b2bdata"

back2BackTests = [
    ("myprogram_with_option",
        "./myprogram --option input1.wav input2.wav output.wav otheroutput.wav",
        [
            "output.wav",
            "otheroutput.wav",
        ]),
    ("myprogram_with_other_option",
        "./myprogram --other-option input1.wav input2.wav output.wav ignoredoutput.wav",
        [
            "output.wav",
        ]),
]

runBack2BackProgram(data_path, sys.argv, back2BackTests)
Save this file as back2back.py, for example, and make it executable. Use the Python script directly with the same command line options, but without the YAML file argument.
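For example, assuming the script accepts the same options shown above for the back2back tool:
./back2back.py --list
./back2back.py
./back2back.py --accept myprogram_with_option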
Extra advice
Use continuous integration
Put your tests under a continuous integration system such as:
- Travis-CI
- BuildBot
- TestFarm
- CDash
You might be too lazy to run the tests, but bots aren't. Connect your bots to your VCS so they run the tests on every commit.
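As a minimal sketch, a Travis-CI configuration for a project that uses the back2back tool could look like this (the Python version and the b2bcases.yaml file name are assumptions):
language: python
python:
  - "3.9"   # assumed interpreter version
install:
  - pip install b2btest
script:
  - back2back b2bcases.yaml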
Keep your expectations up to date
If a b2b test gets red, don't keep it that way for long: either accept the change or roll back your code. B2b tests detect changes, but while you are in the middle of one change you won't notice whether a second one happens. If your expectation data is backed by a version control system, you may dare to accept wrong expectation data until you fix it. But don't forget.
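For instance, assuming the reference data lives in the b2bdata folder (as in the YAML example above) and is tracked with git, accepting an intended change could end like this:
back2back b2bcases.yaml --acceptall
git add b2bdata/
git commit -m "Accept new b2b expectations after the intended behaviour change"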
Change log
1.4.0
- Audio plugin separated from this code base as b2btest_audio
1.3.3
- Simplified dependency on lxml
1.3.2
- diffaudio as console script
- diffxml as console script
- Fix: entry points for xml and audio plugins
- Just markdown README
1.3.1
- Updated README
1.3
- Avoid large diffs by just reporting the generated file containing the failed results
- Fix unicode problems in certain python versions
- Using older lxml versions for python<3.5
1.2
- CLI: Fix: only the first output was actually checked
- Plugin based type sensitive diff
- Specific diff for XML
- XML and Audio diffing now are extras
- 'extensions' key in yaml testcases to associate custom file extensions to a diff plugin
1.1
- Unittest-like usage to back-to-back test Python code instead of command line programs
- New command line tool back2back that takes a YAML file with the test case definitions
1.0
- First github version
- (There were previous unpublished versions)
File details
Details for the file b2btest-1.4.0.tar.gz.
File metadata
- Download URL: b2btest-1.4.0.tar.gz
- Upload date:
- Size: 14.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.7.1 importlib_metadata/4.10.0 pkginfo/1.8.2 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.9.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | 93dbbca5ed223c3a4ca663205208c6b51280ade15e2006679b631b03f63a1f3a
MD5 | 7ca01e45088cbb2daf42d741231e0f08
BLAKE2b-256 | ed81b6d9d600604b0c31857b42884c3927ae1ab0d7526b2a3da77a7603a2d133
File details
Details for the file b2btest-1.4.0-py3-none-any.whl.
File metadata
- Download URL: b2btest-1.4.0-py3-none-any.whl
- Upload date:
- Size: 12.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.7.1 importlib_metadata/4.10.0 pkginfo/1.8.2 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.9.9
File hashes
Algorithm | Hash digest
---|---
SHA256 | e842e0a39986e21869236611929e69287987b5e55988bd2c1c53ab0a4acdd505
MD5 | 1da66cb1095e318b9021c24a2d3160a7
BLAKE2b-256 | 72c05b66f2da48a8d2052b4cc8afa3a8672278740f9ea38650ddfea85e186397