A simple CLI-based test recorder for Python
Automated, comprehensive and clean pytest test cases.
Helps you reach 100% test coverage with real-world test cases. Tests that are generated "just work", i.e. they are clean, unaware of implementation details, and don't require active maintenance.
It works well if your functions are deterministic (e.g. pure).
If not, then you should probably make them so!
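For example, a function whose output depends on hidden state (like the wall clock) can usually be refactored to take that state as a parameter. This is an illustrative sketch, not code from this project:

```python
import datetime


def greeting_impure():
    # Impure: the result depends on the wall clock, so a recorded
    # input/output pair would break the next morning.
    hour = datetime.datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"


def greeting_pure(hour):
    # Pure: the same input always yields the same output, so a
    # recorded input/output pair stays valid forever.
    return "Good morning" if hour < 12 else "Good afternoon"
```

Once `greeting_pure` takes the hour explicitly, a recorded call such as `greeting_pure(9)` is a stable, replayable test case.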
It's DDT for pytest (development-driven testing :nerd_face:)
Too tedious/hard to generate custom data for your application?
- save time by having tests generated for you :tada:
- dramatically increase test code coverage with little effort
- write more maintainable tests by separating code and data
- organise your test code consistently in new projects, or:
- replace your existing disorganised test code :+1:
Using in normal runs
Using while running tests
Then, confirm that your test cases' expected values are correct.
Finally, install it only as a test dependency.
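One way to keep it out of your production dependencies is a separate dev requirements file (an illustrative layout; adjust to however your project manages dependencies):

```
# requirements-dev.txt — test-only dependencies
simple_test_generator
pytest
```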
See botostubs going from 0 to 99% in under 20 seconds!
Releasing on PyPI
Enter password when prompted.
Running it over and over writes test cases to new files to avoid overwriting your previous test cases. The filenames are suffixed with -00, -01, ... for up to 10 files.
If your function's arguments are not serialisable, test cases won't be generated for it. You will see an error in the logs for that function.
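You can check up front whether a value survives serialisation; a minimal sketch, using pickle purely as an illustration (see the pickling note below):

```python
import pickle


def is_serialisable(value):
    # Values that can't be pickled (open files, sockets,
    # lambdas, ...) can't be saved as test data.
    try:
        pickle.dumps(value)
        return True
    except Exception:
        return False
```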
section still rough; personal notes
Running with `-m simple_test_generator` isn't dropping into the debugger. It works when used as a library:
with Recorder(): ...
- It won't be able to support all types of data, e.g. asserting functions that return other functions. See the section below on how to handle this.
- This project uses pickling to load the test data. If you're the one who generated the test data, then it should be fine to load it during tests. Otherwise, don't load untrusted test data.
- Handles functions that return generators by automatically expanding them into Python lists so that they can be asserted.
- Minor issue: functions in your main module may be loaded twice, creating identical test cases twice for that function.
- Support functions that return functions. We need a reliable way to assert equivalence between a function and its deserialised representation.
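The generator handling mentioned above amounts to materialising the generator before comparison, roughly like this (an illustrative sketch, not the project's actual code):

```python
import types


def materialise(value):
    """Expand generators into lists so results can be compared
    by value; other objects pass through unchanged (sketch)."""
    if isinstance(value, types.GeneratorType):
        # Generators compare by identity, not contents, so they
        # must be expanded before they can be asserted.
        return list(value)
    return value


def count_up(n):
    for i in range(n):
        yield i
```

With this, a recorded result for `count_up(3)` can be asserted as the plain list `[0, 1, 2]`.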
These projects resemble this one but mine requires much less effort on your part and generates even less boilerplate :blush:
What if you have test failures?
It may be due to a bug in simple_test_generator, but more likely it's because it's difficult to serialise all data types, e.g. file descriptors.
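You can see the problem directly: pickling an open file object fails, because file objects wrap OS-level descriptors that have no meaning outside the current process.

```python
import pickle
import tempfile

with tempfile.TemporaryFile() as f:
    try:
        pickle.dumps(f)
        picklable = True
    except TypeError:
        # File objects cannot be serialised.
        picklable = False

print(picklable)  # False
```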
Cases that you should handle on your own:
- Assertion does not work properly on your objects
You should define an `__eq__` method in your class. This ensures that simple_test_generator asserts the return values properly.
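For example, with a hypothetical class (adapt to your own):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __eq__(self, other):
        # Without __eq__, two Points with the same coordinates
        # compare unequal (default identity comparison), so a
        # recorded return value could never match.
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)
```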
How to report an issue
- Raise an issue here about the test failure (or upvote an existing one)
- Paste your function code (or its signature)
- Paste the json file that simple_test_generator generated, e.g.
- State what you were expecting
- State what happened instead
Released under the MIT licence. See the file named LICENCE for details.
Download the file for your platform.
| Filename, size | File type | Python version |
| --- | --- | --- |
| simple_test_generator-0.18-py2.py3-none-any.whl (11.2 kB) | Wheel | py2.py3 |
| simple_test_generator-0.18.tar.gz (49.1 kB) | Source | None |