
thinking-tests


Part of thinking family.

Declarative API over unittest with customizable auto-discovery and test lifecycle.

Requires Python 3.12. Mostly typed.

What started as a fluent, decorator-based API over unittest grew into a facade that uses unittest as the testing backend, while providing a bunch of reports and integrating coverage as well.

It is heavily based on thinking framework pieces, so you'd better get acquainted with thinking-runtime first.

Usage

Declaring tests

Put your tests into a package lying in the repository root. This assumption is important for discovery, but it is also good practice.

For this part you need decorators module:

from thinking_tests.decorators import case, setup, teardown

You declare test cases with a decorator:

@case
def my_case():
    assert 1 + 1 == 2 

You can tweak setup and teardown with context managers:

def my_setup():
    ...

def my_teardown():
    ...

with setup(my_setup), teardown(my_teardown):
    @case
    def my_case():
        ...

Running tests

Use the __name__ == "__main__" idiom and the run_(all|current_(module|package))() functions (from the thinking_tests.running.start module).

  • run_all() will scan the current root package for test cases and run them all
    • if you call that function from pkg.subpkg.module, it will scan every module (at any depth) in the pkg package
  • run_current_package() will do a similar thing, but will run all the tests in the same package (and below) as the one from which you call it
    • e.g. if you have tests in pkg.sub.sub1.mod and pkg.sub.sub2.mod and call it from pkg.sub.run, it will pick up both of these modules, but not cases defined in pkg.another.mod
  • run_current_module() will only run cases defined in the module where it is called

See test_fixture for example usage - the x and y modules use if __name__ == "__main__": run_current_module(), while the run_all module has if __name__ == "__main__": run_all(). That way you can have separate x and y suites, while still being able to run all available tests with python -m test_fixture.run_all.
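Putting the pieces together, a minimal test module might look like the sketch below. The module name, package layout and the bodies of the setup/teardown functions are hypothetical; the imports follow the modules named in this document.

```python
# my_pkg/test_math.py - a hypothetical test module inside a package
# lying in the repository root, as discovery expects.
from thinking_tests.decorators import case, setup, teardown
from thinking_tests.running.start import run_current_module


def prepare():
    ...  # e.g. create temp files or seed data (hypothetical)


def cleanup():
    ...  # undo whatever prepare() did (hypothetical)


@case
def addition_works():
    assert 1 + 1 == 2


# Cases declared inside the with-block get the custom lifecycle.
with setup(prepare), teardown(cleanup):
    @case
    def guarded_case():
        assert sum([1, 2, 3]) == 6


if __name__ == "__main__":
    # Runs only the cases declared in this module;
    # swap in run_all() or run_current_package() for a wider scope.
    run_current_module()
```

Running python -m my_pkg.test_math would then execute just this module's suite.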

Reporting

thinking-tests comes with JUnit XML and HTML reports, as well as Coverage data files, XML reports and HTML reports, out of the box.

By default, all of them are enabled. That way you're set up for CI (which may consume the unittest XML report and the Coverage binary/XML report) as well as for local development (where you probably want to see the report in a nice, web-browser-based UI).

Great kudos to the vjunit and junit_xml authors, from whom I stole the code before tweaking it for a smoother experience.

Configuration

As mentioned, configuration is based on the thinking-runtime bootstrapping mechanism. You can define your own __test__/__tests__/__testing__ config file, in which you interact with the thinking_tests.running.test_config.test_config object.

It has 2 properties:

  • unittest
    • exposes 2 str properties:
      • xml_report_path
      • html_report_path
    • both are resolved against the repository root if they are not None and are not absolute paths
    • if one of them is None, the corresponding report is turned off
    • if the XML report is disabled, the HTML report must be disabled too, or it is an error
    • there are also (xml|html)_report_enabled and plain enabled properties
      • they have getters
      • they also have setters, but passing True is an error - use them only to quickly turn off the appropriate report
      • (...).enabled = False will set both paths to None
  • coverage
    • exposes 3 str properties:
      • binary_report_path - the path of the Coverage SQLite data file
      • xml_report_path
      • html_report_dir - notice that it points to a directory, not a single file
    • they are also resolved against the repo root, just like the unittest ones, and they are interpreted the same way when they are None
    • the binary report must be enabled for the other reports to be enabled, or you'll get an error
    • you'll also find (binary|xml|html)_report_enabled and plain enabled properties that behave similarly to the unittest ones
    • there are also properties passed directly to Coverage
      • they are:
        • branch: Optional[bool]
        • include: Optional[str | Iterable[str]]
        • omit: Optional[str | Iterable[str]]
      • they must be None (the default) if the binary report is disabled
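For illustration, a __test__.py config file exercising the properties above might look like this. This is a sketch based solely on the descriptions in this section - the exact paths and option values are made up.

```python
# __test__.py - hypothetical config file at the repository root,
# picked up by the thinking-runtime bootstrapping mechanism.
from thinking_tests.running.test_config import test_config

# Relative paths are resolved against the repository root.
test_config.unittest.xml_report_path = "reports/junit.xml"
test_config.unittest.html_report_path = "reports/junit.html"

# Disable just the Coverage HTML report; binary and XML stay enabled.
test_config.coverage.html_report_dir = None

# Options forwarded to Coverage; only valid while the binary report
# is enabled (they must stay None otherwise).
test_config.coverage.branch = True
test_config.coverage.omit = ["*/test_*.py"]

# To turn a whole report family off instead:
# test_config.coverage.enabled = False  # sets all three paths to None
```

Remember that the setters of the *_enabled properties only accept False; to relocate a report, assign the path property directly as above.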
