
Lightweight assertions inspired by the great-expectations library


This library is inspired by the Great Expectations library. It makes many of the expectations found in Great Expectations available as assertions on Python's built-in unittest test cases.

Install

pip install great-assertions

Code example Pandas

from great_assertions import GreatAssertions
import pandas as pd

class GreatAssertionTests(GreatAssertions):
    def test_expect_table_row_count_to_equal(self):
        df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
        self.expect_table_row_count_to_equal(df, 3)

Code example PySpark

from great_assertions import GreatAssertions
from pyspark.sql import SparkSession

class GreatAssertionTests(GreatAssertions):

    def setUp(self):
        self.spark = SparkSession.builder.getOrCreate()

    def test_expect_table_row_count_to_equal(self):
        df = self.spark.createDataFrame(
            [
                {"col_1": 100, "col_2": 10},
                {"col_1": 200, "col_2": 20},
                {"col_1": 300, "col_2": 30},
            ]
        )
        self.expect_table_row_count_to_equal(df, 3)

List of available assertions

| Assertion | Pandas | PySpark |
| --- | --- | --- |
| expect_table_row_count_to_equal | ✅ | ✅ |
| expect_table_row_count_to_be_greater_than | ✅ | ✅ |
| expect_table_row_count_to_be_less_than | ✅ | ✅ |
| expect_table_has_no_duplicate_rows | ✅ | ✅ |
| expect_column_value_to_equal | ✅ | ✅ |
| expect_column_values_to_be_between | ✅ | ✅ |
| expect_column_values_to_match_regex | ✅ | ✅ |
| expect_column_values_to_be_in_set | ✅ | ✅ |
| expect_column_values_to_be_of_type | ✅ | ✅ |
| expect_table_columns_to_match_ordered_list | ✅ | ✅ |
| expect_table_columns_to_match_set | ✅ | ✅ |
| expect_date_range_to_be_more_than | ✅ | ✅ |
| expect_date_range_to_be_less_than | ✅ | ✅ |
| expect_date_range_to_be_between | ✅ | ✅ |
| expect_column_mean_to_be_between | ✅ | ✅ |
| expect_column_value_counts_percent_to_be_between | ✅ | ✅ |
| expect_frame_equal | ✅ | ✅ |
| expect_column_has_no_duplicate_rows | ✅ | ✅ |
| expect_column_value_to_equal_if | ✅ | ✅ |
| expect_column_value_to_be_greater_if | ✅ | ✅ |
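As a quick sketch of how a few of these read in a test (the column-level signatures below are assumptions inferred from the assertion names and the row-count example above, not confirmed API; see the Assertion Definitions for the real signatures):

from great_assertions import GreatAssertions
import pandas as pd

class MoreAssertionTests(GreatAssertions):
    def test_col_values(self):
        df = pd.DataFrame({"col_1": [100, 200, 300], "col_2": [10, 20, 30]})
        # Assumed argument order: DataFrame, column name, then expected values.
        # Bounds chosen loosely since inclusivity is not confirmed here.
        self.expect_column_values_to_be_between(df, "col_1", 99, 301)
        self.expect_column_values_to_be_in_set(df, "col_2", {10, 20, 30})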

Assertion Descriptions

For a description of each assertion, see Assertion Definitions.

Running the tests

Executing the tests still requires unittest; the following options have been tested with the examples provided.

Option 1

import unittest
suite = unittest.TestLoader().loadTestsFromTestCase(GreatAssertionTests)
runner = unittest.TextTestRunner(verbosity=2)
runner.run(suite)

Option 2

import unittest

if __name__ == '__main__':
    unittest.main()

Pie Charts and Tables

For a more visual representation of the results when running in Databricks or Jupyter notebooks, the results can be output as tables, a bar chart, or a pie chart.

import unittest
from great_assertions import GreatAssertionResult, GreatAssertions

class DisplayTest(GreatAssertions):
    def test_pass1(self):
        assert True is True

    def test_fail(self):
        assert "Hello" == "World"

suite = unittest.TestLoader().loadTestsFromTestCase(DisplayTest)
test_runner = unittest.runner.TextTestRunner(resultclass=GreatAssertionResult)
result = test_runner.run(suite)

result.to_barh()  # horizontal bar chart; also available: result.to_pie()
result.to_results_table()  # summary results table
result.to_full_results_table()  # full results table

Running with XML-Runner

To run with xml-runner, there is no difference to how it's normally used. However, methods like to_results_table will not be available, as these require a different resultclass.

import xmlrunner
suite = unittest.TestLoader().loadTestsFromTestCase(DisplayTest)
test_runner = xmlrunner.XMLRunner(output="test-results")
test_runner.run(suite)

Production Monitoring

The assertions provided by GA also allow validation of any environment, including production. Currently GA only supports saving the results to Spark, for example in Databricks.

Once the run has completed, the results can be persisted with the save method, as seen below.

import xmlrunner
suite = unittest.TestLoader().loadTestsFromTestCase(DisplayTest)
test_runner = xmlrunner.XMLRunner(output="test-results")
result = test_runner.run(suite)
result.save(format="databricks")

The image below shows a simple graph of the number of tests accumulated over each test run. However, much more complex analysis can be performed with the extended data generated by GA.

(Figure: number of tests vs. test run)

The extended table of results contains the following:

| run_id | timestamp | method | information | test_id | status | extended |
| --- | --- | --- | --- | --- | --- | --- |
| 20211222093029 | 2021-12-22 09:30:29 | test_fail8 | Traceback (most recent call last… | 13 | Fail | {"id": 13, "name": "expect_date_range_to_be_less_than", "values": {"expected_max_date": "2019-05-13", "actual_max_date": "2019-05-13"}} |
| 20211222093029 | 2021-12-22 09:30:29 | test_fail9 | Traceback (most recent call last… | 14 | Fail | {"id": 14, "name": "expect_date_range_to_be_more_than", "values": {"expected_min_date": "2015-10-01", "actual_min_date": "2015-10-01"}} |

From the extended column you can get further details about the type of test that was executed and its results. For example, if we look at expect_table_row_count_to_be_less_than, the assertion is that a maximum row count is not breached.

In the example below, the expected maximum was 100 and the actual count was 205, which caused the test to fail. Analysts can therefore query the extended data to get a picture of the size of the breach.

extended = {
    "id": 2,
    "name": "expect_table_row_count_to_be_less_than",
    "values": {
        "exp_max_count": 100,
        "act_count": 205,
    },
}
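As a sketch of the kind of query an analyst could run over the saved results (the table name great_assertion_results is hypothetical, and the shape of the values payload is taken from the example above):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType, StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

# Hypothetical location of the saved results; substitute wherever save() wrote them
results = spark.table("great_assertion_results")

# Shape of the extended JSON for row-count assertions, per the example above
extended_schema = StructType([
    StructField("id", IntegerType()),
    StructField("name", StringType()),
    StructField("values", StructType([
        StructField("exp_max_count", IntegerType()),
        StructField("act_count", IntegerType()),
    ])),
])

breaches = (
    results.withColumn("ext", F.from_json("extended", extended_schema))
    .where(F.col("ext.name") == "expect_table_row_count_to_be_less_than")
    .withColumn("breach_size", F.col("ext.values.act_count") - F.col("ext.values.exp_max_count"))
    .select("run_id", "timestamp", "breach_size")
)
breaches.show()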

In production monitoring these kinds of results can help prevent skewed outcomes. For example, suppose the expected values were within a range of 0-100 and you received an exceptionally large value.

That large value could skew business functionality such that a defect causes damage, loss of income, or incorrect reporting to a downstream system.

GA therefore allows you to provide benchmarks for production validation, and an experienced analyst can create reports on top of the data.

An example of the extended dataset:

(Figure: extended result table)

Notes

If you get an Arrow warning when running in Databricks, it is because a toPandas() conversion is used for many of the assertions. The plan is to replace the Pandas conversion with pure PySpark code; if this affects you, please raise an issue so the work can be prioritised. For now, it's advisable to make sure the datasets are not too big, which could otherwise cause the driver to crash.

Development

To create a development environment, create a virtualenv and make a development installation:

virtualenv ve
source ve/bin/activate
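The development-installation command itself isn't shown above; assuming the project uses standard Python packaging, a typical editable install from the repository root would be:

pip install -e .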

To run tests, just use pytest

(ve) pytest
