
Wrapper for Great Expectations to fit the requirements of the Gemeente Amsterdam.


About dq-suite-amsterdam

This repository aims to be an easy-to-use wrapper for the data quality library Great Expectations (GX). All that is needed to get started is an in-memory Spark dataframe and a set of data quality rules - specified in a JSON file of particular formatting.

By default, all validation results are written to the Unity Catalog of DMT (dpd1_prd). The Data Team user or service principal (SPN) that runs the job/notebook will be given access to the DMT catalog to write results to the data_quality schema. Based on these results, DQ reports can be viewed in Power BI reports hosted by DMT. Alternatively, one can disallow writing to DMT and instead use a data_quality schema in one's own Unity Catalog, which has to be created once per catalog via this notebook. Additionally, users can choose to get notified via Slack or Microsoft Teams.

DISCLAIMER: The package is in MVP phase, so watch your step.

How to contribute

Want to help out? Great! Feel free to create a pull request addressing one of the open issues. Some notes for developers are located here.

Found a bug, or need a new feature? Add a new issue describing what you need.

Getting started

Following GX, we recommend installing dq-suite-amsterdam in a virtual environment. This could be either locally via your IDE, on your compute via a notebook in Databricks, or as part of a workflow.

  1. Run the following command:
pip install dq-suite-amsterdam
  2. Create the data_quality schema (and the tables all results will be written to) by running the SQL notebook located here. All it needs is the name of the catalog - and the rights to create a schema within that catalog :)

  3. Get ready to validate your first table. To do so, define

  • dq_rule_json_path as a path to a JSON file, formatted in this way. A detailed description of how to define the JSON can be found here
  • df as a Spark dataframe containing the table that needs to be validated (e.g. via spark.read.csv or spark.read.table)
  • spark as a SparkSession object (in Databricks notebooks, this is by default called spark)
  • catalog_name as the name of the catalog where the output of dq-suite will be stored ('dpd1_dev' or 'dpd1_prd')
  • table_name as the name of the table for which a data quality check is required. This name should also occur in the JSON file at dq_rule_json_path
  4. Finally, perform the validation by running (note: the library is imported as dq_suite, not as dq_suite_amsterdam!)
from dq_suite.validation import run_validation

run_validation(
    json_path=dq_rule_json_path,
    df=df, 
    spark_session=spark,
    catalog_name=catalog_name,
    table_name=table_name,
)

Note: run_validation now returns a tuple (validation_result, highest_severity_level):

validation_result → Boolean flag indicating overall success (True if all checks pass, False otherwise).

highest_severity_level → String indicating the highest severity among failed checks (one of 'fatal', 'error', 'warning', or 'ok').

See the documentation of dq_suite.validation.run_validation for what other parameters can be passed.
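As a sketch of how a caller might consume that tuple, the snippet below maps the four severity levels to a pipeline decision. The ordering of the levels and the should_fail_pipeline helper are assumptions for illustration, not part of the dq_suite API:

```python
# Hypothetical sketch: acting on run_validation's return value.
# The (bool, severity string) contract is from the text above; the
# severity ordering and helper name are assumptions, not dq_suite API.

SEVERITY_ORDER = ["ok", "warning", "error", "fatal"]  # least to most severe

def should_fail_pipeline(validation_result: bool,
                         highest_severity_level: str,
                         threshold: str = "error") -> bool:
    """Return True if this result should stop the pipeline."""
    if validation_result:  # all checks passed
        return False
    return (SEVERITY_ORDER.index(highest_severity_level)
            >= SEVERITY_ORDER.index(threshold))

# A run with only failed warnings does not stop the pipeline:
print(should_fail_pipeline(False, "warning"))  # False
print(should_fail_pipeline(False, "fatal"))    # True
```

The threshold parameter lets a dev workflow tolerate warnings while a production workflow stops on anything at or above "error".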

Geo Validation

Geo validation enables geometric checks using Databricks ST geospatial functions. It is fully integrated into the existing validation flow, allowing generic and geo rules to be applied together on the same table.

Geo validation can be used to validate, among others:

  • Whether geometry values are present and non-empty
  • Whether geometries are structurally valid (e.g. no invalid polygons)
  • Whether geometry values are of a specific geometry type (e.g. POINT, POLYGON)
  1. Databricks Runtime 17.1 or above must be used on your Databricks cluster, as ST geospatial functions are only fully supported from this version onwards. For more details, see https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-st-geospatial-functions

  2. When defining rules in Getting started → Step 3, you can enable geo validation by adding the parameter "rule_type": "geo" inside your JSON. An example is here

  3. Results of geo validation will be written into the same data_quality schema as generic validation. If a table includes both generic and geo rules, all results will be combined in the output tables.
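Conceptually, the checks listed above behave like the following plain-Python sketch over WKT strings. The helper names are hypothetical, and in practice the checks run via Databricks ST geospatial functions (e.g. st_isvalid, st_geometrytype) rather than string inspection:

```python
from typing import Optional

# Illustrative only: real geo validation uses Databricks ST functions,
# not string inspection of WKT values.

def is_non_empty_geometry(wkt: Optional[str]) -> bool:
    """Geometry value is present and not an EMPTY geometry."""
    return bool(wkt) and not wkt.strip().upper().endswith("EMPTY")

def has_geometry_type(wkt: str, expected: str) -> bool:
    """WKT value is of the expected geometry type (e.g. POINT, POLYGON)."""
    return wkt.strip().upper().startswith(expected.upper())

print(is_non_empty_geometry("POINT (4.89 52.37)"))       # True
print(is_non_empty_geometry("POLYGON EMPTY"))            # False
print(has_geometry_type("POINT (4.89 52.37)", "POINT"))  # True
```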

Profiling

Profiling is the process of analyzing a dataset to understand its structure, patterns, and data quality characteristics (such as completeness, uniqueness, or value distributions).

The profiling functionality in dq_suite generates profiling results and automatically produces a rules.json file, which can be used as input for validation, making it easier to gain insights and validate data quality.
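As an illustration of the kind of per-column metrics profiling derives, here is a minimal plain-Python sketch; profile_column is a hypothetical helper, not part of dq_suite, which works on a Pandas dataframe and writes results to the profiling tables instead:

```python
# Hypothetical sketch of per-column profiling metrics
# (completeness and uniqueness), as described in the text above.

def profile_column(values):
    """Return completeness and uniqueness ratios for a column's values."""
    non_null = [v for v in values if v is not None]
    completeness = len(non_null) / len(values) if values else 0.0
    uniqueness = len(set(non_null)) / len(non_null) if non_null else 0.0
    return {"completeness": completeness, "uniqueness": uniqueness}

print(profile_column(["a", "b", "b", None]))
# completeness 0.75 (3 of 4 values present), uniqueness 2/3
```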

  1. Run the following command:
pip install dq-suite-amsterdam
  2. Create the data_quality schema (and the profiling tables that store profiling results) by running the SQL notebook located here. All it needs is the name of the catalog and the rights to create a schema within that catalog. The catalog name allows flexible usage across environments (e.g. dev, test, prod). This step will create the required profiling tables, including:
  • profilingtabel (table-level profiling results)
  • profilingattribuut (attribute-level profiling results)
  3. Get ready to profile your first table. To do so, define
  • df as a Pandas dataframe containing the table that needs to be profiled (e.g. via pd.read_csv)
  • generate_rules as a Boolean to generate dq_rule_json. Set to False if you only want profiling without rule generation
  • spark as a SparkSession object (in Databricks notebooks, this is by default called spark)
  • dq_rule_json_path as a path to a JSON file, which will be formatted in this way after running the profiling function
  • dataset_name as the name of the input dataset used for profiling (e.g. 'dpxx_dev' or 'dpxx_prd'). This name will be placed in the JSON file at dq_rule_json_path
  • table_name as the name of the table for which a data quality check is required. This name will be placed in the JSON file at dq_rule_json_path
  • output_catalog_name as the name of the catalog where the profiling outputs will be stored ('dpd1_dev' or 'dpd1_prd')
  4. Finally, perform the profiling by running
from dq_suite.profile.profile import profile_and_create_rules

profile_and_create_rules(
    df=df,
    output_catalog_name=output_catalog_name,
    dataset_name=dataset_name,
    table_name=table_name,
    spark_session=spark,
    generate_rules=True,
    rule_path=dq_rule_json_path
)

Result of profiling

Profiling results are created in an HTML view. The rules.json file is created at the specified path (if generate_rules=True). This file can be edited to refine the rules according to your data validation needs. The JSON rule file can then be used as input for dq_suite validation. Profiling tables are created at the table level and include the attributes of each table. Geographic rules, as described in the Geo Validation section, are automatically generated for geometry columns.

For further documentation, see:

Known exceptions / issues

  • The functions can run on Databricks using a Personal Compute Cluster or using a Job Cluster. Using a Shared Compute Cluster will result in an error, as it does not have the permissions that Great Expectations requires.

  • Since this project requires Python >= 3.10, Databricks Runtime (DBR) >= 13.3 is needed. Older versions of DBR will result in errors upon installation of the dq-suite-amsterdam library.

  • At the time of writing (late August 2024), Great Expectations v1.0.0 has just been released and is not (yet) compatible with Python 3.12. Hence, make sure you are using the correct version of Python as the interpreter for your project.

  • The run_time value is defined separately from Great Expectations in validation.py. We plan on fixing this when Great Expectations has documented how to access it from the RunIdentifier object.

  • Profiling rules / rule condition logic: current profiling-based rule conditions are placeholders and should be defined and validated by the data teams to ensure they are generic and reusable.

  • When using Great Expectations with ResultFormat.COMPLETE, the unexpected_list is limited to a maximum of 200 values per expectation. This is a limitation imposed by Great Expectations.
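In effect, the cap behaves like the following sketch (the constant name is illustrative):

```python
# However many rows fail an expectation, Great Expectations returns at
# most 200 unexpected values when using ResultFormat.COMPLETE.
MAX_UNEXPECTED_VALUES = 200  # limit imposed by Great Expectations

failing_values = list(range(1000))  # e.g. 1000 rows fail the expectation
unexpected_list = failing_values[:MAX_UNEXPECTED_VALUES]
print(len(unexpected_list))  # 200
```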
