Wrapper for Great Expectations to fit the requirements of the Gemeente Amsterdam.

Project description

About dq-suite-amsterdam

This repository aims to be an easy-to-use wrapper for the data quality library Great Expectations (GX). All that is needed to get started is an in-memory Spark dataframe and a set of data quality rules - specified in a JSON file of particular formatting.

By default, all validation results are written to the data_quality schema in Unity Catalog (UC), which has to be created once per catalog via this notebook. Alternatively, writing to UC can be disabled. Additionally, users can choose to get notified via Slack or Microsoft Teams.

DISCLAIMER: The package is in MVP phase, so watch your step.

How to contribute

Want to help out? Great! Feel free to create a pull request addressing one of the open issues. Some notes for developers are located here.

Found a bug, or need a new feature? Add a new issue describing what you need.

Getting started

Following GX, we recommend installing dq-suite-amsterdam in a virtual environment. This could be either locally via your IDE, on your compute via a notebook in Databricks, or as part of a workflow.

  1. Run the following command:
pip install dq-suite-amsterdam
  2. Create the data_quality schema (and the tables all results will be written to) by running the SQL notebook located here. All it needs is the name of the catalog - and the rights to create a schema within that catalog :)

  3. Get ready to validate your first table. To do so, define

  • dq_rule_json_path as a path to a JSON file, formatted in this way
  • df as a Spark dataframe containing the table that needs to be validated (e.g. via spark.read.csv or spark.read.table)
  • spark as a SparkSession object (in Databricks notebooks, this is by default called spark)
  • catalog_name as the name of your catalog ('dpxx_dev' or 'dpxx_prd')
  • table_name as the name of the table for which a data quality check is required. This name should also occur in the JSON file at dq_rule_json_path
  4. Finally, perform the validation by running (note: the library is imported as dq_suite, not as dq_suite_amsterdam!)
from dq_suite.validation import run_validation

run_validation(
    json_path=dq_rule_json_path,
    df=df, 
    spark_session=spark,
    catalog_name=catalog_name,
    table_name=table_name,
)

Note: run_validation now returns a tuple (validation_result, highest_severity_level):

validation_result → Boolean flag indicating overall success (True if all checks pass, False otherwise).

highest_severity_level → String indicating the highest severity among failed checks (one of 'fatal', 'error', 'warning', or 'ok').
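A minimal sketch of acting on this return value in a pipeline step (the severity ordering below is an assumption derived from the four documented values, not something dq_suite exports):

```python
# Hypothetical gate on run_validation's output; SEVERITY_ORDER is an
# assumption based on the documented values, not exported by dq_suite.
SEVERITY_ORDER = ["ok", "warning", "error", "fatal"]

def should_abort(highest_severity_level: str, threshold: str = "error") -> bool:
    """True if the highest severity among failed checks meets the threshold."""
    return SEVERITY_ORDER.index(highest_severity_level) >= SEVERITY_ORDER.index(threshold)

# In a real run these would come from run_validation(...); stand-in values here:
validation_result, highest_severity_level = False, "warning"
if not validation_result and should_abort(highest_severity_level):
    raise RuntimeError(f"Data quality checks failed at level '{highest_severity_level}'")
```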

See the documentation of dq_suite.validation.run_validation for what other parameters can be passed.
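For orientation, the rules file pairs table names with GX expectation names and their parameters. The snippet below writes a hypothetical minimal example; the field names are assumptions for illustration, and the linked format description is authoritative:

```python
import json

# Hypothetical minimal rules file; field names are assumptions for
# illustration only (see the linked JSON format for the real schema).
rules = {
    "tables": [
        {
            # Must match the table_name passed to run_validation
            "table_name": "my_table",
            "rules": [
                {
                    # A Great Expectations expectation, plus its parameters
                    "rule_name": "ExpectColumnValuesToNotBeNull",
                    "parameters": [{"column": "id"}],
                }
            ],
        }
    ]
}

with open("dq_rules.json", "w") as f:
    json.dump(rules, f, indent=2)
```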

Geo Validation

Geo validation enables geometric checks using Databricks ST geospatial functions. It is fully integrated into the existing validation flow, allowing generic and geo rules to be applied together on the same table.

Geo validation can be used to validate, among others:

  • Whether geometry values are present and non-empty
  • Whether geometries are structurally valid (e.g. no invalid polygons)
  • Whether geometry values are of a specific geometry type (e.g. POINT, POLYGON)
  1. Databricks Runtime 17.1 or above must be used on your Databricks cluster, as ST geospatial functions are only fully supported from this version onwards. For more details, see https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-st-geospatial-functions

  2. When defining rules in Getting started → Step 3, you can enable geo validation by adding the parameter "rule_type": "geo" inside your JSON. An example is here.

  3. Results of geo validation will be written into the same data_quality schema as generic validation. If a table includes both generic and geo rules, all results will be combined in the output tables.
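Since geo and generic rules share one rules file, a geo rule is just a regular rule entry carrying the extra "rule_type" field. A hypothetical sketch (the rule and field names are illustrative, not verified against dq_suite; the linked example is authoritative):

```python
# Hypothetical rule entries; rule names are illustrative, not verified
# against dq_suite. Only "rule_type": "geo" marks the geo validation path.
generic_rule = {
    "rule_name": "ExpectColumnValuesToNotBeNull",
    "parameters": [{"column": "id"}],
}
geo_rule = {
    "rule_name": "ExpectColumnValuesToBeValidGeometry",  # illustrative name
    "rule_type": "geo",
    "parameters": [{"column": "geometry"}],
}

# Both kinds can be applied together on the same table.
rules_for_table = [generic_rule, geo_rule]
```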

Profiling

Profiling is the process of analyzing a dataset to understand its structure, patterns, and data quality characteristics (such as completeness, uniqueness, or value distributions).

The profiling functionality in dq_suite generates profiling results and automatically produces a rules.json file, which can be used as input for the validation—making it easier to gain insights and validate data quality.
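To make those terms concrete, completeness can be read as the fraction of non-null values in a column and uniqueness as the fraction of distinct values. A self-contained illustration of the idea (not dq_suite's actual implementation):

```python
def completeness(values):
    """Fraction of values that are not None (missing)."""
    return sum(v is not None for v in values) / len(values)

def uniqueness(values):
    """Fraction of distinct non-null values relative to all values."""
    return len({v for v in values if v is not None}) / len(values)

column = ["a", "b", "b", None]
print(completeness(column))  # 0.75: one of four values is missing
print(uniqueness(column))    # 0.5: two distinct values out of four
```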

  1. Run the following command:
pip install dq-suite-amsterdam
  2. Create the data_quality schema (and the profiling tables that store profiling results) by running the SQL notebook located here. All it needs is the name of the catalog and the rights to create a schema within that catalog. The catalog allows flexible usage across environments (e.g. dev, test, prod). This step will create the required profiling tables, including:
  • profilingtabel (table-level profiling results)
  • profilingattribuut (attribute-level profiling results)
  3. Get ready to profile your first table. To do so, define
  • df as a Pandas dataframe containing the table that needs to be profiled (e.g. via pd.read_csv)
  • generate_rules as a Boolean to generate dq_rule_json. Set to False if you only want profiling without rule generation
  • spark as a SparkSession object (in Databricks notebooks, this is by default called spark)
  • dq_rule_json_path as a path to a JSON file, which will be formatted in this way after running the profiling function
  • dataset_name as the name of the dataset the table belongs to. This name will be placed in the JSON file at dq_rule_json_path
  • table_name as the name of the table for which a data quality check is required. This name will be placed in the JSON file at dq_rule_json_path
  • catalog_name as the name of your catalog ('dpxx_dev' or 'dpxx_prd')
  4. Finally, perform the profiling by running
from dq_suite.profile.profile import profile_and_create_rules

profile_and_create_rules(
    df=df,
    dataset_name=dataset_name,
    table_name=table_name,
    catalog_name=catalog_name,
    spark_session=spark,
    generate_rules=True,
    rule_path=dq_rule_json_path
)

Result of profiling

Profiling results are created in an HTML view. The rules.json file is created at the specified path (if generate_rules=True). This file can be edited to refine the rules according to your data validation needs, and can then be used as input for dq_suite validation. Profiling tables are created at the table level and include the attributes of each table. Geographic rules, as described in the Geo Validation section, are automatically generated for geometry columns.
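Refining the generated rules can be done in any editor, or programmatically. A sketch assuming a top-level "tables" list with per-table "rules" entries (the actual generated structure may differ):

```python
import json

# Stand-in for a generated rules.json; the structure is an assumption.
generated = {
    "tables": [
        {
            "table_name": "my_table",
            "rules": [
                {"rule_name": "ExpectColumnValuesToNotBeNull", "parameters": [{"column": "id"}]},
                {"rule_name": "ExpectColumnValuesToBeUnique", "parameters": [{"column": "id"}]},
            ],
        }
    ]
}
with open("rules.json", "w") as f:
    json.dump(generated, f)

# Drop a rule that does not apply to this table, then write the file back.
with open("rules.json") as f:
    rules = json.load(f)
rules["tables"][0]["rules"] = [
    r for r in rules["tables"][0]["rules"]
    if r["rule_name"] != "ExpectColumnValuesToBeUnique"
]
with open("rules.json", "w") as f:
    json.dump(rules, f, indent=2)
```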

Known exceptions / issues

  • The functions can run on Databricks using a Personal Compute Cluster or using a Job Cluster. Using a Shared Compute Cluster will result in an error, as it does not have the permissions that Great Expectations requires.

  • Since this project requires Python >= 3.10, the use of Databricks Runtime (DBR) >= 13.3 is needed (click). Older versions of DBR will result in errors upon install of the dq-suite-amsterdam library.

  • At time of writing (late Aug 2024), Great Expectations v1.0.0 has just been released, and is not (yet) compatible with Python 3.12. Hence, make sure you are using the correct version of Python as interpreter for your project.

  • The run_time value is defined separately from Great Expectations in validation.py. We plan on fixing this when Great Expectations has documented how to access it from the RunIdentifier object.

  • Profiling rules / rule condition logic: the current profiling-based rule conditions are placeholders and should be defined and validated by the data teams to ensure they are generic and reusable.

  • When using Great Expectations with ResultFormat.COMPLETE, the unexpected_list is limited to a maximum of 200 values per expectation. This is a limitation imposed by Great Expectations.
