
Wrapper for Great Expectations to fit the requirements of the Gemeente Amsterdam.

Project description

About dq-suite-amsterdam

This repository aims to be an easy-to-use wrapper for the data quality library Great Expectations (GX). All that is needed to get started is an in-memory Spark dataframe and a set of data quality rules, specified in a JSON file with a particular format.

By default, all validation results are written to Unity Catalog (UC), specifically to a data_quality schema which has to be created once per catalog via this notebook. Alternatively, one can disable writing to UC altogether. Additionally, users can choose to get notified via Slack or Microsoft Teams.

DISCLAIMER: The package is in MVP phase, so watch your step.

How to contribute

Want to help out? Great! Feel free to create a pull request addressing one of the open issues. Some notes for developers are located here.

Found a bug, or need a new feature? Add a new issue describing what you need.

Getting started

Following GX, we recommend installing dq-suite-amsterdam in a virtual environment. This could be done locally via your IDE, on your compute via a notebook in Databricks, or as part of a workflow.

  1. Run the following command:
pip install dq-suite-amsterdam
  2. Create the data_quality schema (and the tables all results will be written to) by running the SQL notebook located here. All it needs is the name of the catalog - and the rights to create a schema within that catalog :)

  3. Get ready to validate your first table. To do so, define

  • dq_rule_json_path as a path to a JSON file, formatted in this way
  • df as a Spark dataframe containing the table that needs to be validated (e.g. via spark.read.csv or spark.read.table)
  • spark as a SparkSession object (in Databricks notebooks, this is by default called spark)
  • catalog_name as the name of your catalog ('dpxx_dev' or 'dpxx_prd')
  • table_name as the name of the table for which a data quality check is required. This name should also occur in the JSON file at dq_rule_json_path
  4. Finally, perform the validation by running (note: the library is imported as dq_suite, not as dq_suite_amsterdam!)
from dq_suite.validation import run_validation

run_validation(
    json_path=dq_rule_json_path,
    df=df, 
    spark_session=spark,
    catalog_name=catalog_name,
    table_name=table_name,
)

Note: run_validation now returns a tuple as (validation_result, highest_severity_level):

  • validation_result → Boolean flag indicating overall success (True if all checks pass, False otherwise).
  • highest_severity_level → String indicating the highest severity among failed checks (one of 'fatal', 'error', 'warning', or 'ok').
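The returned tuple can be used to drive pipeline control flow. Below is a minimal sketch of one way to act on it; the severity ordering and the abort-threshold policy are assumptions made for this example, not part of the library:

```python
# Severity names as documented for run_validation; the numeric ordering
# below is an assumption made for this sketch.
SEVERITY_ORDER = {"ok": 0, "warning": 1, "error": 2, "fatal": 3}

def should_abort(validation_result: bool,
                 highest_severity_level: str,
                 threshold: str = "error") -> bool:
    """Return True when validation failed at or above the given severity."""
    if validation_result:  # all checks passed, nothing to do
        return False
    return SEVERITY_ORDER[highest_severity_level] >= SEVERITY_ORDER[threshold]

# Example: a run that failed with only warnings does not abort by default.
print(should_abort(False, "warning"))  # False
print(should_abort(False, "fatal"))    # True
```

In a Databricks workflow, a helper like this could gate whether downstream tasks are allowed to run.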

See the documentation of dq_suite.validation.run_validation for what other parameters can be passed.
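The rule file passed via json_path follows the format of the linked example. As a purely hypothetical sketch of the general shape such a file takes (all field names and values below are illustrative, not the authoritative schema):

```json
{
  "dataset": {
    "name": "my_dataset",
    "layer": "bronze"
  },
  "tables": [
    {
      "table_name": "my_table",
      "rules": [
        {
          "rule_name": "ExpectColumnValuesToNotBeNull",
          "parameters": [{"column": "id"}]
        }
      ]
    }
  ]
}
```

Rule names mirror GX expectation names; consult the linked format description for the fields the library actually expects.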

Geo Validation

Geo validation enables geometric checks using Databricks ST geospatial functions. It is fully integrated into the existing validation flow, allowing generic and geo rules to be applied together on the same table.

Geo validation can be used to validate, among others:

  • Whether geometry values are present and non-empty
  • Whether geometries are structurally valid (e.g. no invalid polygons)
  • Whether geometry values are of a specific geometry type (e.g. POINT, POLYGON)
  1. Your Databricks cluster must run Databricks Runtime 17.1 or above, as ST geospatial functions are only fully supported from this version onwards. For more details, see https://learn.microsoft.com/en-us/azure/databricks/sql/language-manual/sql-ref-st-geospatial-functions

  2. When defining rules in Getting started → Step 3, you can enable geo validation by adding the parameter "rule_type": "geo" inside your JSON. An example is available here

  3. Results of geo validation will be written into the same data_quality schema as generic validation. If a table includes both generic and geo rules, all results will be combined in the output tables.
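As a hypothetical illustration of step 2, a geo rule entry could look as follows (only "rule_type": "geo" comes from the text above; the other field names and values are illustrative, so check the linked example for the authoritative format):

```json
{
  "rule_name": "ExpectGeometryToBeValid",
  "rule_type": "geo",
  "parameters": [{"column": "geometry"}]
}
```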

Profiling

Profiling is the process of analyzing a dataset to understand its structure, patterns, and data quality characteristics (such as completeness, uniqueness, or value distributions).

The profiling functionality in dq_suite generates profiling results and automatically produces a rules.json file, which can be used as input for the validation, making it easier to gain insights and validate data quality.

  1. Run the following command:
pip install dq-suite-amsterdam
  2. Get ready to profile your first table. To do so, define
  • df as a pandas dataframe containing the table that needs to be profiled (e.g. via pd.read_csv)
  • generate_rules as a Boolean indicating whether a rules JSON file should be generated. Set to False if you only want profiling without rule generation
  • spark as a SparkSession object (in Databricks notebooks, this is by default called spark)
  • dq_rule_json_path as the path to a JSON file, which will be formatted in this way after running the profiling function
  • dataset_name as the name of the dataset the table belongs to. This name will be placed in the JSON file at dq_rule_json_path
  • table_name as the name of the table for which a data quality check is required. This name will be placed in the JSON file at dq_rule_json_path
  3. Finally, perform the profiling by running
from dq_suite.profile.profile import profile_and_create_rules

profile_and_create_rules(
    df=df,
    dataset_name=dataset_name,
    table_name=table_name,
    spark_session=spark,
    generate_rules=True,
    rule_path=dq_rule_json_path
)

Result of profiling

Profiling results are rendered as an HTML view. If generate_rules=True, the rules.json file is created at the specified path. This file can be edited to refine the rules according to your data validation needs, after which it can be used as input for dq_suite validation. Profiling tables are created at the table level and include attributes of each table. Geo rules, as described in the Geo Validation section, are automatically generated for geometry columns.
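Since the generated rules.json is meant to be edited before validation, here is a small sketch of such a refinement step. The file layout used here is invented for illustration; the real schema is whatever profile_and_create_rules actually produces:

```python
import json
from pathlib import Path

# Hypothetical rules file, standing in for the output of
# profile_and_create_rules; the schema here is illustrative only.
rules_path = Path("rules.json")
rules_path.write_text(json.dumps({
    "tables": [{
        "table_name": "my_table",
        "rules": [
            {"rule_name": "ExpectColumnValuesToNotBeNull",
             "parameters": [{"column": "id"}]},
            {"rule_name": "ExpectColumnValuesToBeUnique",
             "parameters": [{"column": "id"}]},
        ],
    }]
}))

# Refine the generated rules: drop a rule that does not fit our needs,
# then write the edited file back for use as validation input.
doc = json.loads(rules_path.read_text())
for table in doc["tables"]:
    table["rules"] = [
        r for r in table["rules"]
        if r["rule_name"] != "ExpectColumnValuesToBeUnique"
    ]
rules_path.write_text(json.dumps(doc, indent=2))
print([r["rule_name"] for r in doc["tables"][0]["rules"]])
# ['ExpectColumnValuesToNotBeNull']
```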

For further documentation, see:

Known exceptions / issues

  • The functions can run on Databricks using a Personal Compute Cluster or using a Job Cluster. Using a Shared Compute Cluster will result in an error, as it does not have the permissions that Great Expectations requires.

  • Since this project requires Python >= 3.10, Databricks Runtime (DBR) >= 13.3 is needed. Older versions of DBR will result in errors upon installation of the dq-suite-amsterdam library.

  • At the time of writing (late Aug 2024), Great Expectations v1.0.0 has just been released and is not (yet) compatible with Python 3.12. Hence, make sure you are using the correct version of Python as the interpreter for your project.

  • The run_time value is defined separately from Great Expectations in validation.py. We plan on fixing this when Great Expectations has documented how to access it from the RunIdentifier object.

  • Profiling rules / rule condition logic: the current profiling-based rule conditions are placeholders and should be defined and validated by the data teams to ensure they are generic and reusable.

  • When using Great Expectations with ResultFormat.COMPLETE, the unexpected_list is limited to a maximum of 200 values per expectation. This is a limitation imposed by Great Expectations.

Project details


Release history

Download files

Download the file for your platform.

Source Distribution

dq_suite_amsterdam-0.13.4.tar.gz (49.0 kB)

Uploaded Source

Built Distribution


dq_suite_amsterdam-0.13.4-py3-none-any.whl (38.4 kB)

Uploaded Python 3

File details

Details for the file dq_suite_amsterdam-0.13.4.tar.gz.

File metadata

  • Download URL: dq_suite_amsterdam-0.13.4.tar.gz
  • Upload date:
  • Size: 49.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for dq_suite_amsterdam-0.13.4.tar.gz
Algorithm Hash digest
SHA256 04b6d5825d22fe8ef14d48d9a7eded4b62ce04b0ae237bfb156a1a54480fc8b9
MD5 a7a9ce4afa1efa0cca8d9f2612673c1a
BLAKE2b-256 a07700a9ba1d179a92e7c498ff4d479599b7996cc30638a109cede2b48748bdd


Provenance

The following attestation bundles were made for dq_suite_amsterdam-0.13.4.tar.gz:

Publisher: publish-to-pypi.yml on Amsterdam/dq-suite-amsterdam

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file dq_suite_amsterdam-0.13.4-py3-none-any.whl.

File metadata

File hashes

Hashes for dq_suite_amsterdam-0.13.4-py3-none-any.whl
Algorithm Hash digest
SHA256 3af0648f456e81c7e1b309c8adf33f71f40975a6a87dc69a8999ea1f1d1734a8
MD5 cc87c077b7f2bcced5da5d217b5e1d6b
BLAKE2b-256 0e23c72a03aa16b0b9d085e0471984e536b71e00b50e21acfd41d39a84f90207


Provenance

The following attestation bundles were made for dq_suite_amsterdam-0.13.4-py3-none-any.whl:

Publisher: publish-to-pypi.yml on Amsterdam/dq-suite-amsterdam

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
