
sqltest: easy testing of ETL SQLs


The sqltest framework makes it easy to write test cases for complicated ETL processing logic. All you need to do is prepare your source and target datasets in CSV or Excel format, along with your ETL SQL files.

Installing

Install and update using pip:

$ pip install sqltest
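
To upgrade an existing installation:

$ pip install -U sqltest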

A Simple Example

  1. Prepare your ETL SQL file, for example spark_etl_demo.sql.
  2. Prepare your source and target datasets; refer to Dataset Preparation for more detail.
  3. Write your test cases following the examples below.
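    # Note: these methods are assumed to live inside a test class (hence `self`).
    # SPARK is assumed to be an existing SparkSession and PROJECT_PATH the
    # project root. Import paths are assumptions about the package layout, e.g.:
    #   from sqltest import (DatasetReader, ExcelDatasetReader, SqlEngine,
    #                        SparkEngine, excel_reader, spark_engine)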
    def test_excel_data_source_demo(self):
        environments = {
            "env": "dev",
            "target_data_path": f"{PROJECT_PATH}/tests/data/tables",
        }

        reader = ExcelDatasetReader(
            data_path=f"{PROJECT_PATH}/tests/data/cases/spark_etl_sql_test_excel_demo/spark_etl_demo.xlsx"
        )
        sql_file_path = f"{PROJECT_PATH}/tests/data/cases/spark_etl_sql_test_excel_demo/spark_etl_demo.sql"

        engine = SparkEngine(SPARK, environments)
        engine.run(reader, sql_file_path)
        engine.verify_target_dataset()

    @excel_reader(
        data_path=f"{PROJECT_PATH}/tests/data/cases/spark_etl_sql_test_excel_demo/spark_etl_demo.xlsx"
    )
    @spark_engine(
        spark=SPARK,
        sql_path=f"{PROJECT_PATH}/tests/data/cases/spark_etl_sql_test_excel_demo/spark_etl_demo.sql",
        env={"env": "dev", "target_data_path": f"{PROJECT_PATH}/tests/data/tables"},
    )
    def test_excel_with_decorate(self, reader: DatasetReader, engine: SqlEngine):
        engine.verify_target_dataset()

    @spark_engine(
        spark=SPARK,
        sql_path=f"{PROJECT_PATH}/tests/data/cases/spark_etl_sql_test_excel_demo/spark_etl_demo.sql",
        reader=ExcelDatasetReader(
            f"{PROJECT_PATH}/tests/data/cases/spark_etl_sql_test_excel_demo/spark_etl_demo.xlsx"
        ),
        env={"env": "dev", "target_data_path": f"{PROJECT_PATH}/tests/data/tables"},
    )
    def test_excel_with_engine_decorate(self, engine: SqlEngine):
        engine.verify_target_dataset()
  4. Run your test cases.
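
For example, assuming your tests live under tests/ and you use pytest (an assumption; any unittest-compatible runner works):

$ pytest tests/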

Dataset Preparation

Currently, two kinds of dataset readers are supported, and each requires the source and target data to follow a specific layout.

CSV Dataset Reader

  1. There should be a source and a target folder under your dataset folder; see spark_etl_sql_test_csv_demo for a full example.
  2. Under source or target, create one CSV file per dataset referenced in your ETL SQL file. Each CSV file stands for a table, and the file name is used as the table name, so double-check that every file name matches the corresponding table name in the SQL file.
  3. Read the dataset. There are two usage scenarios:
  • Creating a reader object with CsvDatasetReader(data_path="{dataset_folder}")
  • Using the @csv_reader decorator on a test function:
@csv_reader(data_path="{dataset_folder}")
def test_case(reader: DatasetReader):
    pass
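
For illustration, a dataset folder might be laid out as below (the user and user_stats table names are hypothetical):

    {dataset_folder}/
    ├── source/
    │   └── user.csv          # loaded as source table "user"
    └── target/
        └── user_stats.csv    # expected rows of target table "user_stats"

A minimal end-to-end sketch combining this reader with the Spark engine, mirroring the Excel example above (PROJECT_PATH, SPARK, and the concrete paths are assumptions):

    def test_csv_data_source_demo(self):
        environments = {
            "env": "dev",
            "target_data_path": f"{PROJECT_PATH}/tests/data/tables",
        }
        # The reader takes the dataset folder that contains source/ and target/.
        reader = CsvDatasetReader(
            data_path=f"{PROJECT_PATH}/tests/data/cases/spark_etl_sql_test_csv_demo"
        )
        engine = SparkEngine(SPARK, environments)
        engine.run(
            reader,
            f"{PROJECT_PATH}/tests/data/cases/spark_etl_sql_test_csv_demo/spark_etl_demo.sql",
        )
        engine.verify_target_dataset()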

Excel Dataset Reader

  1. Unlike the CSV Dataset Reader, there is only one Excel file, which includes both the source and target datasets.
  2. Within the Excel file, each sheet stands for a table: a sheet whose name starts with source-- holds a source dataset/table, and one starting with target-- holds a target dataset/table. Unlike with CSV files, the sheet name is not used as the table name, because Excel limits the length of sheet names; instead, the table name is stored in the first row and first column. See spark_etl_demo.xlsx for more detail.
  3. There are also two usage scenarios for reading data:
  • Creating a reader object with ExcelDatasetReader(data_path="{excel_file_path}")
  • Using the @excel_reader decorator on a test function:
@excel_reader(data_path="{excel_file_path}")
def test_case(reader: DatasetReader):
    pass
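
For illustration, a sheet named source--user (the user table name is hypothetical) could be laid out as below, with the table name stored in the first row and first column; the exact position of the header row is an assumption based on the description above:

    user                <- first cell holds the table name
    id      name        <- column headers (assumed)
    1       Alice
    2       Bob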

SQL Engine

Currently, only the Spark engine is supported; we plan to support other SQL engines, e.g. Flink.

Bugs/Requests

Please use the GitHub issue tracker to submit bugs or request features.

