
macrometa-target-bigquery

Macrometa target connector that loads data into Google BigQuery following the Singer spec.

How to use it

If you want to run this target connector independently, read on.

Install

First, make sure Python 3 is installed on your system or follow these installation instructions for Mac or Ubuntu.

It's recommended to use a virtualenv:

make venv
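
If you prefer not to use the Makefile, an equivalent manual setup is sketched below; it assumes the connector is installed from PyPI under the name macrometa-target-bigquery:

python3 -m venv venv
. venv/bin/activate
pip install --upgrade pip
pip install macrometa-target-bigquery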

To run

Like any other target connector that follows the Singer specification:

some-singer-source(tap) | macrometa-target-bigquery --config [config.json]

It reads incoming messages from STDIN and uses the properties in config.json to upload data into BigQuery.

Note: To avoid version conflicts, run the source and the target in separate virtual environments.
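
For example, with a source tap and this target installed in separate virtual environments, a pipeline could look like the sketch below; the tap name and virtualenv paths are placeholders, not part of this project:

~/.virtualenvs/some-singer-source/bin/some-singer-source --config tap_config.json | ~/.virtualenvs/macrometa-target-bigquery/bin/macrometa-target-bigquery --config config.json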

Configuration settings

Running the target connector requires a config.json file. An example with the minimal settings:

{
  "project_id": "mygbqproject"
}

Full list of options in config.json:

project_id (String, required): BigQuery project.

location (String, optional): Region where BigQuery stores your dataset.

default_target_schema (String, optional): Name of the schema where the tables will be created. If schema_mapping is not defined, every stream sent by the tap is loaded into this schema.

default_target_schema_select_permission (String, optional): Grant USAGE privilege on newly created schemas and grant SELECT privilege on newly created tables.

schema_mapping (Object, optional): Useful if you want to load multiple streams from one source into multiple BigQuery schemas. If the source sends the stream_id in <schema_name>-<table_name> format, this option overrides the default_target_schema value. Note that using schema_mapping you can override the default_target_schema_select_permission value to grant SELECT permissions to different groups per schema, or optionally create indices automatically for the replicated tables.

batch_size_rows (Integer, default: 100000): Maximum number of rows in each batch. At the end of each batch, the rows in the batch are loaded into BigQuery.

batch_wait_limit_seconds (Integer, default: None): Maximum time to wait for a batch to reach batch_size_rows.

flush_all_streams (Boolean, default: False): Flush and load every stream into BigQuery when one batch is full. Warning: This may trigger transfers of data with a low number of records and may cause performance problems.

parallelism (Integer, default: 0): The number of threads used to flush tables. 0 creates a thread for each stream, up to max_parallelism. -1 creates a thread for each CPU core. Any other positive number creates that number of threads, up to max_parallelism.

max_parallelism (Integer, default: 16): Maximum number of parallel threads to use when flushing tables.

add_metadata_columns (Boolean, default: False): Metadata columns add extra row-level information about data ingestion, e.g. when the row was read from the source, or when it was inserted or deleted in BigQuery. Metadata columns are created automatically by adding extra columns to the tables with the column prefix _sdc_. The column names follow the Stitch naming conventions documented at https://www.stitchdata.com/docs/data-structure/integration-schemas#sdc-columns. Enabling metadata columns will flag deleted rows by setting the _sdc_deleted_at metadata column. Without the add_metadata_columns option, rows deleted in the source are not recognisable in BigQuery.

hard_delete (Boolean, default: False): When the hard_delete option is true, DELETE SQL commands are run in BigQuery to delete rows in tables. This is achieved by continuously checking the _sdc_deleted_at metadata column sent by the source. Because deleting rows requires metadata columns, the hard_delete option automatically enables the add_metadata_columns option as well.

hard_delete_mapping (Object, optional): Useful if you want to set hard_delete for some streams but not others. This should contain a mapping of stream_id: <Boolean>. This boolean overrides the default behaviour set with hard_delete for that stream. If a stream is not defined in hard_delete_mapping, it behaves according to hard_delete. See the example configuration after this list.

data_flattening_max_level (Integer, default: 0): Object-type RECORD items from sources can be loaded as JSON (default) or the schema can be flattened by creating columns automatically. When the value is 0 (default), flattening is turned off.

primary_key_required (Boolean, default: True): Log-based and incremental replication on tables with no primary key causes duplicates when merging UPDATE events. When set to true, loading stops if no primary key is defined.

validate_records (Boolean, default: False): Validate every single RECORD message against the corresponding JSON schema. This option is disabled by default, and invalid RECORD messages fail only at load time in BigQuery. Enabling this option detects invalid records earlier but can degrade performance.

temp_schema (String, optional): Name of the schema where the temporary tables will be created. Defaults to the same schema as the target tables.

use_partition_pruning (Boolean, default: False): If true, BigQuery table partition pruning is used for tables that have partitioning enabled. The partitioning should be on an immutable column, such as an integer primary key or a created_at column, and must be set up manually by the user. This feature can dramatically reduce the cost of each MERGE for large tables.
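
As a sketch of how several of these options combine, here is a hypothetical config.json; the project, location, dataset and stream names are placeholders, and only options from the list above are used:

{
  "project_id": "mygbqproject",
  "location": "US",
  "default_target_schema": "analytics",
  "batch_size_rows": 50000,
  "batch_wait_limit_seconds": 300,
  "add_metadata_columns": true,
  "hard_delete": false,
  "hard_delete_mapping": {
    "public-orders": true
  },
  "primary_key_required": true,
  "use_partition_pruning": true
}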

Schema Changes

This Macrometa target connector follows the PipelineWise specification for schema changes, with the exception of versioning columns, because of the way BigQuery works.

BigQuery does not allow column renames, so a column modification works like this instead:

Versioning columns

Target connectors version columns when a data type change is detected in the source table. Versioning columns means that the old column with the old data type is kept, and a new column with the new data type is added to the table. The new column's name is the original name plus a suffix that depends on the new type (and also a timestamp for structs and arrays).

For example, if the data type of COLUMN_THREE changes from INTEGER to VARCHAR, PipelineWise will replicate data in this order:

  1. Before changing the data type, COLUMN_THREE is INTEGER, just like in the source table:

COLUMN_ONE | COLUMN_TWO | COLUMN_THREE (INTEGER)
text       | text       | 1
text       | text       | 2
text       | text       | 3
  2. After the data type change, COLUMN_THREE remains INTEGER with the old data, and a new COLUMN_THREE__st column is created with STRING type that keeps data only after the change:

COLUMN_ONE | COLUMN_TWO | COLUMN_THREE (INTEGER) | COLUMN_THREE__st (VARCHAR)
text       | text       | 111                    |
text       | text       | 222                    |
text       | text       | 333                    |
text       | text       |                        | 444-ABC
text       | text       |                        | 555-DEF

Warning: Please note the NULL values in the COLUMN_THREE and COLUMN_THREE__st columns. Historical values are not converted to the new data types! If you need the actual representation of the table after data type changes then you need to resync the table.

Column clustering

This target connector tries to speed up the querying of the resulting tables by clustering the columns in each table by the primary key of the stream.

The choice and ordering of the clustering keys are defined in the same order as the key_properties columns in the stream's SCHEMA messages.

BigQuery places a limit on the number of clustering keys (4 as of 2022-08-02), so if the number of clustering keys is greater than 4, this target will simply use the first 4 columns defined in the key_properties property.
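
For illustration, given a hypothetical SCHEMA message like the one below (the stream and column names are made up), the resulting table would be clustered on customer_id and then created_at, in that order:

{
  "type": "SCHEMA",
  "stream": "public-orders",
  "key_properties": ["customer_id", "created_at"],
  "schema": {
    "type": "object",
    "properties": {
      "customer_id": {"type": ["integer"]},
      "created_at": {"type": ["string"], "format": "date-time"},
      "amount": {"type": ["null", "number"]}
    }
  }
}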

To run tests:

  1. Define the environment variables required to run the tests:
  export GOOGLE_APPLICATION_CREDENTIALS=<credentials-json-file>
  export MACROMETA_TARGET_BIGQUERY_PROJECT=<bigquery project to run your tests on>
  export MACROMETA_TARGET_BIGQUERY_SCHEMA=<temporary schema for running the tests>
  2. Install Python dependencies in a virtual env and run the nose unit and integration tests:
make venv
  3. To run unit tests:
make unit_test
  4. To run integration tests:
make integration_test

To run pylint:

  1. Install Python dependencies and run the Python linter:
make venv pylint

License

Apache License Version 2.0

See LICENSE to see the full text.
