google-datacatalog-sqlserver-connector
Library for ingesting SQLServer metadata into Google Cloud Data Catalog. Currently supports SQL Server 2017 Standard.
Disclaimer: This is not an officially supported Google product.
Table of Contents
- 1. Installation
- 2. Environment setup
- 3. Adapt user configurations
- 4. Run entry point
- 5. Scripts inside tools
- 6. Developer environment
- 7. Metrics
- 8. Troubleshooting
1. Installation
Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.
With virtualenv, it's possible to install this library without needing system install permissions, and without clashing with the installed system dependencies. Make sure you use Python 3.6+.
1.1. Mac/Linux
pip3 install virtualenv
virtualenv --python python3.6 <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-datacatalog-sqlserver-connector
1.2. Windows
pip3 install virtualenv
virtualenv --python python3.6 <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-datacatalog-sqlserver-connector
1.3. Install from source
1.3.1. Get the code
git clone https://github.com/GoogleCloudPlatform/datacatalog-connectors-rdbms/
cd datacatalog-connectors-rdbms/google-datacatalog-sqlserver-connector
1.3.2. Create and activate a virtualenv
pip3 install virtualenv
virtualenv --python python3.6 <your-env>
source <your-env>/bin/activate
1.3.3. Install the library
pip install .
2. Environment setup
2.1. Auth credentials
2.1.1. Create a service account and grant it the role below
- Data Catalog Admin
2.1.2. Download a JSON key and save it as
<YOUR-CREDENTIALS_FILES_FOLDER>/sqlserver2dc-credentials.json
Please note that this folder and file will be required in the next steps.
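If you prefer the command line, the service account and key can also be created with the gcloud CLI. This is a minimal sketch: the account name sqlserver2dc-sa and the <YOUR-GCP-PROJECT-ID> placeholder are illustrative values, not names the connector requires.
# Hypothetical service account name; replace <YOUR-GCP-PROJECT-ID> with your project ID
gcloud iam service-accounts create sqlserver2dc-sa --display-name "sqlserver2dc"
# Grant it the Data Catalog Admin role
gcloud projects add-iam-policy-binding <YOUR-GCP-PROJECT-ID> \
  --member "serviceAccount:sqlserver2dc-sa@<YOUR-GCP-PROJECT-ID>.iam.gserviceaccount.com" \
  --role "roles/datacatalog.admin"
# Download the JSON key to the expected location
gcloud iam service-accounts keys create <YOUR-CREDENTIALS_FILES_FOLDER>/sqlserver2dc-credentials.json \
  --iam-account "sqlserver2dc-sa@<YOUR-GCP-PROJECT-ID>.iam.gserviceaccount.com"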
2.2. Set up the SQL Server driver (optional)
This step is needed when you are running the connector on a machine that does not have a SQL Server installation.
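As an illustration only (assuming the connector's SQL Server client relies on the Microsoft ODBC driver), installing the driver on Ubuntu typically looks like the commands below; check Microsoft's documentation for the packages that match your OS and driver version.
# Example for Ubuntu 20.04 only; adjust the repository path for your distribution
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
sudo apt-get update
# msodbcsql17 is the Microsoft ODBC Driver 17 for SQL Server; unixodbc-dev provides the ODBC headers
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql17 unixodbc-dev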
2.3. Set environment variables
Replace the values below according to your environment:
export GOOGLE_APPLICATION_CREDENTIALS=data_catalog_credentials_file
export SQLSERVER2DC_DATACATALOG_PROJECT_ID=google_cloud_project_id
export SQLSERVER2DC_DATACATALOG_LOCATION_ID=google_cloud_location_id
export SQLSERVER2DC_SQLSERVER_SERVER=sqlserver_server
export SQLSERVER2DC_SQLSERVER_USERNAME=sqlserver_username
export SQLSERVER2DC_SQLSERVER_PASSWORD=sqlserver_password
export SQLSERVER2DC_SQLSERVER_DATABASE=sqlserver_database
export SQLSERVER2DC_RAW_METADATA_CSV=sqlserver_raw_csv (if supplied, the SQL Server credentials are ignored)
3. Adapt user configurations
Along with the default metadata, the connector can enrich metadata with user-provided values, such as adding a prefix to each schema and table name.
The table below shows which metadata is scraped by default and which is configurable.
Metadata | Description | Scraped by default | Config option |
---|---|---|---|
schema_name | Name of the Schema | Y | --- |
table_name | Name of a table | Y | --- |
table_type | Type of a table (BASE, VIEW, etc) | Y | --- |
column_name | Name of a column | Y | --- |
column_type | Column data type | Y | --- |
column_default_value | Default value of a column | Y | --- |
column_nullable | Whether a column is nullable | Y | --- |
column_char_length | Char length of values in a column | Y | --- |
column_numeric_precision | Numeric precision of values in a column | Y | --- |
prefix | Prefix to be added to schema and table names | N/A | enrich_metadata.entry_prefix |
entry_id_pattern_for_prefix | Entry ID pattern to which the prefix will be applied | N/A | enrich_metadata.entry_id_pattern_for_prefix |
The prefix should comply with the Data Catalog entryId format: the ID must begin with a letter or underscore, contain only English letters, numbers and underscores, and have at most 64 characters (the prefix and the entryId combined).
If entry_id_pattern_for_prefix is supplied, the prefix will only be applied to entry IDs matching that pattern.
The sample configuration file ingest_cfg.yaml in the repository root shows what kind of configuration is expected.
To enable the user-defined config, add an ingest_cfg.yaml file to the directory from which you execute the connector and adapt it to your needs, as in the sketch below.
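As a hypothetical example (the key names follow the enrich_metadata options listed above; check the sample file in the repository root for the exact supported format), a config that prefixes matching entry IDs could be created like this:
# Create ingest_cfg.yaml in the directory you run the connector from.
# 'mycompany_' and the pattern are illustrative values only.
cat > ingest_cfg.yaml <<EOF
enrich_metadata:
  entry_prefix: mycompany_
  entry_id_pattern_for_prefix: '^warehouse.*'
EOF
With this file in place, scraped entries whose IDs match the pattern would be ingested with the mycompany_ prefix, subject to the entryId length and character rules above.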
4. Run entry point
4.1. Run Python entry point
- Virtualenv
google-datacatalog-sqlserver-connector \
--datacatalog-project-id=$SQLSERVER2DC_DATACATALOG_PROJECT_ID \
--datacatalog-location-id=$SQLSERVER2DC_DATACATALOG_LOCATION_ID \
--sqlserver-host=$SQLSERVER2DC_SQLSERVER_SERVER \
--sqlserver-user=$SQLSERVER2DC_SQLSERVER_USERNAME \
--sqlserver-pass=$SQLSERVER2DC_SQLSERVER_PASSWORD \
--sqlserver-database=$SQLSERVER2DC_SQLSERVER_DATABASE \
--raw-metadata-csv=$SQLSERVER2DC_RAW_METADATA_CSV
4.2. Run the Python entry point with a user-defined entry resource URL prefix
This option is useful when the connector cannot accurately determine the database hostname. For example, when running behind proxies, load balancers, or database read replicas, you can specify the prefix of your master instance so that the resource URL points to the exact database where the data is stored.
- Virtualenv
google-datacatalog-sqlserver-connector \
--datacatalog-project-id=$SQLSERVER2DC_DATACATALOG_PROJECT_ID \
--datacatalog-location-id=$SQLSERVER2DC_DATACATALOG_LOCATION_ID \
--datacatalog-entry-resource-url-prefix project/database-instance \
--sqlserver-host=$SQLSERVER2DC_SQLSERVER_SERVER \
--sqlserver-user=$SQLSERVER2DC_SQLSERVER_USERNAME \
--sqlserver-pass=$SQLSERVER2DC_SQLSERVER_PASSWORD \
--sqlserver-database=$SQLSERVER2DC_SQLSERVER_DATABASE \
--raw-metadata-csv=$SQLSERVER2DC_RAW_METADATA_CSV
4.3. Run Docker entry point
docker build -t sqlserver2datacatalog .
docker run --rm --tty -v YOUR-CREDENTIALS_FILES_FOLDER:/data sqlserver2datacatalog \
--datacatalog-project-id=$SQLSERVER2DC_DATACATALOG_PROJECT_ID \
--datacatalog-location-id=$SQLSERVER2DC_DATACATALOG_LOCATION_ID \
--sqlserver-host=$SQLSERVER2DC_SQLSERVER_SERVER \
--sqlserver-user=$SQLSERVER2DC_SQLSERVER_USERNAME \
--sqlserver-pass=$SQLSERVER2DC_SQLSERVER_PASSWORD \
--sqlserver-database=$SQLSERVER2DC_SQLSERVER_DATABASE \
--raw-metadata-csv=$SQLSERVER2DC_RAW_METADATA_CSV
5. Scripts inside tools
5.1. Run clean up
# Comma-separated list of project IDs. Can be a single value without a comma.
export SQLSERVER2DC_DATACATALOG_PROJECT_IDS=my-project-1,my-project-2
# Run the clean up
python tools/cleanup_datacatalog.py --datacatalog-project-ids=$SQLSERVER2DC_DATACATALOG_PROJECT_IDS
6. Developer environment
6.1. Install and run Yapf formatter
pip install --upgrade yapf
# Auto update files
yapf --in-place --recursive src tests
# Show diff
yapf --diff --recursive src tests
# Set up pre-commit hook
# From the root of your git project.
curl -o pre-commit.sh https://raw.githubusercontent.com/google/yapf/master/plugins/pre-commit.sh
chmod a+x pre-commit.sh
mv pre-commit.sh .git/hooks/pre-commit
6.2. Install and run Flake8 linter
pip install --upgrade flake8
flake8 src tests
6.3. Run Tests
python setup.py test
7. Metrics
8. Troubleshooting
If a connector execution hits the Data Catalog quota limit, an error will be raised and logged with the following detail, depending on the operation performed (READ/WRITE/SEARCH):
status = StatusCode.RESOURCE_EXHAUSTED
details = "Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute' of service 'datacatalog.googleapis.com' for consumer 'project_number:1111111111111'."
debug_error_string =
"{"created":"@1587396969.506556000", "description":"Error received from peer ipv4:172.217.29.42:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute' of service 'datacatalog.googleapis.com' for consumer 'project_number:1111111111111'.","grpc_status":8}"
For more information about Data Catalog quotas, see the Data Catalog quota docs.