
Project description

google-datacatalog-postgresql-connector

Library for ingesting PostgreSQL metadata into Google Cloud Data Catalog.


Disclaimer: This is not an officially supported Google product.

Table of Contents

1. Installation
2. Environment setup
3. Adapt user configurations
4. Run entry point
5. Scripts inside tools
6. Developer environment
7. Metrics
8. Troubleshooting

1. Installation

Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.

With virtualenv, it's possible to install this library without needing system install permissions, and without clashing with the installed system dependencies. Make sure you use Python 3.6+.

1.1. Mac/Linux

pip3 install virtualenv
virtualenv --python python3.6 <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-datacatalog-postgresql-connector

1.2. Windows

pip3 install virtualenv
virtualenv --python python3.6 <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-datacatalog-postgresql-connector

1.3. Install from source

1.3.1. Get the code

git clone https://github.com/GoogleCloudPlatform/datacatalog-connectors-rdbms/
cd datacatalog-connectors-rdbms/google-datacatalog-postgresql-connector

1.3.2. Create and activate a virtualenv

pip3 install virtualenv
virtualenv --python python3.6 <your-env>
source <your-env>/bin/activate

1.3.3. Install the library

pip install .
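
Regardless of the installation method, a quick sanity check (with the virtualenv still active) is to print the CLI help. The connector is a command-line tool, so the call below should list the available flags if the installation succeeded:

google-datacatalog-postgresql-connector --help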

2. Environment setup

2.1. Auth credentials

2.1.1. Create a service account and grant it the roles below

  • Data Catalog Admin

2.1.2. Download a JSON key and save it as

  • <YOUR-CREDENTIALS_FILES_FOLDER>/postgresql2dc-credentials.json

Please note that this folder and file will be required in the next steps.
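
If you prefer the command line over the Cloud Console, the same setup can be scripted with gcloud. This is a minimal sketch, not the project's official procedure: the service account name postgresql2dc is just an example, and YOUR-PROJECT-ID and <YOUR-CREDENTIALS_FILES_FOLDER> must be replaced with your own values.

# Create an example service account (the name postgresql2dc is hypothetical)
gcloud iam service-accounts create postgresql2dc --project=YOUR-PROJECT-ID

# Grant it the Data Catalog Admin role
gcloud projects add-iam-policy-binding YOUR-PROJECT-ID \
  --member="serviceAccount:postgresql2dc@YOUR-PROJECT-ID.iam.gserviceaccount.com" \
  --role="roles/datacatalog.admin"

# Download a JSON key to the location expected by the next steps
gcloud iam service-accounts keys create <YOUR-CREDENTIALS_FILES_FOLDER>/postgresql2dc-credentials.json \
  --iam-account="postgresql2dc@YOUR-PROJECT-ID.iam.gserviceaccount.com"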

2.2. Set environment variables

Replace the values below according to your environment:

export GOOGLE_APPLICATION_CREDENTIALS=data_catalog_credentials_file

export POSTGRESQL2DC_DATACATALOG_PROJECT_ID=google_cloud_project_id
export POSTGRESQL2DC_DATACATALOG_LOCATION_ID=google_cloud_location_id
export POSTGRESQL2DC_POSTGRESQL_SERVER=postgresql_server
export POSTGRESQL2DC_POSTGRESQL_USERNAME=postgresql_username
export POSTGRESQL2DC_POSTGRESQL_PASSWORD=postgresql_password
export POSTGRESQL2DC_POSTGRESQL_DATABASE=postgresql_database
export POSTGRESQL2DC_RAW_METADATA_CSV=postgresql_raw_csv  # if supplied, the PostgreSQL server credentials are ignored

3. Adapt user configurations

Along with the default metadata, the connector can also ingest optional metadata, such as the number of rows in each table. The table below shows which metadata is scraped by default and which is configurable.

| Metadata | Description | Scraped by default | Config option |
| --- | --- | --- | --- |
| schema_name | Name of a schema | Y | --- |
| table_name | Name of a table | Y | --- |
| table_type | Type of a table (BASE, VIEW, etc.) | Y | --- |
| table_size_mb | Size of a table, in MB | Y | --- |
| column_name | Name of a column | Y | --- |
| column_type | Type of a column (ARRAY, USER-DEFINED, etc.) | Y | --- |
| column_default_value | Default value of a column | Y | --- |
| column_nullable | Whether a column is nullable | Y | --- |
| column_char_length | Char length of values in a column | Y | --- |
| column_numeric_precision | Numeric precision of values in a column | Y | --- |
| column_enum_values | List of enum values for a column | Y | --- |
| ANALYZE statement | Statement to refresh metadata information | N | refresh_metadata_tables |
| table_rows | Number of rows in a table | N | sync_row_counts |
| base_metadata_query_filename | Overrides the base metadata query file name | N/A | base_metadata_query_filename |

The sample configuration file ingest_cfg.yaml in the repository root shows what kind of configuration is expected. If you want to run the optional queries, add ingest_cfg.yaml to your working directory and adapt it to your needs.
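
As a rough illustration, a configuration enabling the two optional scrapes from the table above could be created as follows. This is a sketch only: the key names are taken from the config options listed above, and the authoritative format is the ingest_cfg.yaml shipped in the repository root.

# Hypothetical example; confirm the keys against the repository's ingest_cfg.yaml
cat > ingest_cfg.yaml <<EOF
refresh_metadata_tables: true
sync_row_counts: true
EOF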

4. Run entry point

4.1. Run Python entry point

  • Virtualenv
google-datacatalog-postgresql-connector \
--datacatalog-project-id=$POSTGRESQL2DC_DATACATALOG_PROJECT_ID \
--datacatalog-location-id=$POSTGRESQL2DC_DATACATALOG_LOCATION_ID \
--postgresql-host=$POSTGRESQL2DC_POSTGRESQL_SERVER \
--postgresql-user=$POSTGRESQL2DC_POSTGRESQL_USERNAME \
--postgresql-pass=$POSTGRESQL2DC_POSTGRESQL_PASSWORD \
--postgresql-database=$POSTGRESQL2DC_POSTGRESQL_DATABASE  \
--raw-metadata-csv=$POSTGRESQL2DC_RAW_METADATA_CSV

4.2. Run the Python entry point with a user-defined entry resource URL prefix

This option is useful when the connector cannot accurately determine the database hostname. For example, when running behind proxies, load balancers, or database read replicas, you can specify the prefix of your master instance so the resource URL points to the exact database where the data is stored.

  • Virtualenv
google-datacatalog-postgresql-connector \
--datacatalog-project-id=$POSTGRESQL2DC_DATACATALOG_PROJECT_ID \
--datacatalog-location-id=$POSTGRESQL2DC_DATACATALOG_LOCATION_ID \
--datacatalog-entry-resource-url-prefix project/database-instance \
--postgresql-host=$POSTGRESQL2DC_POSTGRESQL_SERVER \
--postgresql-user=$POSTGRESQL2DC_POSTGRESQL_USERNAME \
--postgresql-pass=$POSTGRESQL2DC_POSTGRESQL_PASSWORD \
--postgresql-database=$POSTGRESQL2DC_POSTGRESQL_DATABASE  \
--raw-metadata-csv=$POSTGRESQL2DC_RAW_METADATA_CSV

4.3. Run Docker entry point

docker build -t postgresql2datacatalog .
docker run --rm --tty -v YOUR-CREDENTIALS_FILES_FOLDER:/data postgresql2datacatalog \
--datacatalog-project-id=$POSTGRESQL2DC_DATACATALOG_PROJECT_ID \
--datacatalog-location-id=$POSTGRESQL2DC_DATACATALOG_LOCATION_ID \
--postgresql-host=$POSTGRESQL2DC_POSTGRESQL_SERVER \
--postgresql-user=$POSTGRESQL2DC_POSTGRESQL_USERNAME \
--postgresql-pass=$POSTGRESQL2DC_POSTGRESQL_PASSWORD \
--postgresql-database=$POSTGRESQL2DC_POSTGRESQL_DATABASE  \
--raw-metadata-csv=$POSTGRESQL2DC_RAW_METADATA_CSV       
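
Note that the mounted /data volume is where the service account key from section 2.1 is expected to live. If your image does not already point GOOGLE_APPLICATION_CREDENTIALS at that file, one option (an assumption, not part of the official instructions) is to pass the variable explicitly; the path below assumes the file name used in section 2.1:

docker run --rm --tty -v YOUR-CREDENTIALS_FILES_FOLDER:/data \
  -e GOOGLE_APPLICATION_CREDENTIALS=/data/postgresql2dc-credentials.json \
  postgresql2datacatalog \
  --datacatalog-project-id=$POSTGRESQL2DC_DATACATALOG_PROJECT_ID \
  --datacatalog-location-id=$POSTGRESQL2DC_DATACATALOG_LOCATION_ID \
  --postgresql-host=$POSTGRESQL2DC_POSTGRESQL_SERVER \
  --postgresql-user=$POSTGRESQL2DC_POSTGRESQL_USERNAME \
  --postgresql-pass=$POSTGRESQL2DC_POSTGRESQL_PASSWORD \
  --postgresql-database=$POSTGRESQL2DC_POSTGRESQL_DATABASE \
  --raw-metadata-csv=$POSTGRESQL2DC_RAW_METADATA_CSV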

5. Scripts inside tools

5.1. Run clean up

# Comma-separated list of projects; a single value without a comma also works
export POSTGRESQL2DC_DATACATALOG_PROJECT_IDS=my-project-1,my-project-2
# Run the clean up
python tools/cleanup_datacatalog.py --datacatalog-project-ids=$POSTGRESQL2DC_DATACATALOG_PROJECT_IDS 

5.2. Extract CSV

# Run inside your PostgreSQL database instance

COPY (
    SELECT t.table_schema AS schema_name,
           t.table_name,
           t.table_type,
           c.column_name,
           c.column_default AS column_default_value,
           c.is_nullable AS column_nullable,
           c.data_type AS column_type,
           c.character_maximum_length AS column_char_length,
           c.numeric_precision AS column_numeric_precision
      FROM information_schema.tables t
      -- join on the schema as well, to avoid duplicates when the same table name exists in several schemas
      JOIN information_schema.columns c
        ON c.table_name = t.table_name AND c.table_schema = t.table_schema
     WHERE t.table_schema NOT IN ('pg_catalog', 'information_schema', 'pg_toast', 'gp_toolkit')
) TO '/home/postgre/postgresql_full_dump.csv' CSV HEADER;
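
The server-side COPY above requires superuser privileges (or membership in pg_write_server_files) and writes the file on the database host. If you only have client access, a client-side alternative is psql's \copy, which writes the CSV on the machine running psql. This is a sketch using the environment variables from section 2.2 (psql will still prompt for the password):

psql -h $POSTGRESQL2DC_POSTGRESQL_SERVER -U $POSTGRESQL2DC_POSTGRESQL_USERNAME -d $POSTGRESQL2DC_POSTGRESQL_DATABASE \
  -c "\copy (SELECT t.table_schema AS schema_name, t.table_name, t.table_type, c.column_name, c.column_default AS column_default_value, c.is_nullable AS column_nullable, c.data_type AS column_type, c.character_maximum_length AS column_char_length, c.numeric_precision AS column_numeric_precision FROM information_schema.tables t JOIN information_schema.columns c ON c.table_name = t.table_name AND c.table_schema = t.table_schema WHERE t.table_schema NOT IN ('pg_catalog', 'information_schema', 'pg_toast', 'gp_toolkit')) TO 'postgresql_full_dump.csv' CSV HEADER"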

6. Developer environment

6.1. Install and run Yapf formatter

pip install --upgrade yapf

# Auto update files
yapf --in-place --recursive src tests

# Show diff
yapf --diff --recursive src tests

# Set up pre-commit hook
# From the root of your git project.
curl -o pre-commit.sh https://raw.githubusercontent.com/google/yapf/master/plugins/pre-commit.sh
chmod a+x pre-commit.sh
mv pre-commit.sh .git/hooks/pre-commit

6.2. Install and run Flake8 linter

pip install --upgrade flake8
flake8 src tests

6.3. Run Tests

python setup.py test
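
python setup.py test still works but is deprecated in recent setuptools releases. An alternative, assuming the project's test dependencies are already installed in your virtualenv, is to invoke pytest directly:

pip install --upgrade pytest
pytest tests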

7. Metrics

Metrics README.md

8. Troubleshooting

If a connector execution hits a Data Catalog quota limit, an error is raised and logged with details like the following, depending on the operation performed (READ/WRITE/SEARCH):

status = StatusCode.RESOURCE_EXHAUSTED
details = "Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute' of service 'datacatalog.googleapis.com' for consumer 'project_number:1111111111111'."
debug_error_string = 
"{"created":"@1587396969.506556000", "description":"Error received from peer ipv4:172.217.29.42:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute' of service 'datacatalog.googleapis.com' for consumer 'project_number:1111111111111'.","grpc_status":8}"

For more information about Data Catalog quotas, see the Data Catalog quota docs.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

File details

Details for the file google-datacatalog-postgresql-connector-0.10.0.tar.gz.

File metadata

  • Download URL: google-datacatalog-postgresql-connector-0.10.0.tar.gz
  • Upload date:
  • Size: 13.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/3.7.3 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.8.8

File hashes

Hashes for google-datacatalog-postgresql-connector-0.10.0.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 264ff0b2e4213b7be34d5f98cd57b80e39a39f913b4f2b7cf685a04f5cc6231e |
| MD5 | 8d4df315751e4b26d850a7a21f656657 |
| BLAKE2b-256 | 72d5dfc92530323fb713450bed5dcc6de43f35e334d8b2098c9e65c1801ba029 |

See more details on using hashes here.

File details

Details for the file google_datacatalog_postgresql_connector-0.10.0-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for google_datacatalog_postgresql_connector-0.10.0-py2.py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 927b3a62433f7cbedd17b47430eec6f72bae0b4eb4984d9c9ad4433df038cc9f |
| MD5 | 0de5a90ad1507d2ed2ecd468e0b61a76 |
| BLAKE2b-256 | ae6a60da7191e0d823821ffd37d9d896a98acb10236022cbc96b8d0ae7b52141 |

See more details on using hashes here.
