google-datacatalog-hive-connector
Library for ingesting Hive metadata into Google Cloud Data Catalog. You can connect directly to your Hive Metastore or consume message events using Cloud Run.
This connector is prepared to work with Hive Metastore version 2.3.0, backed by a PostgreSQL database.
Disclaimer: This is not an officially supported Google product.
Table of Contents
- 1. Installation
- 2. Environment setup
- 3. Run entry point
- 4. Deploy Message Event Consumer on Cloud Run (Optional)
- 5. Tools (Optional)
- 6. Developer environment
- 7. Metrics
- 8. Connector Architecture
- 9. Troubleshooting
1. Installation
Install this library in a virtualenv using pip. virtualenv is a tool for creating isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.
With virtualenv, you can install this library without system install permissions and without clashing with the installed system dependencies. Make sure you use Python 3.7+.
1.1. Mac/Linux
pip3 install virtualenv
virtualenv --python python3.7 <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install google-datacatalog-hive-connector
1.2. Windows
pip3 install virtualenv
virtualenv --python python3.7 <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install google-datacatalog-hive-connector
1.3. Install from source
1.3.1. Get the code
git clone https://github.com/GoogleCloudPlatform/datacatalog-connectors-hive/
cd datacatalog-connectors-hive/google-datacatalog-hive-connector
1.3.2. Create and activate a virtualenv
pip3 install virtualenv
virtualenv --python python3.7 <your-env>
source <your-env>/bin/activate
1.3.3. Install the library
pip install .
2. Environment setup
2.1. Auth credentials
2.1.1. Create a service account and grant it the roles below
- Data Catalog Admin
2.1.2. Download a JSON key and save it as
<YOUR-CREDENTIALS_FILES_FOLDER>/hive2dc-datacatalog-credentials.json
Please note that this folder and file will be required in the next steps.
2.2. Set environment variables to connect to your Hive Metastore
Replace the values below according to your environment:
export GOOGLE_APPLICATION_CREDENTIALS=data_catalog_credentials_file
export HIVE2DC_DATACATALOG_PROJECT_ID=google_cloud_project_id
export HIVE2DC_DATACATALOG_LOCATION_ID=us-google_cloud_location_id
export HIVE2DC_HIVE_METASTORE_DB_HOST=hive_metastore_db_server
export HIVE2DC_HIVE_METASTORE_DB_USER=hive_metastore_db_user
export HIVE2DC_HIVE_METASTORE_DB_PASS=hive_metastore_db_pass
export HIVE2DC_HIVE_METASTORE_DB_NAME=hive_metastore_db_name
export HIVE2DC_HIVE_METASTORE_DB_TYPE=hive_metastore_db_type
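The variables above map one-to-one onto the connector's CLI flags. A small stdlib-only helper can assemble the argument list from the environment and fail fast when something is unset (the function name and mapping dict are illustrative, not part of the connector):

```python
import os

# Each HIVE2DC_* environment variable and the CLI flag it feeds.
ENV_TO_FLAG = {
    'HIVE2DC_DATACATALOG_PROJECT_ID': '--datacatalog-project-id',
    'HIVE2DC_DATACATALOG_LOCATION_ID': '--datacatalog-location-id',
    'HIVE2DC_HIVE_METASTORE_DB_HOST': '--hive-metastore-db-host',
    'HIVE2DC_HIVE_METASTORE_DB_USER': '--hive-metastore-db-user',
    'HIVE2DC_HIVE_METASTORE_DB_PASS': '--hive-metastore-db-pass',
    'HIVE2DC_HIVE_METASTORE_DB_NAME': '--hive-metastore-db-name',
    'HIVE2DC_HIVE_METASTORE_DB_TYPE': '--hive-metastore-db-type',
}


def build_cli_args(env=None):
    """Return the connector's CLI argument list built from the environment.

    Raises ValueError listing every missing variable, so a misconfigured
    environment fails before the connector starts.
    """
    env = os.environ if env is None else env
    missing = [var for var in ENV_TO_FLAG if not env.get(var)]
    if missing:
        raise ValueError('Missing environment variables: ' + ', '.join(missing))
    return ['%s=%s' % (flag, env[var]) for var, flag in ENV_TO_FLAG.items()]
```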
3. Run entry point
3.1. Run Python entry point
- Virtualenv
google-datacatalog-hive-connector \
--datacatalog-project-id=$HIVE2DC_DATACATALOG_PROJECT_ID \
--datacatalog-location-id=$HIVE2DC_DATACATALOG_LOCATION_ID \
--hive-metastore-db-host=$HIVE2DC_HIVE_METASTORE_DB_HOST \
--hive-metastore-db-user=$HIVE2DC_HIVE_METASTORE_DB_USER \
--hive-metastore-db-pass=$HIVE2DC_HIVE_METASTORE_DB_PASS \
--hive-metastore-db-name=$HIVE2DC_HIVE_METASTORE_DB_NAME \
--hive-metastore-db-type=$HIVE2DC_HIVE_METASTORE_DB_TYPE
3.2. Run Docker entry point
If your Hive Metastore DB is running on localhost, pass --network="host" to docker run.
docker build -t hive2datacatalog .
docker run --network="host" --rm --tty -v data:/data hive2datacatalog \
--datacatalog-project-id=$HIVE2DC_DATACATALOG_PROJECT_ID \
--datacatalog-location-id=$HIVE2DC_DATACATALOG_LOCATION_ID \
--hive-metastore-db-host=$HIVE2DC_HIVE_METASTORE_DB_HOST \
--hive-metastore-db-user=$HIVE2DC_HIVE_METASTORE_DB_USER \
--hive-metastore-db-pass=$HIVE2DC_HIVE_METASTORE_DB_PASS \
--hive-metastore-db-name=$HIVE2DC_HIVE_METASTORE_DB_NAME \
--hive-metastore-db-type=$HIVE2DC_HIVE_METASTORE_DB_TYPE
4. Deploy Message Event Consumer on Cloud Run (Optional)
4.1. Set environment variables to deploy to Cloud Run
Replace the values below according to your environment:
export GOOGLE_APPLICATION_CREDENTIALS=data_catalog_credentials_file
export HIVE2DC_DATACATALOG_PROJECT_ID=google_cloud_project_id
export HIVE2DC_DATACATALOG_LOCATION_ID=us-google_cloud_location_id
4.2. Execute the deploy script
source deploy.sh
If the deploy succeeds, you will be presented with the Cloud Run endpoint, for example: https://hive-sync-example-uc.a.run.app
Save the endpoint; it will be needed in the next step.
4.3. Create your Pub/Sub topic and subscription
4.3.1 Set additional environment variables
Replace with your Cloud Run endpoint:
export HIVE2DC_DATACATALOG_TOPIC_ID=google_cloud_topic_id
export HIVE2DC_DATACATALOG_APP_ENDPOINT=https://hive-sync-example-uc.a.run.app
4.3.2 Execute pubsub config script
source tools/create_pub_sub_run_invoker.sh
4.3.3 Send a message to your Pub/Sub topic to test
You can find valid message event examples in tools/resources/*.json. For example, to publish one of them (replace <your-sample-event> with one of those files):
gcloud pubsub topics publish "$HIVE2DC_DATACATALOG_TOPIC_ID" --message "$(cat tools/resources/<your-sample-event>.json)"
5. Tools (Optional)
5.1. Clean up all entries on Data Catalog from the hive entry group
Run: python tools/cleanup_datacatalog.py
5.2. Sample of Hive2Datacatalog library usage
Run: python tools/hive2datacatalog_client_sample.py
6. Developer environment
6.1. Install and run Yapf formatter
pip install --upgrade yapf
# Auto update files
yapf --in-place --recursive src tests
# Show diff
yapf --diff --recursive src tests
# Set up pre-commit hook
# From the root of your git project.
curl -o pre-commit.sh https://raw.githubusercontent.com/google/yapf/master/plugins/pre-commit.sh
chmod a+x pre-commit.sh
mv pre-commit.sh .git/hooks/pre-commit
6.2. Install and run Flake8 linter
pip install --upgrade flake8
flake8 src tests
6.3. Run Tests
python setup.py test
7. Metrics
8. Connector Architecture
9. Troubleshooting
If a connector execution hits a Data Catalog quota limit, an error will be raised and logged with the following details, depending on the operation performed (READ/WRITE/SEARCH):
status = StatusCode.RESOURCE_EXHAUSTED
details = "Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute' of service 'datacatalog.googleapis.com' for consumer 'project_number:1111111111111'."
debug_error_string =
"{"created":"@1587396969.506556000", "description":"Error received from peer ipv4:172.217.29.42:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute' of service 'datacatalog.googleapis.com' for consumer 'project_number:1111111111111'.","grpc_status":8}"
For more info about Data Catalog quota, see the Data Catalog quota docs.
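When the RESOURCE_EXHAUSTED error above shows up, a common client-side remedy is to retry the operation with exponential backoff so requests spread back under the per-minute quota. A minimal, library-agnostic sketch (the function names and the `is_quota_error` predicate are illustrative; in practice you would match on the gRPC RESOURCE_EXHAUSTED status code):

```python
import random
import time


def retry_with_backoff(operation, is_quota_error, max_attempts=5, base_delay=1.0):
    """Run `operation`, retrying quota errors with exponential backoff.

    `is_quota_error` decides whether an exception is retryable; anything
    else, or the final failed attempt, is re-raised unchanged.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception as e:
            if not is_quota_error(e) or attempt == max_attempts - 1:
                raise
            # 1x, 2x, 4x, ... the base delay, plus jitter so concurrent
            # callers do not retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
```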
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file google-datacatalog-hive-connector-0.5.0.tar.gz.
File metadata
- Download URL: google-datacatalog-hive-connector-0.5.0.tar.gz
- Upload date:
- Size: 15.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.24.0 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.46.1 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | f0e46c70cc131c3d8a75df10e2ee4805be106035e65bef8967eb29b10891bf39
MD5 | bfa46982829a956f400a95e267f097ae
BLAKE2b-256 | 666b135a86391e2bf5185ceb0cc7de7c6b5c53af4d67c350b55264d839bacc36
File details
Details for the file google_datacatalog_hive_connector-0.5.0-py2.py3-none-any.whl.
File metadata
- Download URL: google_datacatalog_hive_connector-0.5.0-py2.py3-none-any.whl
- Upload date:
- Size: 23.0 kB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.24.0 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.46.1 CPython/3.6.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | a38de4587a0e5ce7ec06a3e28f751a6f3c8b61db7a1f7dfc0ff37773a343e6d0
MD5 | a8f68737a57ac65a85a2f3aa549f03ca
BLAKE2b-256 | f262224df26837a9481d98d920ddac3669e05a6fe96fe88e63817ccdea52a90a