
PyGBQ

Easily integrate data in BigQuery

Example

from pygbq import Client
import requests

client = Client()
# Fetch an API token stored in Secret Manager
token = client.get_secret('secret_name')
headers = {'Authorization': f'Bearer {token}'}
url = ...
data = requests.get(url, headers=headers).json()
# Upsert the rows into mydataset.mytable, matching on the id column
response = client.update(data=data, table_id='mydataset.mytable', how=['id'])

This snippet gets some data from a URL and [merges](https://cloud.google.com/bigquery/docs/reference/standard-sql/dml-syntax#merge_statement) ([upserts](https://en.wikipedia.org/wiki/Merge_(SQL))) it into the table mytable in the dataset mydataset on the id column.

Install and set up

pip install pygbq

Set up authentication. By default PyGBQ uses Google application default credentials; alternatively, pass path_to_key to Client (see the Documentation section).

How it works

how=['column1', 'column2', ...]

PyGBQ generates one or more temporary tables that are merged into the target table. During the merge, all columns of the target table are updated. Here's how it works (a sketch of the equivalent flow follows the steps below):

  1. Split data into batches.
  2. For every batch, create a temporary table mydataset.mytable_tmp_SOMERANDOMPOSTFIX, insert the batch into it, and run:

MERGE myproject.mydataset.mytable T
USING myproject.mydataset.mytable_tmp_SOMERANDOMPOSTFIX S
ON T.column1 = S.column1 AND T.column2 = S.column2
WHEN NOT MATCHED THEN
    INSERT ROW
WHEN MATCHED THEN
    UPDATE SET
        column1 = S.column1,
        column2 = S.column2,
        column3 = S.column3,
        column4 = S.column4
        ...
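
This is not PyGBQ's actual implementation, but a minimal sketch of the same batch-and-merge flow written against the google-cloud-bigquery client directly; the temporary-table naming, schema autodetection, and cleanup are assumptions for illustration:

from google.cloud import bigquery
import uuid

bq = bigquery.Client()

def merge_batches(rows, target, keys, batch_size=4000):
    """Upsert rows into target by loading batches into temporary tables and merging."""
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # Random postfix, as in the steps above (naming is an assumption)
        tmp = f"{target}_tmp_{uuid.uuid4().hex[:8]}"
        # Load the batch into a temporary table; schema is autodetected here
        job_config = bigquery.LoadJobConfig(autodetect=True)
        bq.load_table_from_json(batch, tmp, job_config=job_config).result()
        on_clause = " AND ".join(f"T.{k} = S.{k}" for k in keys)
        set_clause = ", ".join(f"{c} = S.{c}" for c in batch[0])
        bq.query(
            f"MERGE `{target}` T USING `{tmp}` S ON {on_clause} "
            f"WHEN NOT MATCHED THEN INSERT ROW "
            f"WHEN MATCHED THEN UPDATE SET {set_clause}"
        ).result()
        bq.delete_table(tmp)

merge_batches(data, 'myproject.mydataset.mytable', keys=['id'])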

how='replace'

  1. Creates the table mydataset.mytable with a schema automatically generated by bigquery-schema-generator.
  2. Splits data into batches and inserts them into mydataset.mytable.

how='fail'

Identical to how='replace', except that it fails if mydataset.mytable already exists.

how='insert'

Splits data into batches and appends them to mydataset.mytable.
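
For example (minimal calls reusing the client and data from the Example above; table names are illustrative):

# Replace the table's contents; schema is inferred from the data
client.update(data=data, table_id='mydataset.mytable', how='replace')

# Like 'replace', but errors if the table already exists
client.update(data=data, table_id='mydataset.new_table', how='fail')

# Append the rows to an existing table
client.update(data=data, table_id='mydataset.mytable', how='insert')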

For more details, see the Documentation section.

Documentation

Here's the documentation; the signatures below show default parameter values.

Client

init

from pygbq import Client
client = Client(default_dataset=None, path_to_key=None)

Initializes a client. You can specify:

  • default_dataset - (str) default dataset that the client will be using to reference tables
  • path_to_key - (str) By default PyGBQ uses from google.auth import default to get credentials, but you can specify this parameter if you wish to use from google.auth import load_credentials_from_file instead.
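
For example (the key-file path and dataset name are illustrative):

from pygbq import Client

# Application default credentials
client = Client()

# Service-account key file plus a default dataset,
# so table_id can be just the table name
client = Client(default_dataset='mydataset', path_to_key='key.json')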

update

client.update(data, table_id, how, schema: Union[str, List[dict]] = None, expiration=1, max_insert_num_rows=4000)

Updates a table.

  • data - (List[dict]) the rows to insert or merge
  • table_id - (str) Table id, could have one of the following forms:
    • table_name if default_dataset is set
    • dataset_name.table_name
    • project_id.dataset_name.table_name
  • how - (str or List[str]) see the How it works section
  • expiration - (float) temporary tables' expiration time in hours
  • max_insert_num_rows - (int) maximum number of rows inserted per temporary table
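
For example (a minimal sketch; the rows and table name are illustrative):

rows = [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]
response = client.update(
    data=rows,
    table_id='mydataset.mytable',
    how=['id'],                # merge on the id column
    expiration=2,              # temporary tables expire after 2 hours
    max_insert_num_rows=1000,  # at most 1000 rows per temporary table
)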

query

client.query(query)

Executes a query in BigQuery.

  • query - (str) BigQuery query
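
For example (table name illustrative):

client.query('DELETE FROM mydataset.mytable WHERE name IS NULL')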

get_secret

client.get_secret(secret_id, version="latest")

Gets a secret stored in Secret Manager.

  • secret_id - (str) Secret name
  • version - (str) Secret version; defaults to "latest"
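
For example (secret name and version are illustrative):

# Pin a specific secret version instead of the default "latest"
token = client.get_secret('api_token', version='2')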

add_secret

client.add_secret(secret_id, data)

Adds a new secret version in Secret Manager.

  • secret_id - (str) Secret name
  • data - (str) Secret value
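
For example (secret name and value are illustrative):

client.add_secret(secret_id='api_token', data='new-token-value')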

read_jsonl

from pygbq import read_jsonl
read_jsonl(name: str = "data.jsonl")

Reads a newline-delimited JSON file.
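
For example (a minimal sketch; the file name is the default, the table name illustrative):

from pygbq import read_jsonl

data = read_jsonl('data.jsonl')  # one JSON object per line
client.update(data=data, table_id='mydataset.mytable', how='insert')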
