SQLAlchemy dialect for BigQuery

Quick Start

To use this library, you first need to go through the following steps:

  1. Select or create a Cloud Platform project.

  2. [Optional] Enable billing for your project.

  3. Enable the BigQuery Storage API.

  4. Set up Authentication.

Installation

Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.

With virtualenv, it’s possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.

Supported Python Versions

Python >= 3.7

Unsupported Python Versions

Python <= 3.6.

Mac/Linux

pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install sqlalchemy-bigquery

Windows

pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install sqlalchemy-bigquery

Installations when processing large datasets

When handling large datasets, you may see speed increases by also installing the bqstorage dependencies. Follow the instructions above for creating a virtual environment, then install sqlalchemy-bigquery with the bqstorage extra:

source <your-env>/bin/activate
<your-env>/bin/pip install sqlalchemy-bigquery[bqstorage]

Usage

SQLAlchemy

from sqlalchemy import MetaData, Table, create_engine, func, select

engine = create_engine('bigquery://project')
table = Table('dataset.table', MetaData(), autoload_with=engine)
with engine.connect() as connection:
    print(connection.execute(select(func.count()).select_from(table)).scalar())

Project

The project in bigquery://project is used to instantiate the BigQuery client with that project ID. To infer the project from the environment, use bigquery:// without a project.
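For example (a minimal sketch; my-project is a placeholder project ID):

from sqlalchemy import create_engine

# Explicit project ID
engine = create_engine('bigquery://my-project')

# Project inferred from the environment
# (e.g. from application default credentials)
engine = create_engine('bigquery://')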

Authentication

Follow the Google Cloud library guide for authentication.

Alternatively, you can choose either of the following approaches:

  • provide the path to a service account JSON file in create_engine() using the credentials_path parameter:

# provide the path to a service account JSON file
engine = create_engine('bigquery://', credentials_path='/path/to/keyfile.json')

  • pass the credentials in create_engine() as a Python dictionary using the credentials_info parameter:

# provide credentials as a Python dictionary
credentials_info = {
    "type": "service_account",
    "project_id": "your-service-account-project-id"
}
engine = create_engine('bigquery://', credentials_info=credentials_info)

Location

To specify the location of your datasets, pass location to create_engine():

engine = create_engine('bigquery://project', location="asia-northeast1")

Table names

To query tables from non-default projects or datasets, use the following format for the SQLAlchemy schema name: [project.]dataset, e.g.:

# If neither dataset nor project are the default
sample_table_1 = Table('natality', schema='bigquery-public-data.samples')
# If just dataset is not the default
sample_table_2 = Table('natality', schema='bigquery-public-data')
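As a sketch, reflecting such a table and counting its rows (your-project is a placeholder billing project):

from sqlalchemy import MetaData, Table, create_engine, func, select

engine = create_engine('bigquery://your-project')
table = Table('natality', MetaData(), autoload_with=engine,
              schema='bigquery-public-data.samples')
with engine.connect() as connection:
    print(connection.execute(select(func.count()).select_from(table)).scalar())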

Batch size

By default, arraysize is set to 5000; it controls the batch size used when fetching results. To change it, pass arraysize to create_engine():

engine = create_engine('bigquery://project', arraysize=1000)

Page size for dataset.list_tables

By default, list_tables_page_size is set to 1000; it sets max_results for the dataset.list_tables operation. To change it, pass list_tables_page_size to create_engine():

engine = create_engine('bigquery://project', list_tables_page_size=100)

Adding a Default Dataset

If you want to have the Client use a default dataset, specify it as the “database” portion of the connection string.

engine = create_engine('bigquery://project/dataset')

When using a default dataset, don’t include the dataset name in the table name, e.g.:

table = Table('table_name', MetaData())

Note that specifying a default dataset doesn’t restrict execution of queries to that particular dataset when using raw queries, e.g.:

from sqlalchemy import create_engine, text

# Set default dataset to dataset_a
engine = create_engine('bigquery://project/dataset_a')

# This will still execute and return rows from dataset_b
with engine.connect() as connection:
    connection.execute(text('SELECT * FROM dataset_b.table')).fetchall()

Connection String Parameters

There are many situations where you can’t call create_engine directly, such as when using tools like Flask-SQLAlchemy. For these situations, or when you want the Client to have a default_query_job_config, you can pass many arguments in the query portion of the connection string.

The credentials_path, credentials_info, credentials_base64, location, arraysize, and list_tables_page_size parameters are used by this library; the rest are used to create a QueryJobConfig.

Note that if you want to use query strings, it will be more reliable if you use three slashes, so 'bigquery:///?a=b' will work reliably, but 'bigquery://?a=b' might be interpreted as having a “database” of ?a=b, depending on the system being used to parse the connection string.
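For instance, a hypothetical Flask-SQLAlchemy configuration can carry everything in the URI; SQLALCHEMY_DATABASE_URI is Flask-SQLAlchemy’s standard setting, and the paths and values are placeholders:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# Three slashes, so the query string is parsed reliably
app.config['SQLALCHEMY_DATABASE_URI'] = (
    'bigquery:///?credentials_path=/some/path/to.json'
    '&location=some-location&arraysize=1000'
)
db = SQLAlchemy(app)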

Here are examples of all the supported arguments. Any arguments not present are either for legacy SQL (which this library doesn’t support) or are too complex and not implemented.

engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_path=/some/path/to.json' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'list_tables_page_size=100' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)

In cases where you wish to include the full credentials in the connection URI, you can base64-encode the credentials JSON file and supply the encoded string to the credentials_base64 parameter.

engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_base64=eyJrZXkiOiJ2YWx1ZSJ9Cg==' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'list_tables_page_size=100' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)

To create the base64-encoded string, you can use the command-line tool base64, openssl base64, or python -m base64.

Alternatively, you can use an online generator like www.base64encode.org to paste your credentials JSON file to be encoded.
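A small Python sketch of the same encoding step (the key-file path and the project/dataset names are placeholders):

import base64

from sqlalchemy import create_engine

# Read and encode the service-account key file
with open('/path/to/keyfile.json', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('ascii')

engine = create_engine(
    f'bigquery://some-project/some-dataset?credentials_base64={encoded}'
)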

Supplying Your Own BigQuery Client

The above connection string parameters allow you to influence how the BigQuery client used to execute your queries will be instantiated. If you need additional control, you can supply a BigQuery client of your own:

from google.cloud import bigquery

custom_bq_client = bigquery.Client(...)

engine = create_engine(
    'bigquery://some-project/some-dataset?user_supplied_client=True',
    connect_args={'client': custom_bq_client},
)

Creating tables

To add metadata to a table:

table = Table('mytable', ..., bigquery_description='my table description', bigquery_friendly_name='my table friendly name')

To add metadata to a column:

Column('mycolumn', doc='my column description')
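Putting both together, a minimal sketch of a table definition with BigQuery metadata (the table, column, and connection names are placeholders):

from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

engine = create_engine('bigquery://project/dataset')
metadata = MetaData()

mytable = Table(
    'mytable',
    metadata,
    Column('id', Integer),
    Column('name', String, doc='my column description'),
    bigquery_description='my table description',
    bigquery_friendly_name='my table friendly name',
)

# Issues CREATE TABLE with the description and friendly name applied
metadata.create_all(engine)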

Threading and Multiprocessing

Because this client uses the grpc library, it’s safe to share instances across threads.

In multiprocessing scenarios, the best practice is to create client instances after the invocation of os.fork by multiprocessing.pool.Pool or multiprocessing.Process.
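A sketch of that pattern (project is a placeholder; each worker builds its own engine, and therefore its own client, after the fork):

from multiprocessing import Pool

from sqlalchemy import create_engine, text

def run_query(_):
    # Create the engine (and its underlying client) inside the worker,
    # i.e. after the fork, per the recommendation above
    engine = create_engine('bigquery://project')
    with engine.connect() as connection:
        return connection.execute(text('SELECT 1')).scalar()

if __name__ == '__main__':
    with Pool(processes=2) as pool:
        print(pool.map(run_query, range(2)))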
