SQLAlchemy dialect for BigQuery

Project description

Quick Start

To use this library, you first need to go through the following steps (a short snippet to verify the setup follows the list):

  1. Select or create a Cloud Platform project.

  2. [Optional] Enable billing for your project.

  3. Enable the BigQuery Storage API.

  4. Setup Authentication.
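
Once these steps are done, you can verify the setup with a minimal sketch that uses google-auth directly (not part of this library). It assumes Application Default Credentials are configured, for example via GOOGLE_APPLICATION_CREDENTIALS or gcloud auth application-default login:

import google.auth

# Resolves Application Default Credentials; raises DefaultCredentialsError
# if authentication (step 4 above) is not set up.
credentials, project = google.auth.default()
print(project)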

Installation

Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.

With virtualenv, it’s possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.

Supported Python Versions

Python >= 3.8

Unsupported Python Versions

Python <= 3.7.

Mac/Linux

pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install sqlalchemy-bigquery

Windows

pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install sqlalchemy-bigquery

Installations when processing large datasets

When handling large datasets, you may see speed increases by also installing the bqstorage dependencies. See the instructions above about creating a virtual environment and then install sqlalchemy-bigquery using the bqstorage extras:

source <your-env>/bin/activate
<your-env>/bin/pip install sqlalchemy-bigquery[bqstorage]

Usage

SQLAlchemy

from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *

engine = create_engine('bigquery://project')
table = Table('dataset.table', MetaData(bind=engine), autoload=True)
print(select([func.count('*')], from_obj=table).scalar())
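
The example above uses SQLAlchemy 1.x idioms (MetaData(bind=...), autoload=True, select([...])). A hedged sketch of the same count query in SQLAlchemy 1.4+/2.0 style, assuming your installed SQLAlchemy supports it:

from sqlalchemy import MetaData, Table, create_engine, func, select

engine = create_engine('bigquery://project')
table = Table('dataset.table', MetaData(), autoload_with=engine)

with engine.connect() as conn:
    # COUNT(*) over the reflected table
    print(conn.execute(select(func.count()).select_from(table)).scalar())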

Project

The project in bigquery://project is used to instantiate the BigQuery client with that project ID. To infer the project from the environment, use bigquery:// without a project.

Authentication

Follow the Google Cloud library guide for authentication.

Alternatively, you can choose either of the following approaches:

  • provide the path to a service account JSON file in create_engine() using the credentials_path parameter:

# provide the path to a service account JSON file
engine = create_engine('bigquery://', credentials_path='/path/to/keyfile.json')
  • pass the credentials in create_engine() as a Python dictionary using the credentials_info parameter:

# provide credentials as a Python dictionary
credentials_info = {
    "type": "service_account",
    "project_id": "your-service-account-project-id"
}
engine = create_engine('bigquery://', credentials_info=credentials_info)

Location

To specify location of your datasets pass location to create_engine():

engine = create_engine('bigquery://project', location="asia-northeast1")

Table names

To query tables from non-default projects or datasets, use the following format for the SQLAlchemy schema name: [project.]dataset, e.g.:

# If neither dataset nor project are the default
sample_table_1 = Table('natality', schema='bigquery-public-data.samples')
# If just the dataset is not the default (schema is the dataset name)
sample_table_2 = Table('natality', schema='samples')
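
For a fuller picture, here is a hedged, self-contained sketch that reflects the public natality table; 'my-project' is a placeholder for your own billing project:

from sqlalchemy import MetaData, Table, create_engine

engine = create_engine('bigquery://my-project')  # placeholder project
metadata = MetaData()

# Reflect a table that lives in another project and dataset.
natality = Table('natality', metadata,
                 schema='bigquery-public-data.samples',
                 autoload_with=engine)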

Batch size

By default, arraysize is set to 5000. arraysize is used to set the batch size for fetching results. To change it, pass arraysize to create_engine():

engine = create_engine('bigquery://project', arraysize=1000)

Page size for dataset.list_tables

By default, list_tables_page_size is set to 1000. list_tables_page_size sets max_results for the dataset.list_tables operation. To change it, pass list_tables_page_size to create_engine():

engine = create_engine('bigquery://project', list_tables_page_size=100)

Adding a Default Dataset

If you want to have the Client use a default dataset, specify it as the “database” portion of the connection string.

engine = create_engine('bigquery://project/dataset')

When using a default dataset, don’t include the dataset name in the table name, e.g.:

table = Table('table_name')

Note that specifying a default dataset doesn’t restrict execution of queries to that particular dataset when using raw queries, e.g.:

# Set default dataset to dataset_a
engine = create_engine('bigquery://project/dataset_a')

# This will still execute and return rows from dataset_b
engine.execute('SELECT * FROM dataset_b.table').fetchall()

Connection String Parameters

There are many situations where you can’t call create_engine directly, such as when using tools like Flask-SQLAlchemy. For situations like these, or for situations where you want the Client to have a default_query_job_config, you can pass many arguments in the query string of the connection URL.

The credentials_path, credentials_info, credentials_base64, location, arraysize and list_tables_page_size parameters are used by this library; the rest are used to create a QueryJobConfig.

Note that if you want to use query strings, it will be more reliable if you use three slashes, so 'bigquery:///?a=b' will work reliably, but 'bigquery://?a=b' might be interpreted as having a “database” of ?a=b, depending on the system being used to parse the connection string.

Here are examples of all the supported arguments. Any not present are either for legacy SQL (which isn’t supported by this library) or are too complex and are not implemented.

engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_path=/some/path/to.json' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'list_tables_page_size=100' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)
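
As an illustration of the "can't call create_engine directly" case, here is a hedged Flask-SQLAlchemy sketch; the SQLALCHEMY_DATABASE_URI config key is standard Flask-SQLAlchemy, and all parameter values are placeholders:

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

# Three slashes, per the note above, so the query string parses reliably.
app.config['SQLALCHEMY_DATABASE_URI'] = (
    'bigquery:///?credentials_path=/some/path/to.json'
    '&location=some-location'
    '&maximum_bytes_billed=1000'
)

db = SQLAlchemy(app)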

In cases where you wish to include the full credentials in the connection URI, you can base64-encode the credentials JSON file and supply the encoded string to the credentials_base64 parameter.

engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_base64=eyJrZXkiOiJ2YWx1ZSJ9Cg==' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'list_tables_page_size=100' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)

To create the base64 encoded string you can use the command line tool base64, or openssl base64, or python -m base64.

Alternatively, you can paste your credentials JSON file into an online generator like https://www.base64encode.org to be encoded.
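
A minimal sketch of producing the encoded value with Python's standard library (the key-file path is a placeholder):

import base64

with open('/path/to/keyfile.json', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('ascii')

# Paste this value into the credentials_base64 query parameter.
print(encoded)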

Supplying Your Own BigQuery Client

The above connection string parameters allow you to influence how the BigQuery client used to execute your queries will be instantiated. If you need additional control, you can supply a BigQuery client of your own:

from google.cloud import bigquery

custom_bq_client = bigquery.Client(...)

engine = create_engine(
    'bigquery://some-project/some-dataset?user_supplied_client=True',
    connect_args={'client': custom_bq_client},
)
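
The bigquery.Client(...) above is elided. As one hedged example of what you might pass, the client constructor accepts project and location keyword arguments (the values below are placeholders):

from google.cloud import bigquery

custom_bq_client = bigquery.Client(
    project='some-project',
    location='some-location',
)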

Creating tables

To add metadata to a table:

table = Table('mytable', ...,
    bigquery_description='my table description',
    bigquery_friendly_name='my table friendly name',
    bigquery_default_rounding_mode="ROUND_HALF_EVEN",
    bigquery_expiration_timestamp=datetime.datetime.fromisoformat("2038-01-01T00:00:00+00:00"),
)

To add metadata to a column:

Column('mycolumn', doc='my column description')

To create a clustered table:

table = Table('mytable', ..., bigquery_clustering_fields=["a", "b", "c"])

To create a time-unit column-partitioned table:

from google.cloud import bigquery

table = Table('mytable', ...,
    bigquery_time_partitioning=bigquery.TimePartitioning(
        field="mytimestamp",
        type_="MONTH",
        expiration_ms=1000 * 60 * 60 * 24 * 30 * 6, # 6 months
    ),
    bigquery_require_partition_filter=True,
)

To create an ingestion-time partitioned table:

from google.cloud import bigquery

table = Table('mytable', ...,
    bigquery_time_partitioning=bigquery.TimePartitioning(),
    bigquery_require_partition_filter=True,
)

To create an integer-range partitioned table:

from google.cloud import bigquery

table = Table('mytable', ...,
    bigquery_range_partitioning=bigquery.RangePartitioning(
        field="zipcode",
        range_=bigquery.PartitionRange(start=0, end=100000, interval=10),
    ),
    bigquery_require_partition_filter=True,
)
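
Putting the pieces together, here is a hedged, self-contained sketch of one possible table definition; the engine URL and all names are placeholders, and only options shown above are used:

from google.cloud import bigquery
from sqlalchemy import Column, Integer, MetaData, Table, TIMESTAMP, create_engine

engine = create_engine('bigquery://my-project/my-dataset')  # placeholders
metadata = MetaData()

mytable = Table(
    'mytable', metadata,
    Column('zipcode', Integer, doc='5-digit ZIP code'),
    Column('mytimestamp', TIMESTAMP, doc='event time'),
    bigquery_description='my table description',
    bigquery_clustering_fields=['zipcode'],
    bigquery_time_partitioning=bigquery.TimePartitioning(field='mytimestamp'),
    bigquery_require_partition_filter=True,
)

metadata.create_all(engine)  # issues CREATE TABLE with the options above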

Threading and Multiprocessing

Because this client uses the grpc library, it’s safe to share instances across threads.

In multiprocessing scenarios, the best practice is to create client instances after the invocation of os.fork by multiprocessing.pool.Pool or multiprocessing.Process.
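
A hedged sketch of that guidance, creating the engine (and therefore the client) inside each worker process rather than in the parent:

import multiprocessing

from sqlalchemy import create_engine, text

def worker(i):
    # Created after the fork, so each process gets its own client.
    engine = create_engine('bigquery://project')
    with engine.connect() as conn:
        conn.execute(text('SELECT 1'))
    engine.dispose()

if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:
        pool.map(worker, range(2))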

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

sqlalchemy_bigquery-1.13.0.tar.gz (117.3 kB)

Uploaded: Source

Built Distribution

sqlalchemy_bigquery-1.13.0-py2.py3-none-any.whl (40.2 kB)

Uploaded: Python 2, Python 3

File details

Details for the file sqlalchemy_bigquery-1.13.0.tar.gz.

File metadata

  • Download URL: sqlalchemy_bigquery-1.13.0.tar.gz
  • Upload date:
  • Size: 117.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.15

File hashes

Hashes for sqlalchemy_bigquery-1.13.0.tar.gz:

  • SHA256: f994511a409cf2d9067d8f96bf3562ec4d7d7107bdc0e25e23ae818ad8046af1
  • MD5: 08039efd84d6ed842495107ca9703c28
  • BLAKE2b-256: a59f131b74407ab062c11967f693d9074fc64ccbe9fdee00813f1ecdb98fc4df

File details

Details for the file sqlalchemy_bigquery-1.13.0-py2.py3-none-any.whl.

File metadata

File hashes

Hashes for sqlalchemy_bigquery-1.13.0-py2.py3-none-any.whl:

  • SHA256: 7af8e58e9d22db7b1268753463a95e38a9e6d6fe923687557288fa4be74753d4
  • MD5: 8e555c102d25bf3b3bf4f7192940683e
  • BLAKE2b-256: 71c466036315c00d6d74656a8c84d2ccb7033be555858625e7eaf9b4142f60c2
