
BigQuery ML Utils

BigQuery ML (BQML) lets you create and run machine learning models in BigQuery using standard SQL queries. The BigQuery ML Utils library is an integrated suite of machine learning tools for building and using BigQuery ML models.

Installation

Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. It addresses the basic problem of conflicting dependencies and versions, and indirectly of permissions.

With virtualenv, it's possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.

Mac/Linux

    pip install virtualenv
    virtualenv <your-env>
    source <your-env>/bin/activate
    <your-env>/bin/pip install bigquery-ml-utils

Windows

    pip install virtualenv
    virtualenv <your-env>
    <your-env>\Scripts\activate
    <your-env>\Scripts\pip.exe install bigquery-ml-utils

Overview

Inference

Transform Predictor

The Transform Predictor feeds input data into a BQML model trained with TRANSFORM. It performs both preprocessing and postprocessing on the input and output. The first argument is a SavedModel that represents the TRANSFORM clause for feature preprocessing. The second argument is a SavedModel or XGBoost Booster that represents the model logic.
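As a rough sketch of that two-stage flow (illustrative only — these are stand-in Python callables, not the library's actual API, which loads real SavedModels and Boosters):

```python
# Conceptual sketch of the Transform Predictor's flow: a preprocessing stage
# (standing in for the TRANSFORM SavedModel) followed by a model stage
# (standing in for the model SavedModel or XGBoost Booster).

class TransformPredictorSketch:
    def __init__(self, transform_fn, model_fn):
        self.transform_fn = transform_fn  # stands in for the TRANSFORM clause
        self.model_fn = model_fn          # stands in for the model logic

    def predict(self, instances):
        features = [self.transform_fn(x) for x in instances]  # preprocessing
        raw = [self.model_fn(f) for f in features]            # model inference
        return [round(y, 4) for y in raw]                     # postprocessing

# Example: scale the input, then apply a toy linear model.
predictor = TransformPredictorSketch(
    transform_fn=lambda x: x / 10.0,
    model_fn=lambda f: 2.0 * f + 1.0,
)
print(predictor.predict([5, 15]))  # [2.0, 4.0]
```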

XGBoost Predictor

The XGBoost Predictor feeds input data into a BQML XGBoost model. It performs both preprocessing and postprocessing on the input and output. The first argument is an XGBoost Booster that represents the model logic. The remaining arguments are model assets.

TensorFlow Ops

BQML TensorFlow Custom Ops provides SQL functions (date, datetime, time, and timestamp functions) that are not available in TensorFlow. The implementation and function behavior align with BigQuery. This is part of an effort to bridge the gap between the SQL community and the TensorFlow community. The following example returns the same result as TIMESTAMP_ADD(timestamp_expression, INTERVAL int64_expression date_part).

>>> timestamp = tf.constant(['2008-12-25 15:30:00+00', '2023-11-11 14:30:00+00'], dtype=tf.string)
>>> interval = tf.constant([200, 300], dtype=tf.int64)
>>> result = timestamp_ops.timestamp_add(timestamp, interval, 'MINUTE')
>>> print(result)
tf.Tensor([b'2008-12-25 18:50:00.0 +0000' b'2023-11-11 19:30:00.0 +0000'], shape=(2,), dtype=string)

Note: /usr/share/zoneinfo is needed for time zone parsing and might not be present in your OS image. Install tzdata to generate it; for example, add the following to your Dockerfile.

RUN apt-get update && DEBIAN_FRONTEND="noninteractive" \
    TZ="America/Los_Angeles" apt-get install -y tzdata
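For reference, the same MINUTE arithmetic can be reproduced with the standard library alone (a plain-Python sketch, independent of the custom op; it normalizes the short '+00' offset from the example to ISO form before parsing):

```python
from datetime import datetime, timedelta

def timestamp_add_minutes(ts: str, minutes: int) -> str:
    """Mimic TIMESTAMP_ADD(ts, INTERVAL minutes MINUTE) for UTC timestamps."""
    if ts.endswith("+00"):          # normalize the short '+00' offset to '+00:00'
        ts += ":00"
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S%z")
    return (dt + timedelta(minutes=minutes)).strftime("%Y-%m-%d %H:%M:%S %z")

print(timestamp_add_minutes("2008-12-25 15:30:00+00", 200))  # 2008-12-25 18:50:00 +0000
print(timestamp_add_minutes("2023-11-11 14:30:00+00", 300))  # 2023-11-11 19:30:00 +0000
```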

Model Generator

Text Embedding Model Generator

The Text Embedding Model Generator automatically loads a text embedding model from TensorFlow Hub and integrates a signature such that the resulting model can be immediately integrated within BQML. Currently, the NNLM, SWIVEL, and BERT embedding models can be selected.

NNLM Text Embedding Model

The NNLM model has a model size of <150MB and is recommended for phrases, news, tweets, reviews, etc. NNLM does not carry any default signatures because it is designed to be utilized as a Keras layer; however, the Text Embedding Model Generator takes care of this.

SWIVEL Text Embedding Model

The SWIVEL model has a model size of <150MB and is recommended for phrases, news, tweets, reviews, etc. SWIVEL does not require pre-processing because the embedding model already satisfies BQML imported model requirements. However, in order to align signatures for NNLM, SWIVEL, and BERT, the Text Embedding Model Generator establishes the same input label for SWIVEL.

BERT Text Embedding Model

The BERT model has a model size of ~200MB and is recommended for phrases, news, tweets, reviews, paragraphs, etc. The BERT model does not carry any default signatures because it is designed to be utilized as a Keras layer. The Text Embedding Model Generator takes care of this and also integrates a text preprocessing layer for BERT.
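The point of aligning signatures can be sketched in plain Python (a conceptual stand-in — the generator actually works with TensorFlow SavedModel signatures): every backend ends up callable the same way under one shared input label, with BERT additionally getting a preprocessing step.

```python
# Conceptual sketch: expose heterogeneous embedding backends behind one
# shared signature, keyed by a common input label ('content' here is an
# illustrative label, not necessarily the generator's real one).

def make_signature(embed_fn, preprocess_fn=None):
    """Wrap a backend so every model is invoked identically: fn(content=[...])."""
    def signature(content):
        texts = preprocess_fn(content) if preprocess_fn else content
        return [embed_fn(t) for t in texts]
    return signature

# Stand-ins: NNLM/SWIVEL need no preprocessing; lowercasing is a toy
# stand-in for BERT's text preprocessing layer.
nnlm = make_signature(lambda t: len(t))
bert = make_signature(lambda t: len(t), preprocess_fn=lambda xs: [x.lower() for x in xs])

# Both backends are called the same way.
print(nnlm(content=["Hello world"]))  # [11]
print(bert(content=["Hello world"]))  # [11]
```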
