
The faster, more intuitive way of working with Google Cloud Platform services and frameworks

Project description

LEGAL DISCLAIMER

Neither this package nor the author, tomathon, are affiliated with or endorsed by Google. The inclusion of Google trademark(s), if any, upon this webpage is solely to identify Google contributors, goods, or services, and not for commercial purposes.


🚀 FastGCP is the faster, more intuitive way of working with Google Cloud Platform services and frameworks.

The fastgcp package is built on top of the canonical Google Python packages, without any alteration to Google's base code.

Full documentation can be found here (WIP)


Installation

fastgcp is installed using pip with the command:

$ pip install fastgcp


The BigQuery class

One of the main FastGCP classes is BigQuery. It allows users to work with BQ database tables and objects in a more Pythonic way.

Quickstart

Import BigQuery:

from fastgcp import BigQuery

Initialize BigQuery using your GCP project ID:

bq = BigQuery(project_id = "gcp-project-name")

Alternatively, to initialize using a specific service account key, pass the full key path to your key.json:

bq = BigQuery(
    project_id = "gcp-project-name",
    service_acct_key_path = "/full/path/to/your/service/account/key.json"
)
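As a hedged alternative not shown above: fastgcp wraps Google's client libraries, which resolve credentials through Application Default Credentials and honor the GOOGLE_APPLICATION_CREDENTIALS environment variable. Assuming fastgcp defers to ADC when no service_acct_key_path is given, pointing that variable at your key file before constructing BigQuery should also work:

```python
import os

# Point Application Default Credentials at the service account key.
# Assumption: fastgcp falls back to ADC when service_acct_key_path is
# not supplied, as the underlying google-cloud clients do.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/full/path/to/your/service/account/key.json"

# bq = BigQuery(project_id = "gcp-project-name")
```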

Common BigQuery Methods:

bq.read_bigquery

Read an existing BigQuery table into a DataFrame

read_bigquery(bq_dataset_dot_table = None, date_cols = [], preview_top = None, to_verbose = True, use_polars = True)

  • bq_dataset_dot_table : the "dataset-name.table-name" path of the existing BigQuery table
  • date_cols : [optional] column(s), passed inside a list, that should be parsed as dates
  • preview_top : [optional] only read in the top N rows
  • to_verbose : should info be printed? defaults to True
  • use_polars : should a polars DataFrame be returned instead of a pandas DataFrame? defaults to True

EX:

my_table = bq.read_bigquery("my_bq_dataset.my_bq_table")

my_table = bq.read_bigquery("my_bq_dataset.my_bq_table", date_cols = ['date'])
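The docs don't show the exact conversion date_cols performs, but it presumably parses string columns into datetime values, much like pandas' parse_dates. A minimal stdlib sketch of that kind of conversion (the rows structure here is made up for illustration):

```python
from datetime import datetime

# Made-up sample rows standing in for a fetched table.
rows = [
    {"date": "2024-06-01", "sales": 100},
    {"date": "2024-06-02", "sales": 250},
]

# Parse each listed column from ISO date strings into datetime objects.
date_cols = ["date"]
for row in rows:
    for col in date_cols:
        row[col] = datetime.strptime(row[col], "%Y-%m-%d")
```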

-----------

bq.read_custom_query

Read an existing BigQuery table into a DataFrame, using a custom SQL expression

read_custom_query(custom_query, to_verbose = True, use_polars = True)

  • custom_query : the custom BigQuery SQL query that will produce a table to be read into a DataFrame
  • to_verbose : should info be printed? defaults to True
  • use_polars : should a polars DataFrame be returned instead of a pandas DataFrame? defaults to True

EX:

my_custom_table = bq.read_custom_query("""
    SELECT
        date,
        sales,
        products
    FROM
        my_bq_project_id.my_bq_dataset.my_bq_table
    WHERE
        sales_month = 'June'
""")
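Because read_custom_query takes a plain SQL string, it pairs naturally with ordinary Python string building. A sketch (the table path and filter value are illustrative; interpolate only code-controlled values, never raw user input):

```python
from textwrap import dedent

table = "my_bq_project_id.my_bq_dataset.my_bq_table"  # illustrative path
month = "June"

custom_query = dedent(f"""
    SELECT date, sales, products
    FROM {table}
    WHERE sales_month = '{month}'
""").strip()

# my_custom_table = bq.read_custom_query(custom_query)
```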

-----------

bq.write_bigquery

Write a DataFrame to BigQuery

write_bigquery(df, bq_dataset_dot_table = None, use_schema = None, append_to_existing = False, to_verbose = True)

  • df : the DataFrame to be written to a BigQuery table
  • bq_dataset_dot_table : the "dataset-name.table-name" path of the target BigQuery table
  • use_schema : [optional] a custom schema for the BigQuery table. NOTE: see bq.guess_schema
  • append_to_existing : should the DataFrame be appended to an existing BigQuery table? defaults to False (create new / overwrite)
  • to_verbose : should info be printed? defaults to True

EX:

bq.write_bigquery(my_df, "my_bq_dataset.my_data")

bq.write_bigquery(my_df, "my_bq_dataset.my_data", append_to_existing = True)
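The output format of bq.guess_schema isn't documented in this excerpt, but schema inference generally amounts to mapping each column's type to a BigQuery type name. A rough, hypothetical sketch (sketch_guess_schema and its dict output are inventions for illustration, not fastgcp's API):

```python
from datetime import date, datetime

# Hypothetical Python-type -> BigQuery-type mapping.
_BQ_TYPES = {bool: "BOOLEAN", int: "INTEGER", float: "FLOAT",
             str: "STRING", date: "DATE", datetime: "TIMESTAMP"}

def sketch_guess_schema(records):
    """Infer {column: bigquery_type} from the first dict record."""
    first = records[0]
    return {col: _BQ_TYPES.get(type(val), "STRING") for col, val in first.items()}

schema = sketch_guess_schema(
    [{"date": date(2024, 6, 1), "sales": 100.0, "product": "A"}]
)
```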

-----------

bq.send_query

Send a custom SQL query to BigQuery. The query is executed entirely within BigQuery; nothing is returned

send_query(que, to_verbose = True)

  • que : the custom SQL query to be sent and carried out within BigQuery
  • to_verbose : should info be printed? defaults to True

EX:

bq.send_query("""
    CREATE TABLE my_bq_project_id.my_bq_dataset.my_new_bq_table AS 
    (
        SELECT
            date,
            sales,
            products
        FROM
            my_bq_project_id.my_bq_dataset.my_bq_table
        WHERE
            sales_month = 'June'
    )
""")
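Because send_query executes arbitrary SQL, it's worth validating any identifiers you interpolate into a statement. A hypothetical guard (safe_table_path is an invention for illustration, not part of fastgcp):

```python
import re

# Accept only project.dataset.table paths built from safe characters.
_IDENT = re.compile(r"^[A-Za-z_][\w-]*(\.[A-Za-z_][\w-]*){2}$")

def safe_table_path(path):
    """Raise if path is not a plain project.dataset.table identifier."""
    if not _IDENT.match(path):
        raise ValueError(f"suspicious table path: {path!r}")
    return path

stmt = f"DROP TABLE IF EXISTS {safe_table_path('my_bq_project_id.my_bq_dataset.my_new_bq_table')}"
# bq.send_query(stmt)
```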

-----------

Project details


Download files


Source Distribution

fastgcp-0.1.0.tar.gz (7.4 kB)

Uploaded Source

Built Distribution


fastgcp-0.1.0-py3-none-any.whl (6.8 kB)

Uploaded Python 3

File details

Details for the file fastgcp-0.1.0.tar.gz.

File metadata

  • Download URL: fastgcp-0.1.0.tar.gz
  • Upload date:
  • Size: 7.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for fastgcp-0.1.0.tar.gz

  • SHA256 : b28db9b27a242a1882e34de1130dcbf89f6498b8a6d16c3f4774fa95bc2e9d37
  • MD5 : e7201c503c1378e456e45a45c1daf336
  • BLAKE2b-256 : ccab32e969a78732b3baafc5e9a5855b71e8013deeab0eba7cfad1e1e8684d2c


File details

Details for the file fastgcp-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: fastgcp-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 6.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for fastgcp-0.1.0-py3-none-any.whl

  • SHA256 : 5b26aa643d46da9b07a67e7b354271a70234376cc086162eeb3010e652fe8526
  • MD5 : cdbb6ed14567c2bc6e6dd74d5fdb56e7
  • BLAKE2b-256 : 262fe850c6fdbf75459a0d2c79234e2d50d37ddb7372649aafebfd46a816e0c5

