

Project description

LEGAL DISCLAIMER

Neither this package nor the author, tomathon, are affiliated with or endorsed by Google. The inclusion of Google trademark(s), if any, upon this webpage is solely to identify Google contributors, goods, or services, and not for commercial purposes.


🚀 FastGCP is the faster, more intuitive way of working with Google Cloud Platform services and frameworks.

The fastgcp package is built on top of the canonical Google Python package(s), without any alteration to Google's base code.

Full documentation can be found here (WIP)


Installation

fastgcp is installed using pip with the command:

$ pip install fastgcp


The BigQuery class

One of the main FastGCP classes is BigQuery. It lets users work with BigQuery tables and objects in a more Pythonic way.

Quickstart

Import BigQuery:

from fastgcp import BigQuery

Initialize BigQuery using your GCP project ID:

bq = BigQuery(project_id = "gcp-project-name")

Alternatively, to initialize using a specific service account key, pass the full key path to your key.json:

bq = BigQuery(
    project_id = "gcp-project-name",
    service_acct_key_path = "/full/path/to/your/service/account/key.json"
)
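Under the hood, an initializer like this has to choose between two credential paths in the canonical google-cloud-bigquery client: Application Default Credentials when no key is given, or an explicit service-account key file when one is passed. The sketch below is purely illustrative of that decision; the function name and return shape are hypothetical, not fastgcp's actual internals.

```python
def pick_credential_source(project_id, service_acct_key_path=None):
    """Illustrative only: mirror the choice a GCP client wrapper makes.

    No key path  -> Application Default Credentials (gcloud auth, env var, etc.)
    Key path set -> load that service-account key explicitly.
    """
    if service_acct_key_path is None:
        return {"project": project_id, "credentials": "application-default"}
    return {"project": project_id, "credentials": service_acct_key_path}

print(pick_credential_source("gcp-project-name"))
print(pick_credential_source("gcp-project-name", "/path/to/key.json"))
```

With the canonical client, the two branches correspond to `bigquery.Client(project=...)` and `bigquery.Client(project=..., credentials=service_account.Credentials.from_service_account_file(...))`.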

Common BigQuery Methods:

bq.read_bigquery

Read an existing BigQuery table into a DataFrame

read_bigquery(bq_dataset_dot_table = None, date_cols = [], preview_top = None, to_verbose = True, use_polars = True)

  • bq_dataset_dot_table : the "dataset-name.table-name" path of the existing BigQuery table
  • date_cols : [optional] column(s), passed inside a list, that should be parsed as dates
  • preview_top : [optional] only read in the top N rows
  • to_verbose : should info be printed? defaults to True
  • use_polars : should a polars DataFrame be returned instead of a pandas DataFrame? defaults to True

EX:

my_table = bq.read_bigquery("my_bq_dataset.my_bq_table")

my_table = bq.read_bigquery("my_bq_dataset.my_bq_table", date_cols = ['date'])
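The effect of `date_cols` can be illustrated locally with pandas: after a read, the named columns are parsed into datetimes. This is a hedged sketch of what the option presumably does (the helper name is invented), not fastgcp's actual implementation.

```python
import pandas as pd

def apply_date_cols(df, date_cols):
    # Parse each listed column into a proper datetime dtype,
    # as the date_cols= option presumably does after reading.
    for col in date_cols:
        df[col] = pd.to_datetime(df[col])
    return df

df = pd.DataFrame({"date": ["2024-06-01", "2024-06-02"], "sales": [10, 20]})
df = apply_date_cols(df, ["date"])
print(df.dtypes)
```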

-----------

bq.read_custom_query

Read an existing BigQuery table into a DataFrame, using a custom SQL expression

read_custom_query(custom_query, to_verbose = True, use_polars = True)

  • custom_query : the custom BigQuery SQL query that will produce a table to be read into a DataFrame
  • to_verbose : should info be printed? defaults to True
  • use_polars : should a polars DataFrame be returned instead of a pandas DataFrame? defaults to True

EX:

my_custom_table = bq.read_custom_query("""
    SELECT
        date,
        sales,
        products
    FROM
        my_bq_project_id.my_bq_dataset.my_bq_table
    WHERE
        sales_month = 'June'
""")
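Triple-quoted queries like the one above keep their leading indentation inside the string. That is harmless to BigQuery, but if you want the SQL stored or logged cleanly, the stdlib `textwrap.dedent` strips the common indent:

```python
import textwrap

# The trailing backslash after the opening quotes suppresses the
# leading blank line; dedent removes the shared indentation.
query = textwrap.dedent("""\
    SELECT date, sales, products
    FROM my_bq_project_id.my_bq_dataset.my_bq_table
    WHERE sales_month = 'June'
""")
print(query)
```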

-----------

bq.write_bigquery

Write a DataFrame to BigQuery

write_bigquery(df, bq_dataset_dot_table = None, use_schema = None, append_to_existing = False, to_verbose = True)

  • df : the DataFrame to be written to a BigQuery table
  • bq_dataset_dot_table : the "dataset-name.table-name" path of the destination BigQuery table
  • use_schema : [optional] a custom schema for the BigQuery table. NOTE: see bq.guess_schema
  • append_to_existing : should the DataFrame be appended to an existing BigQuery table? defaults to False (create new / overwrite)
  • to_verbose : should info be printed? defaults to True

EX:

bq.write_bigquery(my_df, "my_bq_dataset.my_data")

bq.write_bigquery(my_df, "my_bq_dataset.my_data", append_to_existing = True)
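When no `use_schema` is supplied, a schema has to be inferred from the DataFrame's dtypes. The sketch below shows one plausible dtype-to-BigQuery-type mapping in the spirit of the `bq.guess_schema` helper mentioned above; the mapping and function here are hypothetical, and the real implementation may differ.

```python
import pandas as pd

# Hypothetical dtype -> BigQuery column type mapping (illustrative only).
_DTYPE_TO_BQ = {
    "int64": "INTEGER",
    "float64": "FLOAT",
    "bool": "BOOLEAN",
    "datetime64[ns]": "TIMESTAMP",
    "object": "STRING",
}

def guess_schema(df):
    # Map each column's pandas dtype to a BigQuery type,
    # falling back to STRING for anything unrecognized.
    return [(col, _DTYPE_TO_BQ.get(str(dtype), "STRING"))
            for col, dtype in df.dtypes.items()]

df = pd.DataFrame({"sales": [1, 2], "region": ["a", "b"]})
print(guess_schema(df))
```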

-----------

bq.send_query

Send a custom SQL query to BigQuery. The query is executed entirely within BigQuery; nothing is returned

send_query(que, to_verbose = True)

  • que : the custom SQL query to be sent and carried out within BigQuery
  • to_verbose : should info be printed? defaults to True

EX:

bq.send_query("""
    CREATE TABLE my_bq_project_id.my_bq_dataset.my_new_bq_table AS 
    (
        SELECT
            date,
            sales,
            products
        FROM
            my_bq_project_id.my_bq_dataset.my_bq_table
        WHERE
            sales_month = 'June'
    )
""")

-----------



Download files

Download the file for your platform.

Source Distribution

fastgcp-0.1.1.tar.gz (7.4 kB)


Built Distribution


fastgcp-0.1.1-py3-none-any.whl (6.8 kB)


File details

Details for the file fastgcp-0.1.1.tar.gz.

File metadata

  • Download URL: fastgcp-0.1.1.tar.gz
  • Upload date:
  • Size: 7.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for fastgcp-0.1.1.tar.gz

  • SHA256 : 064c3c2e5dab03b4730b53b32b084bac79c087cdec94e28934f9f66c10724200
  • MD5 : 45d33026553bcf1ed07c05d86f4b132b
  • BLAKE2b-256 : c204246abf9d33e4c1dc32007f5a9c6f0c2723467a7099f7d48774c586bf6e11


File details

Details for the file fastgcp-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: fastgcp-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 6.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.3

File hashes

Hashes for fastgcp-0.1.1-py3-none-any.whl

  • SHA256 : 23d0d2abe0c1d6fa76d3f1cef140c54374b9a76dad36e3e9190e6469af012830
  • MD5 : a5ba15858a378ad149280bfaa8dec809
  • BLAKE2b-256 : 9679b17d4101860ab08f335bd66efa93fffb2dd9a4b677edec3481fe232da163

