
neoval-py-utils

neoval-py-utils is a Python utilities package developed by Neoval to assist with the Extract, Load and Transform (ELT/ETL) of data from Google Cloud Platform (GCP) services.

The main difference between this utilities package and the BigQuery-provided APIs is a faster export: running a BigQuery extract_job to a bucket and downloading the result is faster, and can be improved further by increasing the machine's download speed. We also use local caching so that the same query is not needlessly executed or downloaded repeatedly. With this package the user can also create databases that can be embedded on a machine for a website or application.
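For example, a minimal sketch of the caching behaviour (the Exporter API is shown under Usage below; the query here is purely illustrative):

from neoval_py_utils.exporter import Exporter

# Pass a cache directory to enable local caching (see Usage below).
exporter = Exporter(cache_dir="./cache")

# The first call runs the BigQuery job and downloads the result.
df = exporter.export("SELECT 1 AS one")

# An identical query within the default 12-hour window is served
# from the local cache rather than re-executed and re-downloaded.
df = exporter.export("SELECT 1 AS one")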

Functionalities include:

  • exporter
    • Exporting data from BigQuery (bq) to a pandas DataFrame, PyArrow Table or Google Cloud Storage (GCS).
    • The source can be a bq query or a bq table.
  • ipdb
    • Building and preparing embedded in-process databases (IPDB) from BigQuery datasets.
    • Supports SQLite and DuckDB, configured with a YAML file; see the examples below.
    • Supports templating for transformations after the initial build.

Development

All development must take place on a feature branch, and a pull request is required; a user is not allowed to commit directly to main. The automated workflow in this repo (using python-semantic-release) requires Angular-style commit messages to update the package version and CHANGELOG. All commits must be formatted in this way before a PR can be merged; a user who wants to develop without using this format for every commit can simply squash non-Angular commit messages prior to merge. A PR may only be merged with the rebase and merge method, to ensure that only Angular-style commits end up on main.
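For reference, Angular-style commit messages follow the type(scope): subject convention; with python-semantic-release's defaults, fix commits typically trigger a patch release and feat commits a minor release. Hypothetical examples:

fix(exporter): handle empty query results
feat(ipdb): add an option to skip upload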

Upon merge to main, the deploy workflow will facilitate the following:

  • bump the version in pyproject.toml
  • update the CHANGELOG using all commits added
  • tag and release, if required
  • publish to PyPI

Getting Started

Prerequisites

TODO

Tests

For the integration tests to pass, you will need to be authenticated against a Google Cloud project with Storage Admin and BigQuery Job permissions.

You can authenticate by setting GOOGLE_APPLICATION_CREDENTIALS as an environment variable or by running gcloud auth application-default login.

Specify the GCP project with gcloud config set project <project-id>.

Run unit and integration tests with poetry run task test.

To run tests with coverage, use poetry run task test-with-coverage.
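Putting the above together, a typical authenticated test run looks like:

gcloud auth application-default login
gcloud config set project <project-id>
poetry run task test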

Usage

TODO installation with PyPI
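In the meantime, since the package is published to PyPI, installation should presumably be (not yet confirmed by these docs):

pip install neoval-py-utils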

The following assumes that neoval-py-utils is installed successfully as a dependency and that you have permissions for GCP Storage and BigQuery.

Examples of usage

Export BQ Datasets or Queries >> DataFrame or GCS

from neoval_py_utils.exporter import Exporter
# To query a BigQuery table and return a polars DataFrame. Results are cached, kept for 12 hours by default.
exporter = Exporter() # To use the cache, pass a path to the constructor, e.g. Exporter(cache_dir="./cache")
pl_df = exporter.export("SELECT word FROM `bigquery-public-data.samples.shakespeare` GROUP BY word ORDER BY word DESC LIMIT 3")

# `export` is aliased by the `<` operator and gives the same results as above.
pl_df = exporter < "SELECT word FROM `bigquery-public-data.samples.shakespeare` GROUP BY word ORDER BY word DESC LIMIT 3"


# To export a whole table
all_pl_df = exporter.export("bigquery-public-data.samples.shakespeare")


# To export a BigQuery table to parquet file(s) in a GCS bucket. Returns a list of blobs.
blobs = exporter.bq_to_gcs("my-dataset.my-table")
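The returned blobs can then be processed with the standard google-cloud-storage client. A minimal sketch, assuming the list contains google.cloud.storage.Blob objects (an assumption, not stated above):

# Assumption: items in `blobs` are google.cloud.storage.Blob objects.
for blob in blobs:
    # Download each exported parquet part to the current directory.
    blob.download_to_filename(blob.name.split("/")[-1])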

Create In-Process (Embedded) Databases

# Python CLI example to build the in-process db(s).
# The upload bucket is optional; if given, the built db is uploaded to the specified bucket.
poetry run ipdb build <DBT_DATASET> <GCLOUD_PROJECT_ID> <DB_PATH> <CONFIG_PATH> --upload-bucket <UPLOAD_BUCKET>
# If you would like to run it locally in this repo, ensure your PYTHONPATH=./src and run:
poetry run ipdb build samples bigquery-public-data tests/artifacts/in_process_db tests/resources/good.config.yaml

Example of config.yaml

sqlite:
    -   name: shakespeare
        primary_key: null
duckdb:
    -   name: shakespeare
        primary_key: null
        description: "Word counts from Shakespeare work - gcp public dataset"

# To apply sql templates after the in-process db is built:
poetry run ipdb prepare <DBT_DATASET> <GCLOUD_PROJECT_ID> <DB_PATH> <TEMPLATES_PATH>
# If you would like to run it locally in this repo, you can run:
poetry run ipdb prepare samples bigquery-public-data tests/artifacts/in_process_db tests/resources/templates
# For more info you can run:
poetry run ipdb --help # which will return

 Usage: ipdb [OPTIONS] COMMAND [ARGS]...
╭─ Commands ────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ build                           Build the in process database(s).                                                 │
│ make-config                     Prints a default configuration to be used with the build command.                 │
│ prepare                         Run scripts to add views/virtual tables/etc. to the database(s).                  │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
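The template format is not documented here, but as a hypothetical illustration, a file under <TEMPLATES_PATH> might be a SQL script that prepare applies to add a view:

-- hypothetical template, e.g. tests/resources/templates/word_summary.sql
-- applied by `ipdb prepare` after the initial build
CREATE VIEW word_summary AS
SELECT word, SUM(word_count) AS total_count
FROM shakespeare
GROUP BY word;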


