Utilities for interacting with Google and Azure, and for cleaning data

Project description

do-data-utils


This package provides functions for connecting to different cloud sources and for cleaning data. Package repo on PyPI: do-data-utils - PyPI

Installation

Commands

To install the latest released version, use the following command:

pip install do-data-utils

You can also install a specific version, for example:

pip install do-data-utils==2.4.0

Install in requirements.txt

You can also pin the package in your requirements.txt:

# requirements.txt

do-data-utils==2.4.0
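
Then install it with:

pip install -r requirements.txt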

Available Subpackages

  • google – Utilities for Google Cloud Platform.
  • azure – Utilities for Azure services.
  • pathutils – Utilities related to paths.
  • preprocessing – Utilities for data preprocessing.

For a full list of functions, see the overview documentation.
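
Each subpackage is imported straight from the package root; the imports below mirror the examples later on this page:

from do_data_utils.google import get_secret, gcs_to_df
from do_data_utils.azure import databricks_to_df
from do_data_utils.pathutils import add_project_root
from do_data_utils.preprocessing import clean_phone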

Example Usage

The package is built around the following workflow:

  1. You keep service-account JSON secrets (for the cloud services) in GCP Secret Manager.
  2. You keep a local JSON key file that grants access to GCP Secret Manager.
  3. You retrieve the secret for the cloud platform you want to interact with from Secret Manager.
  4. You pass that secret to the utility functions, as the examples below show.

Google

GCS

Download
from do_data_utils.google import get_secret, gcs_to_df


# Load secret key and get the secret to access GCS
secret_path = 'secrets/secret-manager-key.json'
secret = get_secret(secret_id='gcs-secret-id-dev', secret=secret_path, as_json=True)

# Download a CSV file into a DataFrame
gcspath = 'gs://my-ai-bucket/my-path-to-csv.csv'
df = gcs_to_df(gcspath, secret, polars=False)

Similarly, you can download a JSON file from GCS into a dictionary:

from do_data_utils.google import get_secret, gcs_to_dict


# Load secret key and get the secret to access GCS
secret_path = 'secrets/secret-manager-key.json'
secret = get_secret(secret_id='gcs-secret-id-dev', secret=secret_path, as_json=True)

# Download the content from GCS
gcspath = 'gs://my-ai-bucket/my-path-to-json.json'
my_dict = gcs_to_dict(gcspath, secret=secret)
Upload
import json

from do_data_utils.google import get_secret, dict_to_json_gcs


# Load the secret-manager key and get the secret to access GCS.
# From version 2.3.0, the `secret` argument accepts either a dict
# or a path to the JSON key file, so reading the file yourself is optional.
with open('secrets/secret-manager-key.json', 'r') as f:
    secret_info = json.load(f)

secret = get_secret(secret_id='gcs-secret-id-dev', secret=secret_info, as_json=True)

my_setting_dict = {
    'param1': 'abc',
    'param2': 'xyz',
}

gcspath = 'gs://my-bucket/my-path-to-json.json'
dict_to_json_gcs(dict_data=my_setting_dict, gcspath=gcspath, secret=secret)

GBQ

import json

from do_data_utils.google import get_secret, gbq_to_df


# Load the secret-manager key and get the secret to access GBQ
with open('secrets/secret-manager-key.json', 'r') as f:
    secret_info = json.load(f)

# You can pass either a dict or a path to the JSON key in the `secret` argument
secret = get_secret(secret_id='gbq-secret-id-dev', secret=secret_info, as_json=True)

# Query
query = 'select * from `my-project.my-dataset.my-table`'
df = gbq_to_df(query, secret, polars=False)

Azure/Databricks

import json

from do_data_utils.azure import databricks_to_df
from do_data_utils.google import get_secret


# Load the secret-manager key and get the secret to access Databricks
with open('secrets/secret-manager-key.json', 'r') as f:
    secret_info = json.load(f)

secret = get_secret(secret_id='databricks-secret-id-dev', secret=secret_info, as_json=True)

# Run a query against Databricks SQL and download the result
query = 'select * from datadev.dsplayground.my_table'
df = databricks_to_df(query, secret, polars=False)

Path utils

from do_data_utils.pathutils import add_project_root

# Adds your root folder to sys.path,
# so you can do imports from the root directory
add_project_root(levels_up=1)
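
For example, when a script lives one level below the project root, the call might look like this (the layout and the module name here are hypothetical):

from do_data_utils.pathutils import add_project_root

# Running from <project-root>/notebooks/analysis.py,
# the project root is one directory up
add_project_root(levels_up=1)

import src.my_module  # a hypothetical module that lives at the project root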

Preprocessing

from do_data_utils.preprocessing import clean_phone, clean_citizenid

phone_numbers = '090-123-4567|0912345678|0901234567-9'
phones_valid = clean_phone(phone_numbers) # Gets the valid phone numbers

citizenid = '0123456789012'
citizenid_cleaned = clean_citizenid(citizenid)
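
To clean a whole column at once, the same functions can be applied row-wise; a minimal pandas sketch (the DataFrame here is purely illustrative):

import pandas as pd

from do_data_utils.preprocessing import clean_phone

df = pd.DataFrame({'phone': ['090-123-4567|0912345678', '0901234567-9']})

# Apply the cleaner to every row of the column
df['phone_clean'] = df['phone'].apply(clean_phone)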

Project details


Download files

Download the file for your platform.

Source Distribution

do_data_utils-2.4.0.tar.gz (22.9 kB)

Uploaded Source

Built Distribution


do_data_utils-2.4.0-py3-none-any.whl (32.4 kB)

Uploaded Python 3

File details

Details for the file do_data_utils-2.4.0.tar.gz.

File metadata

  • Download URL: do_data_utils-2.4.0.tar.gz
  • Upload date:
  • Size: 22.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.7

File hashes

Hashes for do_data_utils-2.4.0.tar.gz

  • SHA256: c9f662f8474961e87b493573d24a8dd67bbcf022acdb3f14146ab29b3f0edf77
  • MD5: 59be4b963dda6971690e3709f51faa8d
  • BLAKE2b-256: 189f2b48ff700a7d67dfce253db0d25aae19b942791dc38409cf82de9b11629f

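To verify a downloaded file against the digests published above, here is a minimal standard-library sketch (it assumes the sdist sits in the current directory):

import hashlib

# Compute the SHA256 digest of the downloaded sdist
with open('do_data_utils-2.4.0.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

# Compare against the SHA256 published above
assert digest == 'c9f662f8474961e87b493573d24a8dd67bbcf022acdb3f14146ab29b3f0edf77'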

File details

Details for the file do_data_utils-2.4.0-py3-none-any.whl.

File metadata

  • Download URL: do_data_utils-2.4.0-py3-none-any.whl
  • Upload date:
  • Size: 32.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.7

File hashes

Hashes for do_data_utils-2.4.0-py3-none-any.whl

  • SHA256: 24aaa672553222ebe8b5f456a6bd5e0b3d4fc064de89788ec627dcaa179d09f0
  • MD5: bcd92b06399396791791f66f46bb3e6d
  • BLAKE2b-256: 1bff40481bf5eed02fa439f57af2cf0334c78fa8a391aaf120dc599b3e028442

