A package to ease interaction with cloud services, DB connections and commonly used functionalities in data analytics.
Project description
Instackup
This Python library is an open-source way to standardize and simplify connections to cloud-based tools, databases and other resources commonly used in data manipulation and analysis. It can help BI teams by providing a single code base that works both in local development and testing and in remote production (automated, scheduled run) environments.
Index
Current release
Version 0.3.0 (beta)
Prerequisites
- Have Python 3.6 or higher installed;
- Create a YAML (or JSON) file with credentials information;
- [Optional but recommended] Configure an Environment Variable that points to where the credentials file is.
1. Have Python 3.6 or higher installed
Go to this link and download the most recent version that is compatible with this package.
2. Create a YAML (or JSON) file with credentials information
Use the files secret_template.yml or secret_blank.yml as a base, or copy and paste the code below and replace its values with the ones from your credentials/projects:
#################################################################
# #
# ACCOUNTS CREDENTIALS. DO NOT SHARE THIS FILE. #
# #
# Specifications: #
# - For the credentials you don't have, leave it blank. #
# - Keep Google's secret file in the same folder as this file. #
# - BigQuery project_ids must be strings, i.e., inside quotes. #
# #
# Recommendations: #
# - YAML specification: https://yaml.org/spec/1.2/spec.html #
# - Keep this file in a static path like a folder within the #
# Desktop. Ex.: C:\Users\USER\Desktop\Credentials\secret.yml #
# #
#################################################################
Location: local

Google:
  default:
    project_id: project_id
    project_name: project_name
    project_number: "000000000000"
    secret_filename: api_key.json

AWS:
  default:
    access_key: AWSAWSAWSAWSAWSAWSAWS
    secret_key: some_secret_key_value

RedShift:
  default:
    cluster_credentials:
      dbname: db
      user: masteruser
      host: blablabla.random.us-east-2.redshift.amazonaws.com
      cluster_id: cluster
      port: 5439
    master_password:
      dbname: db
      user: masteruser
      host: blablabla.random.us-east-2.redshift.amazonaws.com
      password: masterpassword
      port: 5439

PostgreSQL:
  default:
    dbname: postgres
    user: postgres
    host: localhost
    password: ""
    port: 5432

MySQL:
  default:
    dbname: mydb
    host: localhost
    user: root
    password: ""
    port: 3306
Save this file with the .yml extension in a folder where you know the path won't be modified, like the Desktop folder (Example: C:\Users\USER\Desktop\Credentials\secret.yml).
If you prefer, you can follow this step using a JSON file instead. Follow the same instructions, but use .json instead of .yml.
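If you want to sanity-check that the file parses correctly before moving on, here is a minimal sketch using the PyYAML package (yaml is not a dependency documented here, and the path below is just an example to adjust to your own setup):
import yaml

# Adjust the path to wherever you saved your credentials file
with open(r"C:\Users\USER\Desktop\Credentials\secret.yml", "r") as stream:
    secrets = yaml.safe_load(stream)

# The top-level keys should match the sections of the template above
print(list(secrets.keys()))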
3. [Optional but recommended] Configure an Environment Variable that points to where the credentials file is
To configure the Environment Variable, follow the instructions below, based on your Operating System.
Windows
- Place the YAML (or JSON) file in a folder whose name or path you won't change later;
- In Windows Search, type Environment Variables and click on the Control Panel result;
- Click on the button Environment Variables...;
- In Environment Variables, click on the button New;
- In Variable name, type CREDENTIALS_HOME and in Variable value, paste the full path to the recently created YAML (or JSON) file;
- Click OK in the 3 open windows.
Linux/MacOS
- Place the YAML (or JSON) file in a folder whose name or path you won't change later;
- Open the file .bashrc. If it doesn't exist, create one in the HOME directory. If you don't know how to get there, open the Terminal, type cd and press ENTER;
- Inside the file, on a new line, type the command export CREDENTIALS_HOME="/path/to/file", replacing the content inside the quotes with the full path to the recently created YAML (or JSON) file;
- Save the file and restart all open Terminal windows.
Note: If you don't follow this last prerequisite, you need to set the environment variable manually inside the code. To do that, add the following lines near the top of your Python script, replacing the content inside the quotes with the full path to the recently created YAML (or JSON) file:
import os
os.environ["CREDENTIALS_HOME"] = "/path/to/file"
Installation
Go to the Terminal and type:
pip install instackup
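To confirm the installation worked, you can try importing the package in a Python shell (assuming the default top-level package name, instackup):
import instackup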
Documentation
Check the documentation by clicking on each topic. Brief usage sketches are included after some of the method lists below.
- bigquery_tools
- Global Variables
- BigQueryTool
- __init__(self, authenticate=True)
- query(self, sql_query)
- query_and_save_results(self, sql_query, dest_dataset, dest_table, writing_mode="TRUNCATE", create_table_if_needed=False)
- list_datasets(self)
- create_dataset(self, dataset, location="US")
- list_dataset_permissions(self, dataset)
- add_dataset_permission(self, dataset, role, email_type, email)
- remove_dataset_permission(self, dataset, email)
- list_tables_in_dataset(self, dataset, get=None, return_type="dict")
- get_table_schema(self, dataset, table)
- convert_postgresql_table_schema(self, dataframe, parse_json_columns=True)
- convert_multiple_postgresql_tables_schema(self, dataframe, parse_json_columns=True)
- convert_dataframe_to_numeric(dataframe, exclude_columns=[], **kwargs)
- clean_dataframe_column_names(dataframe, allowed_chars="abcdefghijklmnopqrstuvwxyz0123456789", special_treatment={})
- upload(self, dataframe, dataset, table, **kwargs)
- create_empty_table(self, dataset, table, schema)
- upload_from_gcs(self, dataset, table, gs_path, file_format="CSV", header_rows=1, delimiter=",", encoding="UTF-8", ignore_unknown_values=False, max_bad_records=0, writing_mode="APPEND", create_table_if_needed=False, schema=None)
- upload_from_file(self, dataset, table, file_location, file_format="CSV", header_rows=1, delimiter=",", encoding="UTF-8", ignore_unknown_values=False, max_bad_records=0, writing_mode="APPEND", create_table_if_needed=False, schema=None)
- start_transfer(self, project_path=None, project_name=None, transfer_name=None)
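For orientation, a minimal sketch of how BigQueryTool might be used, based only on the signatures above (the import path and the exact return types are assumptions, not guaranteed by this index):
from instackup.bigquery_tools import BigQueryTool

bq = BigQueryTool()  # authenticate=True by default, using the configured credentials

# Run a query; the result is expected to be tabular (e.g. a pandas DataFrame)
results = bq.query("SELECT * FROM `project_id.some_dataset.some_table` LIMIT 10")

# List the datasets visible to the authenticated project
datasets = bq.list_datasets()
print(datasets)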
- gcloudstorage_tools
- GCloudStorageTool
- __init__(self, uri=None, bucket=None, subfolder="", filename=None, authenticate=True)
- bucket(self) @property
- blob(self) @property
- uri(self) @property
- set_bucket(self, bucket)
- set_subfolder(self, subfolder)
- select_file(self, filename)
- list_all_buckets(self)
- get_bucket_info(self, bucket=None)
- get_file_info(self, filename=None, info=None)
- list_contents(self, yield_results=False)
- rename_file(self, new_filename)
- rename_subfolder(self, new_subfolder)
- upload_file(self, filename, remote_path=None)
- upload_subfolder(self, folder_path)
- upload_from_dataframe(self, dataframe, file_format='CSV', filename=None, overwrite=False, **kwargs)
- download_file(self, download_to=None, remote_filename=None, replace=False)
- download_subfolder(self)
- download_on_dataframe(self, **kwargs)
- download_as_string(self, remote_filename=None, encoding="UTF-8")
- delete_file(self)
- delete_subfolder(self)
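A minimal sketch of a download/upload round trip with GCloudStorageTool, based on the signatures above (bucket, subfolder and file names are placeholders; the import path is an assumption):
from instackup.gcloudstorage_tools import GCloudStorageTool

gs = GCloudStorageTool(uri="gs://some_bucket/some_subfolder/")

# List what is currently under the selected bucket/subfolder
print(gs.list_contents())

# Point at a specific blob and download it locally
gs.select_file("data.csv")
gs.download_file(download_to="data.csv")

# Upload a local file back to the same subfolder
gs.upload_file("data.csv")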
- general_tools
- gsheets_tools
- GSheetsTool
- __init__(self, sheet_url=None, sheet_key=None, sheet_gid=None, auth_mode='secret_key', read_only=False, scopes=['https://www.googleapis.com/auth/spreadsheets', 'https://www.googleapis.com/auth/drive'])
- set_spreadsheet_by_url(self, sheet_url)
- set_spreadsheet_by_key(self, sheet_key)
- set_worksheet_by_id(self, sheet_gid)
- download(self)
- upload(self, dataframe, write_mode="APPEND", force_upload=False)
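A minimal sketch for GSheetsTool based on the signatures above (the sheet URL is a placeholder; the import path and the DataFrame return type are assumptions):
from instackup.gsheets_tools import GSheetsTool

sheet = GSheetsTool(sheet_url="https://docs.google.com/spreadsheets/d/SHEET_KEY/edit#gid=0")

# Download the selected worksheet (expected as a tabular structure, e.g. a DataFrame)
df = sheet.download()

# Append the same data back to the worksheet
sheet.upload(df, write_mode="APPEND")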
- heroku_tools
- redshift_tools
- RedShiftTool
- __init__(self, connect_by_cluster=True)
- connect(self, fail_silently=False)
- commit(self)
- rollback(self)
- close_connection(self)
- execute_sql(self, command, fail_silently=False)
- query(self, sql_query, fetch_through_pandas=True, fail_silently=False)
- describe_table(self, table, schema="public", fetch_through_pandas=True, fail_silently=False)
- get_all_db_info(self, get_json_info=True, fetch_through_pandas=True, fail_silently=False)
- unload_to_S3(self, redshift_query, s3_path, filename, unload_options="MANIFEST GZIP ALLOWOVERWRITE REGION 'us-east-2'")
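A minimal sketch of a query with RedShiftTool based on the signatures above (the import path and return type are assumptions; the connection data comes from the credentials file):
from instackup.redshift_tools import RedShiftTool

rs = RedShiftTool()  # connect_by_cluster=True by default
rs.connect()

# fetch_through_pandas=True suggests the result comes back as a DataFrame
df = rs.query("SELECT * FROM public.some_table LIMIT 10")

rs.close_connection()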
- s3_tools
- S3Tool
- __init__(self, uri=None, bucket=None, subfolder="")
- bucket(self) @property
- uri(self) @property
- set_bucket(self, bucket)
- set_subfolder(self, subfolder)
- rename_file(self, old_filename, new_filename)
- rename_subfolder(self, new_subfolder)
- list_all_buckets(self)
- list_contents(self, yield_results=False)
- upload_file(self, filename, remote_path=None)
- upload_subfolder(self, folder_path)
- download_file(self, remote_path, filename=None)
- download_subfolder(self, download_to=None)
- delete_file(self, filename, fail_silently=False)
- delete_subfolder(self)
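A minimal sketch for S3Tool based on the signatures above (bucket, subfolder and file names are placeholders; the import path is an assumption):
from instackup.s3_tools import S3Tool

s3 = S3Tool(bucket="some_bucket", subfolder="some_subfolder/")

# List the contents of the selected bucket/subfolder
print(s3.list_contents())

# Download a remote file and upload a local one
s3.download_file("s3://some_bucket/some_subfolder/data.csv", filename="data.csv")
s3.upload_file("data.csv")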
- sql_tools
Version logs
See what changed in every version.
- Beta releases
- Alpha releases
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file instackup-0.3.0.tar.gz.
File metadata
- Download URL: instackup-0.3.0.tar.gz
- Upload date:
- Size: 38.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.24.0 setuptools/49.2.1 requests-toolbelt/0.9.1 tqdm/4.56.2 CPython/3.9.0
File hashes
- SHA256: 6b06e5133d713675fec72e07cf5748a35692054ce117e102a92a326de9bc3991
- MD5: 872b3e8334c7e44e97d7496e1c2bed90
- BLAKE2b-256: 4fa496e7b24e7fa15a8d0735bdf6a04a931c3557b8455fa9a6a1fdc5b4d9c7b6
File details
Details for the file instackup-0.3.0-py3-none-any.whl.
File metadata
- Download URL: instackup-0.3.0-py3-none-any.whl
- Upload date:
- Size: 38.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.7.0 requests/2.24.0 setuptools/49.2.1 requests-toolbelt/0.9.1 tqdm/4.56.2 CPython/3.9.0
File hashes
- SHA256: 1bc62b219d00ccaaba268297b66db50566ee882db7dcb7151cce364fb10430cb
- MD5: 50228e8334bc969d5726f2e3c10f315e
- BLAKE2b-256: f95f862c3664fb5a8d7db42686968093b0e276938f20bacd14ab50ef9b0c069a