
Automatically write pandas DataFrames to SQL by creating pipelines in Azure Data Factory with a copy activity from Blob storage to SQL

Project description




DF to Azure

Python module for fast upload of pandas DataFrames to an Azure SQL Database, using automatically created pipelines in Azure Data Factory.

Introduction

The purpose of this project is to upload large datasets using Azure Data Factory combined with an Azure SQL Database. The process runs through the following steps:

1. The data is uploaded as a .csv file to Azure Blob storage.
2. A SQL table is prepared based on the pandas DataFrame dtypes, which are converted to the corresponding SQLAlchemy types (a minimal sketch of such a mapping follows this list).
3. A pipeline is created in Azure Data Factory for copying the .csv from Blob storage into the SQL table.
4. The pipeline is triggered, so that the .csv file is bulk inserted into the SQL table.
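
Step 2 maps pandas dtypes to SQLAlchemy column types. The exact mapping lives inside the module; the sketch below only illustrates the idea and is not the module's actual implementation.

# Illustrative sketch: how pandas dtypes could map to SQLAlchemy types.
# The mapping used by df_to_azure itself may differ.
import pandas as pd
from sqlalchemy.types import Boolean, DateTime, Float, Integer, String

def sqlalchemy_type(series: pd.Series):
    """Pick a SQLAlchemy column type for a pandas Series (simplified)."""
    if pd.api.types.is_bool_dtype(series):
        return Boolean()
    if pd.api.types.is_integer_dtype(series):
        return Integer()
    if pd.api.types.is_float_dtype(series):
        return Float()
    if pd.api.types.is_datetime64_any_dtype(series):
        return DateTime()
    return String(255)

df = pd.DataFrame({"id": [1, 2], "amount": [9.5, 3.2], "created": pd.to_datetime(["2024-01-01", "2024-01-02"])})
column_types = {col: sqlalchemy_type(df[col]) for col in df.columns}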

How it works

Based on the following arguments, it is possible to bulk insert your DataFrame into the SQL database:

from df_to_azure import df_to_azure

df_to_azure(df=df, tablename="table_name", schema="schema", method="create")
  1. df: the DataFrame you wish to export
  2. tablename: desired name of the table
  3. schema: desired SQL schema
  4. method: one of "create", "append" or "upsert"
  5. id_field: id field of the table, required if method is set to "upsert"

Important: the .csv files are uploaded to a container called dftoazure, so create this container in your storage account before using this module.
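
Putting this together, a minimal call could look like the sketch below. The DataFrame, column names and table name are made up for illustration, and the Azure settings listed under Settings are assumed to be present as environment variables.

import pandas as pd
from df_to_azure import df_to_azure

# Hypothetical example data; column names and values are illustrative only.
df = pd.DataFrame(
    {
        "customer_id": [1, 2, 3],
        "name": ["Ada", "Grace", "Linus"],
        "revenue": [1200.50, 980.00, 1530.25],
    }
)

# Create the table staging.customers and bulk insert the DataFrame via Data Factory.
df_to_azure(df=df, tablename="customers", schema="staging", method="create")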

Upsert / create or append

It is possible to upsert the SQL table with new or changed records present in the DataFrame you want to upload. Based on the id_field, the SQL table is checked for overlapping values: records whose id already exists in the SQL table are updated, and records with new ids are appended to the current SQL table.
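
As a sketch, an upsert keyed on the hypothetical customer_id column from the example above could look like this:

import pandas as pd
from df_to_azure import df_to_azure

# Hypothetical new and changed records; customer_id is the key column of the existing table.
updates = pd.DataFrame(
    {
        "customer_id": [2, 4],
        "name": ["Grace", "Margaret"],
        "revenue": [1010.00, 700.00],
    }
)

# The row with customer_id 2 updates the existing record; the row with customer_id 4 is appended.
df_to_azure(df=updates, tablename="customers", schema="staging", method="upsert", id_field="customer_id")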

Settings

To use this module, you need to add the Azure subscription settings and Azure Data Factory settings to your environment variables. We recommend working with .env files (or, even better, loading them automatically from Azure Key Vault) and loading them at runtime. This is optional: the values can be set as system environment variables as well. Use the template below when using .env.
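
For example, loading such a file at runtime could be done with the python-dotenv package. This package is an assumption here, not a requirement of this module; any mechanism that puts the values into the process environment works, provided the .env file uses the usual KEY=value syntax.

import os
from dotenv import load_dotenv  # assumes python-dotenv is installed

# Read KEY=value pairs from a local .env file into the process environment.
load_dotenv()

# df_to_azure reads its settings from the environment, e.g.:
print(os.environ.get("SQL_SERVER"))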

# --- ADF SETTINGS ---

# data factory settings
rg_name : ""
rg_location: "westeurope"
df_name : ""

# blob settings
ls_blob_account_name : ""
ls_blob_container_name : ""
ls_blob_account_key : ""

# SQL settings
SQL_SERVER: ""
SQL_DB: ""
SQL_USER: ""
SQL_PW: ""

# --- AZURE SETTINGS ---
# azure credentials for connecting to azure subscription.
client_id : ""
secret : ""
tenant : ""
subscription_id : ""

Parquet

Since version 0.6.0, uploading the DataFrame to Parquet is supported. Simply add the argument parquet=True to upload the DataFrame to the Azure storage container named parquet. The arguments tablename and schema are used to create the folder structure. If parquet is set to True, the dataset will not be uploaded to a SQL database.
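
A sketch, reusing the hypothetical DataFrame and names from the earlier example:

from df_to_azure import df_to_azure

# Upload the DataFrame as Parquet to the storage container named "parquet"
# instead of a SQL table; tablename and schema determine the folder structure.
df_to_azure(df=df, tablename="customers", schema="staging", parquet=True)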

Maintained by Zypp

Support:

For support on using this module, you can reach us at hello@zypp.io


Testing

To run the test suite, use:

pytest df_to_azure

To run pytest for a single test:

pytest df_to_azure/tests/test_df_to_azure.py::test_duplicate_keys_upsert



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

df_to_azure-1.0.1.tar.gz (18.3 kB)

Uploaded Source

Built Distribution

df_to_azure-1.0.1-py3-none-any.whl (18.5 kB)

Uploaded Python 3

File details

Details for the file df_to_azure-1.0.1.tar.gz.

File metadata

  • Download URL: df_to_azure-1.0.1.tar.gz
  • Upload date:
  • Size: 18.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.5

File hashes

Hashes for df_to_azure-1.0.1.tar.gz
Algorithm Hash digest
SHA256 5acf9da3c740054d166818c990754d8f78ecd709e424775bc5b7829ffc24b308
MD5 d3636b4af7e0d18ee07215856f3afbb7
BLAKE2b-256 922c20b2501fa2aa31ae5ecce56a2a30d1aa44037b6a01ae0ac9719d31f21b35

See more details on using hashes here.

File details

Details for the file df_to_azure-1.0.1-py3-none-any.whl.

File metadata

  • Download URL: df_to_azure-1.0.1-py3-none-any.whl
  • Upload date:
  • Size: 18.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.5

File hashes

Hashes for df_to_azure-1.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 af0dfc1fd5cca2125419e23e480fc8d1f9f510882f2ebdb4b71e80533457694a
MD5 1689b9fb868f53e8af92e15a390a2b1f
BLAKE2b-256 f8a080c1bd55cf61dac66f52824b7f9aaec2456fff17648acb1973f9ce6dfa71

See more details on using hashes here.
