Load data from an on-prem SQL database to Azure Storage as JSON or CSV.

Project description

SQL2Azure

SQL2Azure is a Python package that simplifies loading data from an on-premises SQL Server database into Azure Blob Storage. With it, users can execute SQL queries to retrieve data and upload the results in one of two formats: JSON or CSV. Whether you're moving data for backup, reporting, or integration, SQL2Azure makes it easy to transfer your database information to Azure seamlessly.

Description

SQL2Azure transfers data directly from an on-premises SQL Server database to an Azure Blob Storage container. The package provides two main functions for uploading data:

  • JSON Format: The data retrieved from the SQL query is converted into JSON format and uploaded to Azure Blob Storage.
  • CSV Format: The data retrieved from the SQL query is converted into CSV format and uploaded to Azure Blob Storage.

The tool connects to your SQL Server using a connection string, executes a SQL query, fetches the data, and then uploads it to a specified Azure Blob Storage container in the format of your choice.
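Independent of the package's internals, that fetch-and-upload flow can be sketched as follows. The `rows_to_dicts` helper and the commented-out connection code are illustrative, not the package's actual implementation:

```python
import json

def rows_to_dicts(columns, rows):
    # Pair each fetched row with its column names so it can be serialized.
    return [dict(zip(columns, row)) for row in rows]

# Hand-made rows standing in for cursor.fetchall() output:
columns = ["UserID", "FirstName"]
rows = [(1, "Ada"), (2, "Alan")]
payload = json.dumps(rows_to_dicts(columns, rows), default=str)

# The full flow would look roughly like this (requires pyodbc and
# azure-storage-blob, so it is shown as a comment):
#
#   import pyodbc
#   from azure.storage.blob import BlobClient
#
#   with pyodbc.connect(connection_string) as conn:
#       cursor = conn.cursor()
#       cursor.execute(sql_query)
#       columns = [col[0] for col in cursor.description]
#       rows = cursor.fetchall()
#
#   blob = BlobClient(account_url=azure_blob_url, container_name=container_name,
#                     blob_name=folder_path + file_name, credential=sas_token)
#   blob.upload_blob(payload, overwrite=True)
```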

Key Features:

  • SQL Server to Azure Blob: Automates the process of transferring data between SQL Server and Azure Blob Storage.
  • Two File Formats: Uploads data as either JSON or CSV files.
  • Customizable Parameters: Users can specify the SQL query, Azure container name, blob name, and connection strings.
  • Error Handling: Returns clear status messages for success or failure, including the number of rows uploaded or any error encountered.

Prerequisites and Requirements

Before you start using SQL2Azure, make sure you have the following prerequisites:

  1. Python 3.6+ - SQL2Azure is compatible with Python 3.6 and above.
  2. SQL Server Database - You need access to an on-premises SQL Server and a valid connection string for it.
  3. Azure Subscription - You must have an Azure account and a Blob Storage account.
  4. Required Python Packages:
    • pyodbc: For connecting to SQL Server and executing queries.
    • azure-storage-blob: For interacting with Azure Blob Storage.

These dependencies are listed in the requirements.txt file.

To install them, use the following command:

pip install -r requirements.txt
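For reference, a minimal requirements.txt covering the two packages above might look like this (the version pins are illustrative, not the project's actual constraints):

```
pyodbc>=4.0
azure-storage-blob>=12.0
```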

Installation

You can install SQL2Azure by using pip or by cloning this repository.

Option 1: Install from PyPI

pip install SQL2Azure

Option 2: Install from GitHub

Clone the repository and install it using the following commands:

git clone https://github.com/yourusername/SQL2Azure.git
cd SQL2Azure
pip install .

Usage

Once the package is installed, you can start using it to load data from your SQL Server to Azure Blob Storage. The package has two main functions, one for loading data as JSON and another for CSV. Here’s how you can use them:

Loading Data to JSON

To load data from SQL Server to Azure Blob Storage in JSON format, use the load_to_json function. It accepts the following parameters:

  • sql_query: The SQL query you want to run on the database.
  • connection_string: The connection string to the SQL Server.
  • container_name: The name of the Azure Blob Storage container where the file will be uploaded.
  • folder_path: The folder path within the container where the file will be saved.
  • file_name: The name of the blob (file) in the Azure container.
  • azure_blob_url: The base URL for your Azure Blob Storage account.
  • sas_token: The SAS token for accessing Azure Blob Storage.

Example:

from sql2azure.json_loader import load_to_json

# SQL Query
query = "SELECT [UserID],[FirstName],[LastName],[Email],[Age] FROM [SampleDB].[dbo].[Users]"

# SQL Server connection string
sql_conn = r"Driver={SQL Server};Server=<SQL_SERVER_HOST>;Database=<SQL_SERVER_DB>;Trusted_Connection=yes;"

# Azure Blob Storage parameters
container = "<AZURE_CONTAINER_NAME>"
folder_json = "/data/json/"
json_blob = "user.json"
azure_blob_url = "<AZURE_BLOB_URL>"
sas_token = "<SAS_TOKEN>"

# Load to JSON
json_status = load_to_json(query, sql_conn, container, folder_json, json_blob, azure_blob_url, sas_token)
print("JSON Upload Status:", json_status)
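One detail worth knowing when exporting SQL data as JSON: DATE/DATETIME and DECIMAL column values are not JSON-serializable by default. A common workaround, whether or not the package uses it internally, is `json.dumps(..., default=str)`:

```python
import json
from datetime import date
from decimal import Decimal

# Hand-made row values standing in for SQL Server DATE and DECIMAL columns;
# `default=str` converts any non-serializable value to its string form.
rows = [{"UserID": 1, "FirstName": "Ada", "Joined": date(2024, 1, 15),
         "Balance": Decimal("19.99")}]
payload = json.dumps(rows, default=str)
```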

Loading Data to CSV

To load data to CSV format, use the load_to_csv function. It has the same parameters as load_to_json, but the data will be saved in CSV format instead of JSON.

from sql2azure.csv_loader import load_to_csv

# SQL Query
query = "SELECT [UserID],[FirstName],[LastName],[Email],[Age] FROM [SampleDB].[dbo].[Users]"

# SQL Server connection string
sql_conn = r"Driver={SQL Server};Server=<SQL_SERVER_HOST>;Database=<SQL_SERVER_DB>;Trusted_Connection=yes;"

# Azure Blob Storage parameters
container = "<AZURE_CONTAINER_NAME>"
folder_csv = "/data/csv/"
csv_blob = "user.csv"
azure_blob_url = "<AZURE_BLOB_URL>"
sas_token = "<SAS_TOKEN>"

# Load to CSV
csv_status = load_to_csv(query, sql_conn, container, folder_csv, csv_blob, azure_blob_url, sas_token)
print("CSV Upload Status:", csv_status)
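For comparison, and independent of the package's internals, a CSV payload can be built in memory with the standard library before any upload (the column names and rows here are hand-made examples):

```python
import csv
import io

columns = ["UserID", "FirstName", "LastName", "Email", "Age"]
rows = [(1, "Ada", "Lovelace", "ada@example.com", 36)]

# Write the header plus rows to an in-memory buffer instead of a local file,
# so the result can be uploaded directly as a blob.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(columns)
writer.writerows(rows)
csv_payload = buffer.getvalue()
```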

Sample Demo

Here’s a simple demo of how to use both functions:

from sql2azure import json_loader
from sql2azure import csv_loader

# SQL Query
query = "SELECT [UserID],[FirstName],[LastName],[Email],[Age] FROM [SampleDB].[dbo].[Users]"

# SQL Server connection string
sql_conn = r"Driver={SQL Server};Server=<SQL_SERVER_HOST>;Database=<SQL_SERVER_DB>;Trusted_Connection=yes;"

# Azure Blob Storage parameters
container = "<AZURE_CONTAINER_NAME>"
folder_json = "/data/json/"
folder_csv = "/data/csv/"
json_blob = "user.json"
csv_blob = "user.csv"
azure_blob_url = "<AZURE_BLOB_URL>"
sas_token = "<SAS_TOKEN>"

# Load to JSON
json_status = json_loader.load_to_json(query, sql_conn, container, folder_json, json_blob, azure_blob_url, sas_token)
print("JSON Upload Status:", json_status)

# Load to CSV
csv_status = csv_loader.load_to_csv(query, sql_conn, container, folder_csv, csv_blob, azure_blob_url, sas_token)
print("CSV Upload Status:", csv_status)

Both functions return a status describing the result of the operation. On success, it contains:

  • status: "success"
  • rows_uploaded: The number of rows uploaded to the Azure container.

If an error occurs, the status will be "error", and an error message will be included.
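Assuming the status is returned as a dictionary with the keys described above (an assumption about the return type, as is the `error` key name), a caller might branch on it like this; the dictionary here is built by hand for illustration:

```python
# A hand-built example of the success shape described above.
status = {"status": "success", "rows_uploaded": 42}

if status["status"] == "success":
    message = f"Uploaded {status['rows_uploaded']} rows"
else:
    # The "error" key is an assumed name for the reported error message.
    message = f"Upload failed: {status.get('error', 'unknown error')}"
```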

Copyright

© 2024 Ajith D. All rights reserved.

This project is licensed under the MIT License - see the LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

sql2azure-1.1.5.tar.gz (5.2 kB)

Uploaded Source

Built Distribution

SQL2Azure-1.1.5-py3-none-any.whl (6.2 kB)

Uploaded Python 3

File details

Details for the file sql2azure-1.1.5.tar.gz.

File metadata

  • Download URL: sql2azure-1.1.5.tar.gz
  • Upload date:
  • Size: 5.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.3

File hashes

Hashes for sql2azure-1.1.5.tar.gz:

  • SHA256: d3dd648d0e4f02a26bfdfb726cf372ac19e447a9a9f8219d9d99bb089f308268
  • MD5: 9deefa17678df586c78d076f1f63ab7a
  • BLAKE2b-256: d1b0f6683b79d74326dba0e0dc0c049cd74771a0f9ac5daa637faa63ab43e8c8


File details

Details for the file SQL2Azure-1.1.5-py3-none-any.whl.

File metadata

  • Download URL: SQL2Azure-1.1.5-py3-none-any.whl
  • Upload date:
  • Size: 6.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.3

File hashes

Hashes for SQL2Azure-1.1.5-py3-none-any.whl:

  • SHA256: 94eb6d234168edf68f080d3cb1ed7ba67d61dbe5c4c5cdc7c3f6aa0101a5baff
  • MD5: b2bd17c999769b169b518d23f8fb0214
  • BLAKE2b-256: b2654aab6ee4c19610ae3d6558e473a9d020841b63e09ffffe343a15dcbaa6c1
