
Fix for the Pandas to_sql() dataframe method that fails when we try pushing more than 255 values.

Project description

Pandas to_sql() method fix for Databricks


Fix for the Pandas to_sql() dataframe method that fails when we try pushing more than 255 values to a Databricks table.

Table of Contents

  • Installation
  • Execution / Usage
  • Background and core issue
  • What this package does differently
  • Contributing
  • License

Installation

python -m pip install pandas-tosql-dbx-fix

Execution / Usage

Once the package is installed, you can use the following code to get started with the pandas-tosql-dbx-fix library:

import pandas_tosql_dbx_fix as pdx

# Use your own values for the following variables
server = "YOUR_DATABRICKS_SERVER_HOSTNAME"
hpath = "YOUR_DATABRICKS_HTTP_PATH"
catalog = "YOUR_CATALOG_NAME"
schema = "YOUR_SCHEMA_NAME"
# token = "YOUR_DATABRICKS_TOKEN"  # if needed

table_name = "to_sql_table"
test_table_rows = 1000

df = pdx.create_test_dataframe(test_table_rows)

# You can also connect to Databricks using a token with the
# pdx.connect_to_dbx_pat() function, or by creating your own SQLAlchemy engine.
db_con = pdx.connect_to_dbx_oauth(server, hpath, catalog, schema)

# The function takes the same arguments as the to_sql() method in Pandas
pdx.to_sql_dbx(
    df,
    db_con,
    f"{catalog}.{schema}.{table_name}",
    if_exists="append",
)

Background and core issue

The Pandas to_sql() function worked well with Databricks until November 2023, when version 3.0.0 of the Databricks SQL Connector for Python was released. This release introduced Native Parameters, which replace each literal value in a SQL query with a named parameter. Here is an example of the same to_sql() command that writes one row into a table, showing what the query looks like when it is sent to Databricks before and after native parameters:

Before 3.0.0 and native parameters:

INSERT INTO table (col_1, col_2) VALUES (100, 250)

After 3.0.0 and native parameters:

INSERT INTO table (col_1, col_2) VALUES (:value1, :value2)


The issue with the Native Parameters change is that a single query execution is limited to 255 parameters, which ultimately means that the Pandas to_sql() function fails when trying to insert more than 255 values into a Databricks table. Not 255 rows, but 255 values: a 10-column dataframe, for example, can insert at most 25 rows per query. This practically renders the to_sql() function unusable.
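To see how quickly the 255-parameter cap bites, here is a quick back-of-the-envelope sketch (the limit constant reflects the connector behavior described above; the script itself is only illustrative):

```python
PARAM_LIMIT = 255  # native-parameter cap per query execution

# Each cell in the dataframe consumes one parameter, so the row budget
# shrinks as the column count grows.
for n_cols in (2, 10, 50):
    max_rows = PARAM_LIMIT // n_cols
    print(f"{n_cols} columns -> at most {max_rows} rows per INSERT")
# 2 columns -> at most 127 rows per INSERT
# 10 columns -> at most 25 rows per INSERT
# 50 columns -> at most 5 rows per INSERT
```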

What this package does differently

The pandas-tosql-dbx-fix package isolates the Pandas code responsible for building and executing the SQL queries when to_sql() is called. The main difference is that instead of building the SQL query and sending it to Databricks as-is, the package first compiles the query using the Databricks dialect, and then sends it to a Databricks SQL warehouse. Compiling the query replaces the parameters (e.g., :value1, :value2) with their literal values. This removes parameters from the query entirely and bypasses the 255-parameter limit. You can find this logic in the package's source code.

I also changed the way the source dataframe gets broken up into chunks, since I started running into issues once a single SQL query tried to insert more than 950,000 values at once. I added logic that limits each SQL query to 900,000 values, so each chunk now satisfies n_cols * n_rows <= 900,000.
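The chunking rule above can be sketched as follows; the function name and constant are illustrative, not the package's actual API:

```python
import pandas as pd

MAX_VALUES_PER_QUERY = 900_000  # cap on n_cols * n_rows per INSERT

def iter_chunks(df: pd.DataFrame, max_values: int = MAX_VALUES_PER_QUERY):
    """Yield row slices of df so each slice holds at most max_values cells."""
    rows_per_chunk = max(1, max_values // len(df.columns))
    for start in range(0, len(df), rows_per_chunk):
        yield df.iloc[start:start + rows_per_chunk]

# 3 columns and a cap of 12 values -> 4 rows per chunk
df = pd.DataFrame({f"c{i}": range(10) for i in range(3)})
chunks = list(iter_chunks(df, max_values=12))
print([len(c) for c in chunks])  # [4, 4, 2]
```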

Contributing

To contribute to the development of pandas-tosql-dbx-fix, follow the steps below:

  1. Fork pandas-tosql-dbx-fix from https://github.com/beefupinetree/pandas-tosql-dbx-fix
  2. Create your feature branch (git checkout -b feature-new)
  3. Make your changes
  4. Commit your changes (git commit -am 'Add some new feature')
  5. Push to the branch (git push origin feature-new)
  6. Create a new pull request

License

pandas-tosql-dbx-fix is distributed under the MIT license. See LICENSE for more details.

Project details


Download files

Download the file for your platform.

Source Distribution

pandas_tosql_dbx_fix-1.0.1.tar.gz (5.9 kB)

Uploaded Source

Built Distribution


pandas_tosql_dbx_fix-1.0.1-py3-none-any.whl (7.6 kB)

Uploaded Python 3

File details

Details for the file pandas_tosql_dbx_fix-1.0.1.tar.gz.

File metadata

  • Download URL: pandas_tosql_dbx_fix-1.0.1.tar.gz
  • Size: 5.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pandas_tosql_dbx_fix-1.0.1.tar.gz

  • SHA256: 6001e84cac2bf54000180aeecea4b6a34d141bef4890587d6107ec3775db049d
  • MD5: 97d4ee3e86ec05bdc8290629bb54bcab
  • BLAKE2b-256: 54ab353c0ad1b18cf0dda9f8f73909b5c93484756e06ed9207809a3698ab7aa7


Provenance

The following attestation bundles were made for pandas_tosql_dbx_fix-1.0.1.tar.gz:

Publisher: python-publish.yml on beefupinetree/pandas-tosql-dbx-fix

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file pandas_tosql_dbx_fix-1.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for pandas_tosql_dbx_fix-1.0.1-py3-none-any.whl

  • SHA256: b3c1a4fa2842987a845f72fae888258fb6e0bdccae85a5a2541cbbc5dd4cc3d6
  • MD5: 5ee607b4dfa4cb16c9d28667951e76f1
  • BLAKE2b-256: 223bf8816f053697aaee5d6cd419e307ef49c0594cd60639146e8c76da03a808


Provenance

The following attestation bundles were made for pandas_tosql_dbx_fix-1.0.1-py3-none-any.whl:

Publisher: python-publish.yml on beefupinetree/pandas-tosql-dbx-fix

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
