
Fix for the Pandas to_sql() dataframe method that fails when we try pushing more than 255 values.

Project description

Pandas to_sql() method fix for Databricks


Fix for the Pandas to_sql() dataframe method that fails when we try pushing more than 255 values to a Databricks table.

Table of Contents

  • Installation
  • Execution / Usage
  • Background and core issue
  • What this package does differently
  • Contributing
  • License

Installation

python -m pip install pandas-tosql-dbx-fix

Execution / Usage

Once the package is installed, you can use the code below to get started with the pandas-tosql-dbx-fix library:

import pandas_tosql_dbx_fix as pdx

# Use your own values for the following variables
server = "YOUR_DATABRICKS_SERVER_HOSTNAME"
hpath = "YOUR_DATABRICKS_HTTP_PATH"
catalog = "YOUR_CATALOG_NAME"
schema = "YOUR_SCHEMA_NAME"
# token = "YOUR_DATABRICKS_TOKEN"  # if needed

table_name = "to_sql_table"
test_table_rows = 1000

df = pdx.create_test_dataframe(test_table_rows)

# You can also connect to Databricks using a token with the pdx.connect_to_dbx_pat() function,
# or by creating your own SQLAlchemy engine.
db_con = pdx.connect_to_dbx_oauth(server, hpath, catalog, schema)

# The function takes the same arguments as the to_sql() method in Pandas
pdx.to_sql_dbx(
    df,
    db_con,
    f"{catalog}.{schema}.{table_name}",
    if_exists="append",
)

Background and core issue

The Pandas to_sql() method worked well with Databricks until November 2023, when release 3.0.0 of the Databricks SQL Connector for Python introduced Native Parameters, which replace each literal value in a SQL query with a named parameter. Here is the same to_sql() command writing one row into a table, and what the query looks like when it is sent to Databricks before and after native parameters:

Before 3.0.0 and native parameters:

INSERT INTO table (col_1, col_2) VALUES (100, 250)

After 3.0.0 and native parameters:

INSERT INTO table (col_1, col_2) VALUES (:value1, :value2)


The issue with the Native Parameter change is that a single query execution is limited to 255 parameters. This ultimately means that the Pandas to_sql() method fails when trying to insert more than 255 values into a Databricks table. Not 255 rows, but 255 values, so a wide dataframe hits the limit after only a handful of rows. This practically renders the to_sql() method unusable.
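To see how quickly the limit is reached: each row contributes one parameter per column, so the maximum number of rows per query is floor(255 / n_cols). A quick illustration (the function name is mine, not part of the package):

```python
# Illustration: how many rows fit under a 255-parameter-per-query limit.
PARAM_LIMIT = 255

def max_rows_per_query(n_cols: int) -> int:
    """Each row contributes n_cols parameters, so rows are capped at floor(255 / n_cols)."""
    return PARAM_LIMIT // n_cols

# A modest 10-column dataframe can only insert 25 rows per query.
print(max_rows_per_query(10))   # 25
# A 300-column dataframe cannot insert even a single row.
print(max_rows_per_query(300))  # 0
```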

What this package does differently

The pandas-tosql-dbx-fix package isolates the Pandas code responsible for building and executing the SQL queries when to_sql() is called. The main difference is that instead of sending the parameterized SQL query to Databricks, the package first compiles the query using the Databricks dialect and then sends the result to a Databricks SQL warehouse. Compiling the query replaces the parameters (e.g. :value1, :value2) with their literal values. This removes parameters from the query entirely and bypasses the 255-parameter limit. You can find this logic in the package source.
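The inlining step can be sketched in plain Python. The package itself compiles the statement through SQLAlchemy's Databricks dialect; the helper below is only a hypothetical illustration of what "replacing named parameters with literals" means, and skips the escaping of dates, bytes, and other types a real dialect handles:

```python
import re

def inline_parameters(sql: str, params: dict) -> str:
    """Replace :name-style parameters with SQL literals (simplified sketch)."""
    def to_literal(value):
        if value is None:
            return "NULL"
        if isinstance(value, str):
            # Escape embedded single quotes by doubling them
            return "'" + value.replace("'", "''") + "'"
        return str(value)

    # Substitute each :name occurrence with the corresponding literal
    return re.sub(r":(\w+)", lambda m: to_literal(params[m.group(1)]), sql)

query = "INSERT INTO table (col_1, col_2) VALUES (:value1, :value2)"
print(inline_parameters(query, {"value1": 100, "value2": 250}))
# INSERT INTO table (col_1, col_2) VALUES (100, 250)
```

The compiled query carries zero bound parameters, so the connector's 255-parameter cap never applies.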

I also changed the way the source dataframe is broken up into chunks, since I started running into issues once a single SQL query tried to insert more than 950,000 values at once. The package now limits each SQL query to 900,000 values, so each chunk satisfies n_cols * n_rows <= 900,000.
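That chunking rule can be sketched as follows. The names and the 900,000 constant mirror the description above, but the functions are illustrative, not the package's actual API:

```python
VALUE_LIMIT = 900_000  # cap on values (n_cols * n_rows) per single INSERT query

def chunk_row_count(n_cols: int, limit: int = VALUE_LIMIT) -> int:
    """Rows per chunk so that n_cols * n_rows stays at or below the limit."""
    return max(1, limit // n_cols)

def iter_chunks(n_rows_total: int, n_cols: int):
    """Yield (start, stop) row slices, one per INSERT query."""
    step = chunk_row_count(n_cols)
    for start in range(0, n_rows_total, step):
        yield start, min(start + step, n_rows_total)

# e.g. a 1,000,000-row, 12-column dataframe: 75,000 rows per chunk, 14 queries
print(chunk_row_count(12))                    # 75000
print(len(list(iter_chunks(1_000_000, 12))))  # 14
```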

Contributing

To contribute to the development of pandas-tosql-dbx-fix, follow the steps below:

  1. Fork pandas-tosql-dbx-fix from https://github.com/beefupinetree/pandas-tosql-dbx-fix
  2. Create your feature branch (git checkout -b feature-new)
  3. Make your changes
  4. Commit your changes (git commit -am 'Add some new feature')
  5. Push to the branch (git push origin feature-new)
  6. Create a new pull request

License

pandas-tosql-dbx-fix is distributed under the MIT license. See LICENSE for more details.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pandas_tosql_dbx_fix-1.0.2.tar.gz (5.9 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

pandas_tosql_dbx_fix-1.0.2-py3-none-any.whl (7.6 kB view details)

Uploaded Python 3

File details

Details for the file pandas_tosql_dbx_fix-1.0.2.tar.gz.

File metadata

  • Download URL: pandas_tosql_dbx_fix-1.0.2.tar.gz
  • Upload date:
  • Size: 5.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pandas_tosql_dbx_fix-1.0.2.tar.gz
  • SHA256: 6f760f02e7c5078b4a055e84b3cabec94617527638fcb8059280a7481d414e02
  • MD5: 697cc3a01f7a1ba00ecb5d7b48f6f867
  • BLAKE2b-256: 6231b3b0345250617fcbf1c5063bedc8362f428bfe964dc1ebf82c065b70cc05


Provenance

The following attestation bundles were made for pandas_tosql_dbx_fix-1.0.2.tar.gz:

Publisher: python-publish.yml on beefupinetree/pandas-tosql-dbx-fix

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file pandas_tosql_dbx_fix-1.0.2-py3-none-any.whl.

File metadata

File hashes

Hashes for pandas_tosql_dbx_fix-1.0.2-py3-none-any.whl
  • SHA256: 6b59ad1cb2d111bbe7d57125fd8370db9de8fa360e28d8af56bb5521d32e1711
  • MD5: 0ce92ef22b2d56e94950990de54589a7
  • BLAKE2b-256: 81ae1b5ca0064baf6504e0abf2cdb2f48c0e4c559819076a69422a1fa8d82888


Provenance

The following attestation bundles were made for pandas_tosql_dbx_fix-1.0.2-py3-none-any.whl:

Publisher: python-publish.yml on beefupinetree/pandas-tosql-dbx-fix

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
