Lightweight stateless SQL execution for Databricks with minimal dependencies

Project description

Databricks Labs LSQL

Lightweight execution of SQL queries through the Databricks SDK for Python.

Installation

pip install databricks-labs-lsql

Executing SQL

The primary use case of the fetch_all and execute methods is executing SQL queries in a stateless manner directly through the Databricks SDK for Python, without requiring any external dependencies. Results are fetched in JSON format through presigned external links. This is perfect for serverless applications like AWS Lambda, Azure Functions, or any other containerized short-lived applications, where container startup is faster with a smaller dependency set.

Applications that need a more traditional SQL Python API with cursors, efficient transfer of hundreds of megabytes or gigabytes of data serialized in Apache Arrow format, and low result-fetching latency should use the stateful Databricks SQL Connector for Python.

The constructor and most of the methods accept common parameters, described in the Parameters section below.

from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.core import StatementExecutionExt

w = WorkspaceClient()
see = StatementExecutionExt(w)

# The instance is callable: calling it executes the statement and iterates over result rows
for pickup_zip, dropoff_zip in see('SELECT pickup_zip, dropoff_zip FROM samples.nyctaxi.trips LIMIT 10'):
    print(f'pickup_zip={pickup_zip}, dropoff_zip={dropoff_zip}')

Iterating over results

Method fetch_all returns an iterator of objects that resemble the pyspark.sql.Row API, though full compatibility is not a goal of this implementation. The method accepts common parameters.

import os
from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.core import StatementExecutionExt

results = []
w = WorkspaceClient()
see = StatementExecutionExt(w, warehouse_id=os.environ.get("TEST_DEFAULT_WAREHOUSE_ID"))
for pickup_zip, dropoff_zip in see.fetch_all("SELECT pickup_zip, dropoff_zip FROM samples.nyctaxi.trips LIMIT 10"):
    results.append((pickup_zip, dropoff_zip))

Executing without iterating

When you only need to execute a statement and don't need to iterate over its results, use the execute method, which also accepts common parameters.

from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.core import StatementExecutionExt

w = WorkspaceClient()
see = StatementExecutionExt(w)
see.execute("CREATE TABLE foo AS SELECT * FROM range(10)")

Fetching one record

Method fetch_one returns a single record from the result set. If the result set is empty, it returns None. If the result set contains more than one record, it raises ValueError.

from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.core import StatementExecutionExt

w = WorkspaceClient()
see = StatementExecutionExt(w)
pickup_zip, dropoff_zip = see.fetch_one("SELECT pickup_zip, dropoff_zip FROM samples.nyctaxi.trips LIMIT 1")
print(f'pickup_zip={pickup_zip}, dropoff_zip={dropoff_zip}')

Fetching one value

Method fetch_value returns a single value from the result set. If the result set is empty, it returns None.

from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.core import StatementExecutionExt

w = WorkspaceClient()
see = StatementExecutionExt(w)
count = see.fetch_value("SELECT COUNT(*) FROM samples.nyctaxi.trips")
print(f'count={count}')

Parameters

  • warehouse_id (str, optional) - Warehouse upon which to execute a statement. If not given, it will use the warehouse specified in the constructor or the first available warehouse that is not in the DELETED or DELETING state.
  • byte_limit (int, optional) - Applies the given byte limit to the statement's result size. Byte counts are based on internal representations and may not match measurable sizes in the JSON format.
  • catalog (str, optional) - Sets default catalog for statement execution, similar to USE CATALOG in SQL. If not given, it will use the default catalog or the catalog specified in the constructor.
  • schema (str, optional) - Sets default schema for statement execution, similar to USE SCHEMA in SQL. If not given, it will use the default schema or the schema specified in the constructor.
  • timeout (timedelta, optional) - Timeout after which the query is cancelled. If timeout is less than 50 seconds, it is handled on the server side. If the timeout is greater than 50 seconds, Databricks SDK for Python cancels the statement execution and throws TimeoutError. If not given, it will use the timeout specified in the constructor.
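
For illustration, here is a minimal sketch that combines several of these parameters, assuming they are accepted as keyword arguments both by the constructor and per call, as described above; the warehouse ID environment variable and the five-minute timeout are illustrative values:

import os
from datetime import timedelta

from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.core import StatementExecutionExt

w = WorkspaceClient()

# Defaults set in the constructor apply to every statement
see = StatementExecutionExt(
    w,
    warehouse_id=os.environ.get("TEST_DEFAULT_WAREHOUSE_ID"),  # illustrative
    catalog="samples",
    schema="nyctaxi",
)

# Per-call values override the constructor defaults
count = see.fetch_value("SELECT COUNT(*) FROM trips", timeout=timedelta(minutes=5))
print(f"count={count}")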

SQL backend abstraction

This framework allows mapping between SQL and the Python runtime with strongly-typed dataclasses. It derives the schema creation logic purely from the Python dataclass definitions.

SqlBackend defines the methods that any SQL backend used by the library must implement. These methods execute SQL statements, fetch their results, and save data to tables. Available backends are:

  • StatementExecutionBackend used for reading/writing records purely through REST API
  • DatabricksConnectBackend used for reading/writing records through Databricks Connect
  • RuntimeBackend used for execution within Databricks Runtime
  • MockBackend used for unit testing

Common methods are:

  • execute(str) - Execute a SQL statement and wait till it finishes
  • fetch(str) - Execute a SQL statement and iterate over all results
  • save_table(full_name: str, rows: Sequence[DataclassInstance], klass: Dataclass) - Save a sequence of dataclass instances to a table
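
As a minimal sketch of this round trip, assuming the backends are importable from databricks.labs.lsql.backends and that StatementExecutionBackend is constructed from a workspace client and a warehouse ID; the PickupZone dataclass and the main.default.pickup_zones table are hypothetical:

import os
from dataclasses import dataclass

from databricks.sdk import WorkspaceClient
from databricks.labs.lsql.backends import StatementExecutionBackend

@dataclass
class PickupZone:  # hypothetical dataclass describing the table schema
    zip_code: str
    trips: int

w = WorkspaceClient()
# Assumed constructor: a workspace client plus the warehouse to run on
backend = StatementExecutionBackend(w, os.environ.get("TEST_DEFAULT_WAREHOUSE_ID"))

# save_table derives the DDL from the dataclass fields
backend.save_table("main.default.pickup_zones", [PickupZone("10282", 5)], PickupZone)

# fetch iterates over the result rows
for row in backend.fetch("SELECT * FROM main.default.pickup_zones"):
    print(row)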

Project Support

Please note that all projects in the /databrickslabs GitHub account are provided for your exploration only and are not formally supported by Databricks with Service Level Agreements (SLAs). They are provided AS-IS, and we do not make any guarantees of any kind. Please do not submit a support ticket relating to any issues arising from the use of these projects.

Any issues discovered through the use of this project should be filed as GitHub Issues on the repository. They will be reviewed as time permits, but there are no formal SLAs for support.

Project details

Download files

Source distribution: databricks_labs_lsql-0.17.0.tar.gz (54.3 kB)

  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7
  • SHA256: 227e62a7a03206b0a38c2f431537d804264a1737cbb82569e06c305fb536dd15
  • MD5: b67b522ac5c950738a54a8ff6633190a
  • BLAKE2b-256: 71147e5a7a5e786c52867169d5fe963b0da2679c66efb7fb576b260e5eb682df

Built distribution: databricks_labs_lsql-0.17.0-py3-none-any.whl (49.3 kB)

  • Tags: Python 3
  • SHA256: 829c41388631ef4883dc11d5b9dcdc5010b057a274a42af6e9573eb84d5d5e37
  • MD5: fdbc77c9af4d0638bd8586b12c373c0c
  • BLAKE2b-256: 2281b054caed099037ec50c167041ea7247fa71207ea640d43aadd5fe409f4db

Provenance

Attestation bundles for both files were published by release.yml on databrickslabs/lsql. Values shown reflect the state when the release was signed and may no longer be current.
