Spark SQL framework for Databricks jobs
Project description
sparqlin
sparqlin is a Spark SQL framework designed to simplify job creation and management in Databricks environments.
It integrates with Spark SQL and PySpark for a streamlined development experience.
The framework was created specifically to empower data analysts who may not have deep software development skills. It offers a straightforward path to adopting a standard software development life cycle, letting analysts focus on working with data rather than mastering complex programming paradigms. By relying on familiar tools such as SQL scripts and YAML files, the framework simplifies data configuration, transformation, and testing.
This enables teams to:
- Bridge the gap between data analysis and software engineering.
- Enhance collaboration and maintain clear development processes.
- Encourage reusable and maintainable data workflows, all while adhering to best practices.
Features
- Simplifies the creation of Spark SQL jobs for Databricks.
- Flexible integration with PySpark and Spark SQL.
- YAML-based configuration for job definitions.
- Built-in support for testing through pytest.
- Integrated with tools like GitPython and system monitoring via psutil.
- Works with Databricks Bundles.
Installation
You can install sparqlin directly from PyPI using pip.
pip install sparqlin
Requirements
sparqlin requires Python 3.11 or higher (Databricks Runtime 15.4 LTS). Ensure you have the following dependencies installed:
- pyspark>=3.5.0
- pytest
- pyyaml
- psutil
- gitpython
Getting Started
Example Usage of ETL framework
To use sparqlin for creating and running Spark SQL jobs in Databricks, follow these steps:
- Initialize a Project: Start by creating a structure for your project. For instance, define YAML configuration files for your Databricks jobs.
- Create Spark SQL transformations: You can create a directory and place .sql files with queries that will run on Databricks.
- Load and Run Jobs: Use the provided framework functionality to parse configurations and execute jobs efficiently in Databricks.
Here is a typical layout of an analytical project:
databricks_default_project/
|-- README.md
|-- sql/
| |-- query1.sql
| |-- query2.sql
| |-- ...
|-- tests/
| |-- __init__.py
| |-- test_query1.py
| |-- test_query2.py
| |-- ...
|-- databricks.yml
Below is an example job and task configuration, databricks_default_project.job.yml:
# The example job configuration for databricks_default_project.
resources:
  jobs:
    databricks_default_python_job:
      trigger:
        # Run this job every day, exactly one day from the last run;
        # see https://docs.databricks.com/api/workspace/jobs/create#trigger
        periodic:
          interval: 1
          unit: DAYS

      email_notifications:
        on_failure:
          - some_email@some_domain.com

      job_clusters:
        - job_cluster_key: job_cluster
          new_cluster:
            spark_version: 15.4.x-scala2.12
            node_type_id: i3.xlarge
            autoscale:
              min_workers: 1
              max_workers: 1

      tasks:
        - task_key: query1_sparqlin
          # existing_cluster_id: ${var.cluster_id}  # An existing cluster can be used instead
          job_cluster_key: job_cluster
          libraries:
            - pypi:
                package: sparqlin==0.1.11  # Install your package via PyPI (or a custom repository)
            # - whl: ../dist/*.whl  # Alternatively, you can upload the sparqlin wheel into a Volume
          python_wheel_task:
            package_name: "sparqlin"  # Package name as defined in your setup.py or pyproject.toml
            entry_point: "sparqlin"   # Entry point defined in the package
            named_parameters:
              sql-query-path: "${workspace.root_path}/files/sql/query1.sql"
              table-name: "sparkdev.default.taxi_top_five"

        - task_key: query2_sparqlin
          depends_on:
            - task_key: query1_sparqlin
          # existing_cluster_id: ${var.cluster_id}  # An existing cluster can be used instead
          job_cluster_key: job_cluster
          libraries:
            - pypi:
                package: sparqlin==0.1.11  # Install your package via PyPI (or a custom repository)
            # - whl: ../dist/*.whl  # Alternatively, you can upload the sparqlin wheel into a Volume
          python_wheel_task:
            package_name: "sparqlin"  # Package name as defined in your setup.py or pyproject.toml
            entry_point: "sparqlin"   # Entry point defined in the package
            named_parameters:
              sql-query-path: "${workspace.root_path}/files/sql/query2.sql"
              table-name: "sparkdev.default.taxi_count"

        # You can also extend `tasks` with other frameworks or task types that Databricks supports
Example Spark SQL transformation (sql/query1.sql)
SELECT * FROM samples.nyctaxi.trips LIMIT 5;
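For orientation only, the sketch below shows roughly what a wheel entry point consuming these named parameters could look like: Databricks typically delivers named_parameters to a python_wheel_task as command-line arguments. This is an illustration under that assumption, not sparqlin's actual entry point; the argument names simply mirror the named_parameters in the job YAML above.
import argparse

from pyspark.sql import SparkSession


def main() -> None:
    # Hypothetical sketch: named_parameters arrive as --key=value command-line arguments.
    parser = argparse.ArgumentParser(description="Run a SQL file and persist the result to a table")
    parser.add_argument("--sql-query-path", required=True, help="Path to the .sql file to execute")
    parser.add_argument("--table-name", required=True, help="Fully qualified target table")
    args = parser.parse_args()

    spark = SparkSession.builder.getOrCreate()

    # Read the SQL text and execute it
    with open(args.sql_query_path) as f:
        query = f.read()
    result_df = spark.sql(query)

    # Persist the result to the target table
    result_df.write.mode("overwrite").saveAsTable(args.table_name)


if __name__ == "__main__":
    main()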
Example Usage of Testing framework
Test Parameterized Dataset Paths
This example tests loading datasets into Spark DataFrames from YAML configuration files. It uses pytest fixtures
to dynamically provide the datasets_path.
import pytest

from sparqlin.testing.helpers import get_spark_dataframe


@pytest.mark.parametrize("datasets_path", ["tests/testing/datasets_test/datasets.yml"], indirect=True)
def test_base_test_config(spark_session, datasets_path):
    # Load test tables as DataFrames
    test_table_df = get_spark_dataframe(spark_session, datasets_path, "testdb.test_table")
    second_table_df = get_spark_dataframe(spark_session, datasets_path, "testdb.second_table")

    # Validate record counts
    assert test_table_df.count() == 3
    assert second_table_df.count() == 2
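The spark_session and datasets_path fixtures referenced above may already be provided by sparqlin's testing helpers. If you need to supply them yourself, a minimal conftest.py sketch (an assumption for illustration, not the library's own fixtures) could look like this:
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark_session():
    # Local Spark session for unit tests
    spark = SparkSession.builder.master("local[1]").appName("sparqlin-tests").getOrCreate()
    yield spark
    spark.stop()


@pytest.fixture
def datasets_path(request):
    # With indirect=True, the parametrized value is exposed as request.param
    return request.param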
Configuring and Testing Temporary Hive Tables
This example demonstrates how to use BaseTestConfig to register tables as temporary datasets in Spark and perform SQL operations.
from sparqlin.testing.base_test_config import BaseTestConfig


def test_hive_table_operations(hive_data_yaml, tmp_path_factory):
    datasets_file, tmp_path = hive_data_yaml

    # Initialize BaseTestConfig
    config = BaseTestConfig(tmp_path_factory)

    # Set datasets location
    config.DATASETS_LOCATION = datasets_file

    # Create Spark session
    spark = config.create_spark_session()

    # Register tables from YAML file
    config.register_tables(spark)

    # Verify table registration
    test_table_df = spark.sql("SELECT * FROM testdb.test_table")
    second_table_df = spark.sql("SELECT * FROM testdb.second_table")
    assert test_table_df.count() == 3

    # Perform join operation
    joined_df = test_table_df.join(second_table_df, test_table_df.id == second_table_df.id)
    joined_results = joined_df.select("name", "value").collect()
    assert len(joined_results) == 2
    assert any(row.name == "Alice" and row.value == 100 for row in joined_results)
Development Setup
To contribute or set up a local development environment for sparqlin, follow these steps:
- Clone the repository:
git clone https://gitlab.com/rokorolev/sparqlin.git
cd sparqlin
- Install dependencies:
pip install -r requirements.txt
- Run the tests:
The framework uses pytest for testing. You can run the test suite as follows:
pytest
Build the Package
- Install Build Tools:
pip install setuptools wheel
- Build the Package:
rm -rf build dist *.egg-info
python setup.py sdist bdist_wheel
Upload the Package to PyPI
- Install Twine:
pip install twine
- Generate an API token for your PyPI account.
- Upload the Package:
twine upload dist/*
License
This project is licensed under the MIT License. See the LICENSE file for details.
Contributions
Contributions are welcome! Feel free to fork the repository, create a feature branch, and submit a pull request. Please ensure proper test coverage for new functionality.
Issues
If you encounter a bug or have a feature request, please open an issue on the project's GitLab repository.
Author
Developed and maintained by Roman Korolev.
File details
Details for the file sparqlin-0.1.16.tar.gz.
File metadata
- Download URL: sparqlin-0.1.16.tar.gz
- Upload date:
- Size: 20.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7c4a8fa4afeb4ef3172ffed037121e709945414c99656869045d502cbaf49bca |
| MD5 | e41e15729039531e35849101e80bd1be |
| BLAKE2b-256 | 5e344cb357f0c70290f89c084f3a36bdace51a1a67c9d1e9353b23e5acbc995f |
File details
Details for the file sparqlin-0.1.16-py3-none-any.whl.
File metadata
- Download URL: sparqlin-0.1.16-py3-none-any.whl
- Upload date:
- Size: 19.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.11.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 69890b967a26834e907cf7269e10eabaff325ab41e9624861ea9fab6f419b169 |
| MD5 | 5d2e0c64d82f4160bbf7cecdc29cb111 |
| BLAKE2b-256 | 64894dd29efaa7c1663b3b989d28dc99d0e6dc61020f6c4a723dbde13788a0e9 |