
Open-source PySpark toolkit with connectors and CLI for Azure Storage, Databricks, Microsoft Fabric Lakehouses, REST APIs, and SPARQL endpoints.


spark-fuse


spark-fuse is an open-source toolkit for PySpark — providing utilities, connectors, and tools to fuse your data workflows across Azure Storage (ADLS Gen2), Databricks, Microsoft Fabric Lakehouses (via OneLake/Delta), JSON-centric REST APIs, and SPARQL endpoints.

Features

  • Connectors for ADLS Gen2 (abfss://), Fabric OneLake (onelake:// or abfss://...onelake.dfs.fabric.microsoft.com/...), Databricks DBFS and catalog tables, REST APIs (JSON), and SPARQL services.
  • SparkSession helpers with sensible defaults and environment detection (Databricks/Fabric/local).
  • DataFrame utilities for previews, schema checks, and ready-made date/time dimensions (daily calendar attributes and clock buckets).
  • LLM-powered semantic column normalization that batches API calls and caches responses.
  • Typer-powered CLI: list connectors, preview datasets, register Fabric tables, and submit Databricks jobs.

Installation

  • Create a virtual environment (recommended)
    • macOS/Linux:
      • python3 -m venv .venv
      • source .venv/bin/activate
      • python -m pip install --upgrade pip
    • Windows (PowerShell):
      • python -m venv .venv
      • .\.venv\Scripts\Activate.ps1
      • python -m pip install --upgrade pip
  • From source (dev): pip install -e ".[dev]"
  • From PyPI: pip install "spark-fuse>=0.3.2"
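
After installing, a quick sanity check confirms both the package import and the CLI entry point:

python -c "import spark_fuse"
spark-fuse --help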

Quickstart

  1. Create a SparkSession with helpful defaults
from spark_fuse.spark import create_session
spark = create_session(app_name="spark-fuse-quickstart")
  2. Read a Delta table from ADLS or OneLake
from spark_fuse.io.azure_adls import ADLSGen2Connector

df = ADLSGen2Connector().read(spark, "abfss://container@account.dfs.core.windows.net/path/to/delta")
df.show(5)
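
The same read also works against Fabric OneLake. Here is a minimal sketch using plain PySpark with a Fabric abfss URI (the workspace, lakehouse, and table names are placeholders, and the session must already have Delta support and Fabric credentials configured):

# Placeholder OneLake path; substitute your own workspace, lakehouse, and table.
onelake_df = spark.read.format("delta").load(
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/MyLakehouse.Lakehouse/Tables/events"
)
onelake_df.show(5)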
  3. Load paginated REST API responses
from spark_fuse.io.rest_api import RestAPIReader

reader = RestAPIReader()
config = {
    "request_type": "GET",  # switch to "POST" for endpoints that require a body
    "records_field": "results",
    "pagination": {"mode": "response", "field": "next", "max_pages": 2},
    "params": {"limit": 20},
}
pokemon = reader.read(spark, "https://pokeapi.co/api/v2/pokemon", source_config=config)
pokemon.select("name").show(5)

Need to hit a POST endpoint? Set "request_type": "POST" and attach your payload with "request_body": {...}. The body is JSON-encoded by default; set "request_body_type": "data" for form-encoded bodies. Turn on "include_response_payload": True to add a response_payload column containing the raw server JSON.
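
For example, a POST configuration might look like this (the endpoint URL and body fields are placeholders; the option names are the ones described above):

post_config = {
    "request_type": "POST",
    "request_body": {"search": "pikachu"},  # JSON-encoded by default
    # "request_body_type": "data",          # uncomment for form-encoded bodies
    "include_response_payload": True,       # adds a response_payload column
    "records_field": "results",
}
# results = reader.read(spark, "https://api.example.com/search", source_config=post_config)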

  4. Query a SPARQL endpoint
from spark_fuse.io.sparql import SPARQLReader

sparql_reader = SPARQLReader()
sparql_df = sparql_reader.read(
    spark,
    "https://query.wikidata.org/sparql",
    source_config={
        "query": """
        PREFIX wd: <http://www.wikidata.org/entity/>
        PREFIX wdt: <http://www.wikidata.org/prop/direct/>

        SELECT ?pokemon ?pokemonLabel ?pokedexNumber WHERE {
          ?pokemon wdt:P31 wd:Q3966183 .
          ?pokemon wdt:P1685 ?pokedexNumber .
        }
        LIMIT 5
        """,
        "request_type": "POST",
        "headers": {"User-Agent": "spark-fuse-demo/0.3 (contact@example.com)"},
    },
)
if sparql_df.rdd.isEmpty():
    print("Endpoint unavailable — adjust the query or check your network.")
else:
    sparql_df.show(5, truncate=False)
  5. Build date/time dimensions with rich attributes
from spark_fuse.utils.dataframe import create_date_dataframe, create_time_dataframe

date_dim = create_date_dataframe(spark, "2024-01-01", "2024-01-07")
time_dim = create_time_dataframe(spark, "00:00:00", "23:59:00", interval_seconds=60)

date_dim.select("date", "year", "week", "day_name").show()
time_dim.select("time", "hour", "minute").show(5)

Check out notebooks/demos/date_time_dimensions_demo.ipynb for an interactive walkthrough.

LLM-Powered Column Mapping

from spark_fuse.utils.transformations import map_column_with_llm

standard_values = ["Apple", "Banana", "Cherry"]
mapped_df = map_column_with_llm(
    df,
    column="fruit",
    target_values=standard_values,
    model="o4-mini",
    temperature=None,
)
mapped_df.select("fruit", "fruit_mapped").show()

Set dry_run=True to inspect how many rows already match without spending LLM tokens. Configure your OpenAI or Azure OpenAI credentials with the usual environment variables before running live mappings. Some provider models only accept their default sampling configuration; pass temperature=None to omit the parameter in that case. This helper ships with spark-fuse 0.2.0 and later.
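
For instance, a token-free dry run over the same DataFrame (the exact return value is library-specific, so this sketch only triggers the match report):

# Dry run: check how many "fruit" values already match the targets
# without calling the LLM.
preview = map_column_with_llm(
    df,
    column="fruit",
    target_values=standard_values,
    dry_run=True,
)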

CLI Usage

  • spark-fuse --help
  • spark-fuse connectors
  • spark-fuse read --path abfss://container@account.dfs.core.windows.net/path/to/delta --show 5
  • spark-fuse fabric-register --table lakehouse_table --path onelake://workspace/lakehouse/Tables/events
  • spark-fuse databricks-submit --json job.json (see the job.json sketch below)
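
For databricks-submit, here is a minimal job.json sketch, assuming the file mirrors the Databricks Jobs API 2.1 payload (the job name, notebook path, and cluster settings are placeholders):

import json

# Hypothetical job definition: assumes spark-fuse forwards job.json to the
# Databricks Jobs API, so the fields mirror the Jobs 2.1 "create" shape.
job = {
    "name": "spark-fuse-demo",
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Shared/demo"},
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "Standard_DS3_v2",
                "num_workers": 1,
            },
        }
    ],
}

with open("job.json", "w") as fh:
    json.dump(job, fh, indent=2)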

CI

  • GitHub Actions runs ruff and pytest for Python 3.9–3.11.

License

  • Apache 2.0
