
Open-source PySpark toolkit with data sources for REST APIs and SPARQL endpoints.


spark-fuse


spark-fuse is an open-source toolkit for PySpark, providing utilities, data sources, and tools to fuse your data workflows across JSON-centric REST APIs and SPARQL endpoints.

Features

  • Data sources for REST APIs (JSON payloads with pagination/retry support) and SPARQL services.
  • SparkSession helpers with sensible defaults and environment detection (databricks/fabric/local heuristics retained for legacy jobs).
  • DataFrame utilities for previews, schema checks, and ready-made date/time dimensions (daily calendar attributes and clock buckets).
  • LLM-powered semantic column normalization that batches API calls and caches responses.
  • Typer-powered CLI: list data sources and preview datasets via the REST/SPARQL helpers.

Installation

  • Create a virtual environment (recommended)
    • macOS/Linux:
      • python3 -m venv .venv
      • source .venv/bin/activate
      • python -m pip install --upgrade pip
    • Windows (PowerShell):
      • python -m venv .venv
      • .\.venv\Scripts\Activate.ps1
      • python -m pip install --upgrade pip
  • From source (dev): pip install -e ".[dev]"
  • From PyPI: pip install "spark-fuse>=0.3.2"

Quickstart

  1. Create a SparkSession with helpful defaults
from spark_fuse.spark import create_session
spark = create_session(app_name="spark-fuse-quickstart")
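
create_session returns an ordinary SparkSession, so standard PySpark calls make a quick smoke test:

print(spark.version)               # Spark runtime in use
print(spark.sparkContext.appName)  # should echo "spark-fuse-quickstart"
spark.range(3).show()              # trivial job to verify execution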
  2. Load paginated REST API responses
import json
from spark_fuse.io import (
    REST_API_CONFIG_OPTION,
    REST_API_FORMAT,
    build_rest_api_config,
    register_rest_data_source,
)

register_rest_data_source(spark)
config = build_rest_api_config(
    spark,
    "https://pokeapi.co/api/v2/pokemon",
    source_config={
        "request_type": "GET",  # switch to "POST" for endpoints that require a body
        "records_field": "results",
        "pagination": {"mode": "response", "field": "next", "max_pages": 2},
        "params": {"limit": 20},
    },
)
pokemon = (
    spark.read.format(REST_API_FORMAT)
    .option(REST_API_CONFIG_OPTION, json.dumps(config))
    .load()
)
pokemon.select("name").show(5)

Need to hit a POST endpoint? Set "request_type": "POST" and attach your payload with "request_body": {...}; bodies are JSON-encoded by default, and "request_body_type": "data" switches to form encoding. Set "include_response_payload": True to add a response_payload column containing the raw server JSON.
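
Putting that together, a hypothetical POST configuration might look like the sketch below (the endpoint URL and body are placeholders; the keys are the ones described above):

post_config = build_rest_api_config(
    spark,
    "https://example.com/api/search",  # placeholder endpoint
    source_config={
        "request_type": "POST",
        "request_body": {"query": "pikachu", "limit": 10},  # JSON-encoded by default
        # "request_body_type": "data",  # uncomment for form-encoded bodies
        "include_response_payload": True,  # adds a response_payload column
        "records_field": "results",
    },
)
results = (
    spark.read.format(REST_API_FORMAT)
    .option(REST_API_CONFIG_OPTION, json.dumps(post_config))
    .load()
)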

  3. Query a SPARQL endpoint
sparql_query = """
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

SELECT ?pokemon ?pokemonLabel ?pokedexNumber WHERE {
  ?pokemon wdt:P31 wd:Q3966183 .
  ?pokemon wdt:P1685 ?pokedexNumber .
}
LIMIT 5
"""

from spark_fuse.io import (
    SPARQL_CONFIG_OPTION,
    SPARQL_DATA_SOURCE_NAME,
    build_sparql_config,
    register_sparql_data_source,
)

register_sparql_data_source(spark)
sparql_options = build_sparql_config(
    spark,
    "https://query.wikidata.org/sparql",
    source_config={
        "query": sparql_query,
        "request_type": "POST",
        "headers": {"User-Agent": "spark-fuse-demo/0.3 (contact@example.com)"},
    },
)
sparql_df = (
    spark.read.format(SPARQL_DATA_SOURCE_NAME)
    .option(SPARQL_CONFIG_OPTION, json.dumps(sparql_options))
    .load()
)
if sparql_df.rdd.isEmpty():
    print("Endpoint unavailable — adjust the query or check your network.")
else:
    sparql_df.show(5, truncate=False)
  4. Build date/time dimensions with rich attributes
from spark_fuse.utils.dataframe import create_date_dataframe, create_time_dataframe

date_dim = create_date_dataframe(spark, "2024-01-01", "2024-01-07")
time_dim = create_time_dataframe(spark, "00:00:00", "23:59:00", interval_seconds=60)

date_dim.select("date", "year", "week", "day_name").show()
time_dim.select("time", "hour", "minute").show(5)
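
These dimensions slot into ordinary joins. A minimal sketch, assuming a hypothetical events DataFrame keyed by date strings and that date_dim exposes the DateType date column shown above:

from pyspark.sql import functions as F

events = spark.createDataFrame(
    [("2024-01-03", 42), ("2024-01-05", 7)],
    ["event_date", "clicks"],
)
enriched = events.join(
    date_dim,
    F.to_date(F.col("event_date")) == F.col("date"),
    "left",
)
enriched.select("event_date", "clicks", "day_name", "week").show()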

Check out notebooks/demos/date_time_dimensions_demo.ipynb for an interactive walkthrough.

LLM-Powered Column Mapping

from spark_fuse.utils.transformations import map_column_with_llm

# Example input with free-form spellings for the helper to normalize
df = spark.createDataFrame([("appel",), ("banana",), ("Chery",)], ["fruit"])

standard_values = ["Apple", "Banana", "Cherry"]
mapped_df = map_column_with_llm(
    df,
    column="fruit",
    target_values=standard_values,
    model="o4-mini",
    temperature=None,
)
mapped_df.select("fruit", "fruit_mapped").show()

Set dry_run=True to check how many rows already match the target values without spending LLM tokens. Configure your OpenAI or Azure OpenAI credentials with the usual environment variables (e.g., OPENAI_API_KEY) before running live mappings. Some provider models only accept their default sampling configuration; pass temperature=None to omit the parameter when needed. This helper ships with spark-fuse 0.2.0 and later.
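
A token-free preview might look like the following sketch, reusing the DataFrame above (the exact return shape of a dry run isn't documented here, so the variable is illustrative):

preview = map_column_with_llm(
    df,
    column="fruit",
    target_values=standard_values,
    dry_run=True,  # report how many rows already match; no LLM calls are made
)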

CLI Usage

  • spark-fuse --help
  • spark-fuse datasources
  • spark-fuse read --format rest --path https://pokeapi.co/api/v2/pokemon --config rest.json --show 5
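
The --config file in the last command is assumed to carry the same keys as build_rest_api_config's source_config; a hypothetical rest.json, written from Python:

import json

# Assumed schema: mirrors the source_config keys shown in the Quickstart.
config = {
    "records_field": "results",
    "pagination": {"mode": "response", "field": "next", "max_pages": 2},
    "params": {"limit": 20},
}
with open("rest.json", "w") as f:
    json.dump(config, f, indent=2)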

CI

  • GitHub Actions runs ruff and pytest for Python 3.9–3.11.

License

  • Apache 2.0
