Open-source PySpark toolkit with data sources for REST APIs and SPARQL endpoints.
spark-fuse
spark-fuse is an open-source toolkit for PySpark — providing utilities, data sources, and tools to fuse your data workflows across JSON-centric REST APIs and SPARQL endpoints.
Features
- Data sources for REST APIs (JSON payloads with pagination/retry support), SPARQL services, and Qdrant collections (read/write).
- SparkSession helpers with sensible defaults and environment detection (databricks/fabric/local heuristics retained for legacy jobs).
- DataFrame utilities for previews, schema checks, and ready-made date/time dimensions (daily calendar attributes and clock buckets).
- LLM-powered semantic column normalization and LangChain-backed embedding generation (with optional text splitters) that batch work to limit API calls.
- Similarity partitioning toolkit with modular embedding preparation, clustering, and representative selection utilities.
- Change-tracking helpers to write current-only or history-preserving datasets with concise options (see the sketch after this list).
- Typer-powered CLI: list data sources and preview datasets via the REST/SPARQL helpers.
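The change-tracking helpers' exact options aren't shown in this README. As a rough illustration of what a history-preserving write involves, here is a plain-PySpark sketch; existing_df (the history table), batch_df (the new batch), and the columns id, valid_from, and is_current are all illustrative, not the helpers' actual API:
from pyspark.sql import functions as F
# Stamp the incoming batch as the current version of each key.
incoming = (
    batch_df
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("is_current", F.lit(True))
)
# Mark superseded rows in the existing history, leaving other keys untouched.
superseded = incoming.select("id").distinct().withColumn("_hit", F.lit(True))
closed = (
    existing_df.join(superseded, on="id", how="left")
    .withColumn(
        "is_current",
        F.when(F.col("_hit"), F.lit(False)).otherwise(F.col("is_current")),
    )
    .drop("_hit")
)
# History-preserving result: old versions retained, new rows appended.
updated = closed.unionByName(incoming, allowMissingColumns=True)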
Installation
- Create a virtual environment (recommended)
- macOS/Linux:
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
- Windows (PowerShell):
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
- From source (dev):
pip install -e ".[dev]"
- From PyPI:
pip install "spark-fuse>=1.0.2"
Quickstart
- Create a SparkSession with helpful defaults
from spark_fuse.spark import create_session
spark = create_session(app_name="spark-fuse-quickstart")
- Load paginated REST API responses
import json
from spark_fuse.io import (
    REST_API_CONFIG_OPTION,
    REST_API_FORMAT,
    build_rest_api_config,
    register_rest_data_source,
)
register_rest_data_source(spark)
config = build_rest_api_config(
    spark,
    "https://pokeapi.co/api/v2/pokemon",
    source_config={
        "request_type": "GET",  # switch to "POST" for endpoints that require a body
        "records_field": "results",
        "pagination": {"mode": "response", "field": "next", "max_pages": 2},
        "params": {"limit": 20},
    },
)
pokemon = (
    spark.read.format(REST_API_FORMAT)
    .option(REST_API_CONFIG_OPTION, json.dumps(config))
    .load()
)
pokemon.select("name").show(5)
Need to hit a POST endpoint? Set "request_type": "POST" and attach your payload with
"request_body": {...} (defaults to JSON encoding—add "request_body_type": "data" for form bodies).
Flip on "include_response_payload": True to add a response_payload column with the raw server JSON.
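Put together, a POST configuration could look like the sketch below; the endpoint URL and body are placeholders, while the option names are the ones documented above:
post_config = build_rest_api_config(
    spark,
    "https://api.example.com/search",  # placeholder endpoint
    source_config={
        "request_type": "POST",
        "request_body": {"query": "pikachu", "page_size": 20},  # JSON-encoded by default
        # "request_body_type": "data",  # switch to form encoding instead
        "records_field": "results",
        "include_response_payload": True,  # adds a response_payload column
    },
)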
- Query a SPARQL endpoint
sparql_query = """
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>
SELECT ?pokemon ?pokemonLabel ?pokedexNumber WHERE {
?pokemon wdt:P31 wd:Q3966183 .
?pokemon wdt:P1685 ?pokedexNumber .
}
LIMIT 5
"""
from spark_fuse.io import (
    SPARQL_CONFIG_OPTION,
    SPARQL_DATA_SOURCE_NAME,
    build_sparql_config,
    register_sparql_data_source,
)
register_sparql_data_source(spark)
sparql_options = build_sparql_config(
    spark,
    "https://query.wikidata.org/sparql",
    source_config={
        "query": sparql_query,
        "request_type": "POST",
        "headers": {"User-Agent": "spark-fuse-demo/1.0 (contact@example.com)"},
    },
)
sparql_df = (
    spark.read.format(SPARQL_DATA_SOURCE_NAME)
    .option(SPARQL_CONFIG_OPTION, json.dumps(sparql_options))
    .load()
)
if sparql_df.rdd.isEmpty():
    print("Endpoint unavailable — adjust the query or check your network.")
else:
    sparql_df.show(5, truncate=False)
- Write to a Qdrant collection
from spark_fuse.io import (
    QDRANT_CONFIG_OPTION,
    QDRANT_FORMAT,
    build_qdrant_write_config,
    register_qdrant_data_source,
)
register_qdrant_data_source(spark)
write_cfg = build_qdrant_write_config(
    "http://localhost:6333",
    collection="pokemon",
    id_field="id",
    vector_field="embedding",
    payload_fields=["name", "type"],
)
df.write.format(QDRANT_FORMAT).option(QDRANT_CONFIG_OPTION, json.dumps(write_cfg)).save()
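Reading back from Qdrant should follow the same register/format pattern. The sketch below assumes a plain dict for the read config; the README only shows the write builder, so check the package for the actual read helper and config shape:
read_cfg = {"url": "http://localhost:6333", "collection": "pokemon"}  # assumed config shape
points = (
    spark.read.format(QDRANT_FORMAT)
    .option(QDRANT_CONFIG_OPTION, json.dumps(read_cfg))
    .load()
)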
- Generate embeddings with LangChain (optionally split text)
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
from spark_fuse.utils.llm import with_langchain_embeddings
splitter = RecursiveCharacterTextSplitter(chunk_size=256, chunk_overlap=32)
embedded = with_langchain_embeddings(
    df,
    input_col="text",
    embeddings=lambda: OpenAIEmbeddings(model="text-embedding-3-small"),
    text_splitter=splitter,
    output_col="embedding",
    aggregation="mean",
    batch_size=16,
)
embedded.select("text", "embedding").show(3, truncate=False)
Pass a factory (lambda: OpenAIEmbeddings(...)) when the client cannot be pickled or needs executor-local setup. Provide a LangChain text splitter to chunk long documents before embedding; chunk vectors are combined with the chosen aggregation strategy (mean or first). Install langchain-core, langchain-openai, and langchain-text-splitters to use this helper.
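The factory is only needed when the client itself cannot be serialized; if it pickles cleanly, passing the instance directly should also work (an assumption inferred from the factory advice above, not shown in this README):
embedded_direct = with_langchain_embeddings(
    df,
    input_col="text",
    embeddings=OpenAIEmbeddings(model="text-embedding-3-small"),  # direct instance instead of a factory
    output_col="embedding",
)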
- Build date/time dimensions with rich attributes
from spark_fuse.utils.dataframe import create_date_dataframe, create_time_dataframe
date_dim = create_date_dataframe(spark, "2024-01-01", "2024-01-07")
time_dim = create_time_dataframe(spark, "00:00:00", "23:59:00", interval_seconds=60)
date_dim.select("date", "year", "week", "day_name").show()
time_dim.select("time", "hour", "minute").show(5)
Check out notebooks/demos/date_time_dimensions_demo.ipynb for an interactive walkthrough.
- Partition embeddings and pick representatives
from spark_fuse.similarity import (
    CosineSimilarity,
    IdentityEmbeddingGenerator,
    KMeansPartitioner,
    MaxColumnChoice,
    SimilarityPipeline,
)
pipeline = SimilarityPipeline(
    embedding_generator=IdentityEmbeddingGenerator(input_col="embedding"),
    partitioner=KMeansPartitioner(k=3, seed=7),
    similarity_metric=CosineSimilarity(embedding_col="embedding"),
    choice_function=MaxColumnChoice(column="score"),
)
clustered = pipeline.run(df)
representatives = pipeline.select_representatives(clustered)
See docs/guides/similarity_partitioning_demo.md for a walkthrough and notebooks/demos/similarity_pipeline_demo.ipynb for an interactive companion.
LLM-Powered Column Mapping
from spark_fuse.utils.llm import map_column_with_llm
standard_values = ["Apple", "Banana", "Cherry"]
mapped_df = map_column_with_llm(
    df,
    column="fruit",
    target_values=standard_values,
    model="o4-mini",
    temperature=None,
)
mapped_df.select("fruit", "fruit_mapped").show()
Set dry_run=True to inspect how many rows already match without spending LLM tokens. Configure your OpenAI or Azure OpenAI credentials with the usual environment variables before running live mappings. Some provider models only accept their default sampling configuration—pass temperature=None to omit the parameter when needed. The helper is available across spark-fuse 0.2.0 and later, including the 1.0.x series.
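For instance, a token-free audit before the live run might look like this (same helper and arguments as above; the exact shape of the dry-run report isn't shown in this README):
audit = map_column_with_llm(
    df,
    column="fruit",
    target_values=standard_values,
    dry_run=True,  # count rows that already match, without calling the LLM
)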
CLI Usage
spark-fuse --help
spark-fuse datasources
spark-fuse read --format rest --path https://pokeapi.co/api/v2/pokemon --config rest.json --show 5
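The rest.json passed to --config presumably mirrors the source_config keys from the Quickstart; a sketch, since the exact CLI config schema isn't documented here:
{
  "request_type": "GET",
  "records_field": "results",
  "pagination": {"mode": "response", "field": "next", "max_pages": 2},
  "params": {"limit": 20}
}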
CI
- GitHub Actions runs ruff and pytest for Python 3.9–3.11.
License
- Apache 2.0
Download files
Source Distribution: spark_fuse-1.2.0.tar.gz
Built Distribution: spark_fuse-1.2.0-py3-none-any.whl
File details
Details for the file spark_fuse-1.2.0.tar.gz.
File metadata
- Download URL: spark_fuse-1.2.0.tar.gz
- Upload date:
- Size: 50.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c090f034160a6979ac06825ff15a88b437261829b45ae646741ac0ea5c65f515 |
| MD5 | 4d6a7c70eb5c7ea06244d52aca9da42f |
| BLAKE2b-256 | 3da1734df6c82a655b817303fb1e04e6177734dd2f65e86cce356faa3a8b119a |
Provenance
The following attestation bundles were made for spark_fuse-1.2.0.tar.gz:
Publisher: publish.yml on kevinsames/spark-fuse
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: spark_fuse-1.2.0.tar.gz
- Subject digest: c090f034160a6979ac06825ff15a88b437261829b45ae646741ac0ea5c65f515
- Sigstore transparency entry: 984914577
- Sigstore integration time:
- Permalink: kevinsames/spark-fuse@4750758487ba5229cd17a96986459d3ee13d413c
- Branch / Tag: refs/tags/v.1.2.0
- Owner: https://github.com/kevinsames
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@4750758487ba5229cd17a96986459d3ee13d413c
- Trigger Event: release
File details
Details for the file spark_fuse-1.2.0-py3-none-any.whl.
File metadata
- Download URL: spark_fuse-1.2.0-py3-none-any.whl
- Upload date:
- Size: 65.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5332266b670604d99b61f61767cc0c4a620f7f6c23780c8b8d55e357a507d1b8 |
| MD5 | 0df7b562dc82f6409f57c29c2eef56dd |
| BLAKE2b-256 | 0504b9ec38d48700e3485c8c621c45eae8a98e09b1da5dfc32f342676da23c19 |
Provenance
The following attestation bundles were made for spark_fuse-1.2.0-py3-none-any.whl:
Publisher: publish.yml on kevinsames/spark-fuse
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: spark_fuse-1.2.0-py3-none-any.whl
- Subject digest: 5332266b670604d99b61f61767cc0c4a620f7f6c23780c8b8d55e357a507d1b8
- Sigstore transparency entry: 984914580
- Sigstore integration time:
- Permalink: kevinsames/spark-fuse@4750758487ba5229cd17a96986459d3ee13d413c
- Branch / Tag: refs/tags/v.1.2.0
- Owner: https://github.com/kevinsames
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@4750758487ba5229cd17a96986459d3ee13d413c
- Trigger Event: release