
spark-infer-ai

Distributed AI inference for PySpark DataFrames.


spark-infer-ai (imported as spark_ai) brings model-powered text processing directly into Spark transformations through a simple API.

It is designed for portability and works anywhere Spark runs: local development, EMR, Dataproc, Kubernetes, or on-prem clusters.

Features

  • Spark-native sentiment analysis API
  • Vectorized execution with pandas_udf for better throughput than row-wise Python UDFs
  • Hugging Face Transformers backend
  • Null-safe text handling for production pipelines
  • Clean package structure for extension with additional AI tasks/backends
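
The null-safe, batched behavior listed above can be sketched roughly as follows. This is a minimal illustration of the idea, not the library's actual implementation; `fake_model` is a stand-in for the Hugging Face pipeline:

```python
from typing import Callable, Optional

def sentiment_batch(
    texts: list[Optional[str]],
    classify: Callable[[list[str]], list[str]],
) -> list[Optional[str]]:
    """Classify a batch of texts, passing nulls through unchanged."""
    # Collect indices of non-null texts so the model only sees real input.
    idx = [i for i, t in enumerate(texts) if t is not None]
    labels = classify([texts[i] for i in idx]) if idx else []
    # Reassemble the batch: nulls stay null, everything else gets a label.
    out: list[Optional[str]] = [None] * len(texts)
    for i, label in zip(idx, labels):
        out[i] = label
    return out

# Stand-in classifier for demonstration only.
def fake_model(batch: list[str]) -> list[str]:
    return ["POSITIVE" if "love" in t else "NEGATIVE" for t in batch]

print(sentiment_batch(["I love it", None, "terrible"], fake_model))
# → ['POSITIVE', None, 'NEGATIVE']
```

In the real UDF the same skip-nulls-then-reassemble pattern would operate on pandas Series batches delivered by Arrow.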

Installation

pip install spark-infer-ai

Requirements

  • Python 3.10+
  • Apache Spark 3.5+
  • Java runtime compatible with your Spark distribution

Core dependencies are installed automatically:

  • pyspark
  • pandas
  • pyarrow
  • transformers
  • torch

Quick Start

from pyspark.sql import SparkSession
from spark_ai import AI

spark = (
    SparkSession.builder
    .appName("spark-ai-demo")
    .config("spark.sql.execution.arrow.pyspark.enabled", "true")
    .getOrCreate()
)

df = spark.createDataFrame(
    [
        ("I love this product!",),
        ("This is the worst experience ever.",),
    ],
    ["review"],
)

ai = AI()
result = df.withColumn("sentiment", ai.sentiment("review"))
result.show(truncate=False)

Expected sentiment labels are typically POSITIVE / NEGATIVE (model-dependent).

API

AI

Primary interface for DataFrame AI transformations.

AI.sentiment(column_name: str)

Applies sentiment analysis to a text column and returns a Spark Column.

result = df.withColumn("sentiment", ai.sentiment("review"))

Performance Notes

Internally, spark-infer-ai uses a vectorized pandas UDF and batched Hugging Face inference.

For best performance in production:

  • Enable Arrow:
    • spark.sql.execution.arrow.pyspark.enabled=true
  • Tune Spark partitions to match your cluster resources
  • Tune batch_size for your hardware, or enable auto_tune_batch_size=True
  • Run benchmarks on representative text lengths and data sizes
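
For example, the Arrow and partition settings above could be set in spark-defaults.conf like this (the partition values are illustrative; tune them to your cluster):

```
spark.sql.execution.arrow.pyspark.enabled   true
spark.sql.shuffle.partitions                200
spark.default.parallelism                   64
```

The same properties can also be passed via `SparkSession.builder.config(...)`, as in the Quick Start.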

Model-loading behavior:

  • Spark may run multiple Python workers per executor
  • Each Python worker keeps its own singleton model instance
  • That means model reuse is per worker process, not globally shared across all workers
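
The per-worker reuse described above is the standard lazy-singleton pattern: a module-level global is populated on first use, and because module globals are not shared across processes, each Python worker ends up with exactly one model. A rough sketch (with a hypothetical `loader` standing in for building the Hugging Face pipeline):

```python
_MODEL = None  # one instance per Python worker process

def get_model(loader):
    """Build the expensive object at most once per process, then reuse it."""
    global _MODEL
    if _MODEL is None:
        _MODEL = loader()
    return _MODEL

# Demonstration with a counting loader: the second call reuses the instance.
calls = {"n": 0}
def loader():
    calls["n"] += 1
    return object()

a = get_model(loader)
b = get_model(loader)
assert a is b and calls["n"] == 1
```

On a cluster this means model download and load cost is paid once per Python worker, so fewer, longer-lived workers generally amortize it better.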

You can use the included benchmark script:

python examples/benchmark_sentiment.py

Example benchmark output:

rows=20000
partitions=8
elapsed_seconds=6.383
rows_per_second=3133.1

Logging and Runtime Warnings

Common Spark startup warnings like:

  • NativeCodeLoader: Unable to load native-hadoop library...
  • JDK incubator module notices

are typically informational in local environments and do not indicate a failure.

Development

Clone and install in editable mode:

pip install -e ".[dev]"

Run tests:

pytest -q

Project Structure

src/spark_ai/
  ai.py                  # Public API
  config.py              # Central configuration
  udf/sentiment_udf.py   # Vectorized Spark UDF
  backends/              # Inference backend implementations
tests/unit/              # Unit tests
examples/                # Usage and benchmark scripts

