
Sail


Sail is an open-source unified and distributed multimodal computation framework created by LakeSail.

Our mission is to unify batch processing, stream processing, and compute-intensive AI workloads. Sail is a compute engine that is:

  • Compatible with the Spark Connect protocol, supporting the Spark SQL and DataFrame API with no code rewrites required.
  • ~4x faster than Spark in benchmarks (up to 8x in specific workloads).
  • 94% lower infrastructure costs.
  • 100% Rust-native with no JVM overhead, delivering memory safety, instant startup, and predictable performance.

🚀 Sail outperforms Spark, popular Spark accelerators, Databricks, and Snowflake on ClickBench.

Documentation

The documentation for the latest Sail version can be found here.

Installation

Quick Start

Sail is available as a Python package on PyPI. You can install it along with PySpark in your Python environment.

pip install pysail
pip install "pyspark[connect]"

Alternatively, since Spark 4.0 you can install the lightweight client package pyspark-client instead of the full pyspark distribution. The pyspark-connect package, which is equivalent to pyspark[connect], is also available since Spark 4.0.
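For example, the lightweight setup described above can be installed as follows (choose one client package; both are published on PyPI for Spark 4.0 and later):

```shell
# Sail itself
pip install pysail

# Lightweight Spark Connect client (Spark 4.0+), as an alternative to
# the full "pyspark[connect]" installation shown above
pip install pyspark-client
```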

Advanced Use Cases

You can install Sail from source to optimize performance for your specific hardware architecture. The detailed Installation Guide walks you through this process step-by-step.

If you need to deploy Sail in production environments, the Deployment Guide provides comprehensive instructions for deploying Sail on Kubernetes clusters and other infrastructure configurations.

Getting Started

Starting the Sail Server

Option 1: Command Line Interface. You can start the local Sail server using the sail command.

sail spark server --port 50051

Option 2: Python API. You can start the local Sail server using the Python API.

from pysail.spark import SparkConnectServer

server = SparkConnectServer(port=50051)
server.start(background=False)

Option 3: Kubernetes. You can deploy Sail on Kubernetes and run Sail in cluster mode for distributed processing. Please refer to the Kubernetes Deployment Guide for instructions on building the Docker image and writing the Kubernetes manifest YAML file.

kubectl apply -f sail.yaml
kubectl -n sail port-forward service/sail-spark-server 50051:50051

Connecting to the Sail Server

Once you have a running Sail server, you can connect to it in PySpark. No changes are needed in your PySpark code!

from pyspark.sql import SparkSession

spark = SparkSession.builder.remote("sc://localhost:50051").getOrCreate()
spark.sql("SELECT 1 + 1").show()

Please refer to the Getting Started guide for further details.

Feature Highlights

Storage

Sail supports a variety of storage backends for reading and writing data. You can read more details in our Storage Guide.

Here are the storage options supported:

  • AWS S3
  • Cloudflare R2
  • Azure
  • Google Cloud Storage
  • Hugging Face
  • HDFS
  • File systems
  • HTTP/HTTPS
  • In-memory storage
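As an illustrative sketch, each backend is addressed through a URL-style path passed to the ordinary PySpark I/O API. The bucket and column names below are placeholders, and a running Sail server with configured credentials is assumed:

```python
# Hypothetical dataset path; supported backends are selected by the
# URL scheme of the path (e.g. s3:// for AWS S3, file:// for local files).
path = "s3://my-bucket/data/events.parquet"

# With a SparkSession `spark` connected to Sail (see Getting Started),
# reading and processing the data uses standard PySpark calls:
#
#   df = spark.read.parquet(path)
#   df.groupBy("event_type").count().show()
```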

Lakehouse Formats

Sail provides native support for modern lakehouse table formats, offering reliable storage layers with strong data management guarantees and ensuring interoperability with existing datasets.

Please refer to the format-specific guides in the documentation for details on each supported format.

Catalog Providers

Sail supports multiple catalog providers, such as the Apache Iceberg REST Catalog and Unity Catalog. You can manage datasets as external tables and integrate with broader data-platform ecosystems.

For more details on usage and best practices, see the Catalog Guide.

Benchmark Results

Derived TPC-H results show that Sail outperforms Apache Spark in every query:

  • Execution Time: ~4× faster across diverse SQL workloads.
  • Hardware Cost: 94% lower, with significantly lower peak memory usage and zero shuffle spill.
Metric                       Spark      Sail
Total Query Time             387.36 s   102.75 s
Query Speed-Up               Baseline   43%–727%
Peak Memory Usage            54 GB      22 GB
Disk Write (Shuffle Spill)   > 110 GB   0 GB

These results come from a derived TPC-H benchmark (22 queries, scale factor 100, Parquet format) on AWS r8g.4xlarge instances.

Query Time Comparison

See the full analysis and graphs on our Benchmark Results page.

Contributing

Contributions are more than welcome!

Please submit GitHub issues for bug reports and feature requests. You are also welcome to ask questions in GitHub discussions.

Feel free to create a pull request if you would like to make a code change. You can refer to the Development Guide to get started.

Additionally, please join our Slack Community if you haven’t already!

Why Choose Sail?

When Spark was created over 15 years ago, it was revolutionary. It redefined distributed data processing and powered ETL, machine learning, and analytics pipelines across industries.

But Spark’s JVM-based architecture now struggles to meet modern demands for performance and cloud efficiency:

  • Garbage collection pauses introduce latency spikes.
  • Serialization overhead slows data exchange between the JVM and Python.
  • Heavy executors drive up cloud costs and complicate scaling.
  • Row-based processing performs poorly on analytical workloads and leaves hardware efficiency untapped.
  • Slow startup delays workloads, hurting interactive and on-demand use cases.

Sail solves these problems with a modern, Rust-native design.

Sail is Spark-compatible

Sail offers a drop-in replacement for Spark SQL and the Spark DataFrame API. Existing PySpark code works out of the box once you connect your Spark client session to Sail over the Spark Connect protocol.

  • Spark SQL Dialect Support. A custom Rust parser (built with parser combinators and Rust procedural macros) covers Spark SQL syntax with production-grade accuracy.
  • DataFrame API Support. Spark DataFrame operations run on Sail with identical semantics.
  • Python UDF, UDAF, UDWF, and UDTF Support. Python, Pandas, and Arrow UDFs all follow the same conventions as Spark.
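To illustrate the UDF point with a minimal sketch (the function and column names below are made up, and a session `spark` connected to Sail is assumed), a plain Python function becomes a Spark UDF through the standard PySpark API:

```python
# A plain Python function; on Sail it runs inside the engine with
# zero-copy Arrow data sharing rather than JVM<->Python serialization.
def add_one(x: int) -> int:
    return x + 1

# With a SparkSession `spark` connected to Sail, register and use it
# exactly as you would on Spark:
#
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import IntegerType
#   add_one_udf = udf(add_one, IntegerType())
#   spark.range(3).select(add_one_udf("id")).show()
```

The key point is that nothing in the function or its registration is Sail-specific; only the execution engine behind the Spark Connect session changes.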

Sail’s Advantages over Spark

  • Rust-Native Engine. No garbage collection pauses, no JVM memory tuning, and low memory footprint.
  • Columnar Format and Vectorized Execution. Built on top of Apache Arrow and Apache DataFusion, the columnar in-memory format and SIMD instructions unlock blazing-fast query execution.
  • Lightning-Fast Python UDFs. Python code runs inside Sail with zero serialization overhead as Arrow array pointers enable zero-copy data sharing.
  • Performant Data Shuffling. Workers exchange Arrow columnar data directly, minimizing shuffle costs for joins and aggregations.
  • Lightweight, Stateless Workers. Workers start in seconds, consume only a few megabytes of memory at idle, and scale elastically to cut cloud costs and simplify operations.
  • Concurrency and Memory Safety You Can Trust. Rust’s ownership model prevents null pointers, race conditions, and unsafe memory access for unmatched reliability.

Curious about how Sail stacks up against Spark? Explore our Why Sail? page. Ready to bring your existing workloads over? Our Migration Guide shows you how.

Further Reading

  • Architecture – Overview of Sail’s design for both local and cluster modes, and how it transitions seamlessly between them.
  • Query Planning – Detailed explanation of how Sail parses SQL and Spark relations, builds logical and physical plans, and handles execution for local and cluster modes.
  • SQL and DataFrame Features – Complete reference for Spark SQL and DataFrame API compatibility.
  • LakeSail Blog – Updates on Sail releases, benchmarks, and technical insights.

✨Using Sail? Tell us your story and get free merch!✨

Download files

Download the file for your platform.

Source Distribution

  • pysail-0.4.6.tar.gz (1.5 MB): Source

Built Distributions

  • pysail-0.4.6-pp39-pypy39_pp73-manylinux_2_24_aarch64.whl (46.5 MB): PyPy, manylinux (glibc 2.24+), ARM64
  • pysail-0.4.6-cp38-abi3-win_amd64.whl (54.3 MB): CPython 3.8+, Windows x86-64
  • pysail-0.4.6-cp38-abi3-manylinux_2_24_aarch64.whl (46.5 MB): CPython 3.8+, manylinux (glibc 2.24+), ARM64
  • pysail-0.4.6-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (50.0 MB): CPython 3.8+, manylinux (glibc 2.17+), x86-64
  • pysail-0.4.6-cp38-abi3-macosx_11_0_arm64.whl (44.8 MB): CPython 3.8+, macOS 11.0+, ARM64
  • pysail-0.4.6-cp38-abi3-macosx_10_12_x86_64.whl (48.8 MB): CPython 3.8+, macOS 10.12+, x86-64

File details

Details for the file pysail-0.4.6.tar.gz.

File metadata

  • Download URL: pysail-0.4.6.tar.gz
  • Size: 1.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing: Yes
  • Uploaded via: maturin/1.11.5

File hashes

Hashes for pysail-0.4.6.tar.gz:

Algorithm    Hash digest
SHA256       5f5c2d46a092256d04faa7f6c00e830a30f62fd9a4e1c8e0237984ba1c12bb6a
MD5          0e4033ea0daff7fedb3e543a04c6d52b
BLAKE2b-256  3c4c8056c0e57d98539f35a1afec070c41bb63d1a6237656af52b70f0f12131f

