
Build and run queries against data

Project description

DataFusion in Python


This is a Python library that binds to the Apache Arrow in-memory query engine, DataFusion.

Like PySpark, it allows you to build a plan through SQL or a DataFrame API against in-memory data, Parquet, or CSV files, run it in a multi-threaded environment, and obtain the result back in Python.
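
For example, a minimal sketch of the SQL route against a CSV file (the table name and file path are placeholders, and it assumes the register_csv and sql methods exposed by this release):

import datafusion

ctx = datafusion.ExecutionContext()
# "example" / "example.csv" are placeholder names for your own data
ctx.register_csv("example", "example.csv")
df = ctx.sql("SELECT a, b FROM example LIMIT 10")
batches = df.collect()  # list of pyarrow.RecordBatch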

It also allows you to use user-defined functions (UDFs) and user-defined aggregate functions (UDAFs) for complex operations.

The major advantage of this library over other execution engines is that it achieves zero-copy between Python and its execution engine: there is no serialization cost in using UDFs and UDAFs or in collecting results back to Python, beyond having to hold the GIL while those operations run.

Its query engine, DataFusion, is written in Rust, which offers strong guarantees around thread safety and the absence of memory leaks.

Technically, zero-copy is achieved via the Arrow C Data Interface.
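
The mechanism can be sketched in pure pyarrow: exporting an array through the C Data Interface hands over pointers to the existing buffers instead of copying them. The snippet below uses pyarrow's (semi-public) cffi helpers to round-trip an array; DataFusion performs the equivalent exchange on the Rust side.

import pyarrow
from pyarrow.cffi import ffi

arr = pyarrow.array([1, 2, 3])

# allocate the two C ABI structs that producers and consumers exchange
c_schema = ffi.new("struct ArrowSchema*")
c_array = ffi.new("struct ArrowArray*")

# export: fills the structs with pointers to arr's buffers (no copy)
arr._export_to_c(int(ffi.cast("uintptr_t", c_array)),
                 int(ffi.cast("uintptr_t", c_schema)))

# import: rebuilds an Array around those same buffers (still no copy)
roundtrip = pyarrow.Array._import_from_c(
    int(ffi.cast("uintptr_t", c_array)),
    int(ffi.cast("uintptr_t", c_schema)),
)
assert roundtrip.equals(arr)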

How to use it

Simple usage:

import datafusion
from datafusion import functions as f
from datafusion import col
import pyarrow

# create a context
ctx = datafusion.ExecutionContext()

# create a RecordBatch and a new DataFrame from it
batch = pyarrow.RecordBatch.from_arrays(
    [pyarrow.array([1, 2, 3]), pyarrow.array([4, 5, 6])],
    names=["a", "b"],
)
df = ctx.create_dataframe([[batch]])

# build a projection with two derived columns
df = df.select(
    col("a") + col("b"),
    col("a") - col("b"),
)

# execute and collect the first (and only) batch
result = df.collect()[0]

assert result.column(0) == pyarrow.array([5, 7, 9])
assert result.column(1) == pyarrow.array([-3, -3, -3])
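
collect() returns a list of pyarrow.RecordBatch, so results plug directly into the rest of the Arrow ecosystem. For instance, a small sketch of assembling them into a single table (to_pandas additionally requires pandas to be installed):

import pyarrow

table = pyarrow.Table.from_batches(df.collect())
pandas_df = table.to_pandas()  # optional: convert to a pandas DataFrame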

UDFs

from datafusion import udf

def is_null(array: pyarrow.Array) -> pyarrow.Array:
    return array.is_null()

is_null_arr = udf(is_null, [pyarrow.int64()], pyarrow.bool_(), 'stable')

# start from the original batch again; the earlier projection renamed the columns
df = ctx.create_dataframe([[batch]])
df = df.select(is_null_arr(col("a")))

# collect() returns a list of batches; take the first (and only) one
result = df.collect()[0]

assert result.column(0) == pyarrow.array([False] * 3)
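
Any Python function from pyarrow.Array to pyarrow.Array can back a UDF, including pyarrow.compute kernels. A second, hypothetical example (add_one is an illustrative name, not part of this package):

import pyarrow.compute as pc

def add_one(array: pyarrow.Array) -> pyarrow.Array:
    # pyarrow.compute kernels run vectorized over the whole batch
    return pc.add(array, 1)

add_one_arr = udf(add_one, [pyarrow.int64()], pyarrow.int64(), 'stable')

df = ctx.create_dataframe([[batch]])
result = df.select(add_one_arr(col("a"))).collect()[0]

assert result.column(0) == pyarrow.array([2, 3, 4])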

UDAF

import pyarrow
import pyarrow.compute
from datafusion import udaf, Accumulator


class MyAccumulator(Accumulator):
    """
    Interface of a user-defined accumulation.
    """
    def __init__(self):
        self._sum = pyarrow.scalar(0.0)

    def update(self, values: pyarrow.Array) -> None:
        # not nice since pyarrow scalars can't be summed yet. This breaks on `None`
        self._sum = pyarrow.scalar(self._sum.as_py() + pyarrow.compute.sum(values).as_py())

    def merge(self, states: pyarrow.Array) -> None:
        # not nice since pyarrow scalars can't be summed yet. This breaks on `None`
        self._sum = pyarrow.scalar(self._sum.as_py() + pyarrow.compute.sum(states).as_py())

    def state(self) -> pyarrow.Array:
        return pyarrow.array([self._sum.as_py()])

    def evaluate(self) -> pyarrow.Scalar:
        return self._sum


df = ctx.create_dataframe([[batch]])

# udaf(accumulator_class, input_type, return_type, state_types, volatility)
my_udaf = udaf(MyAccumulator, pyarrow.float64(), pyarrow.float64(), [pyarrow.float64()], 'stable')

df = df.aggregate(
    [],
    [my_udaf(col("a"))]
)

result = df.collect()[0]

assert result.column(0) == pyarrow.array([6.0])
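
For a plain sum you would normally reach for the built-in aggregate instead of a UDAF; assuming functions.sum is exposed (imported as f in the first example), the two should agree:

from datafusion import functions as f

df = ctx.create_dataframe([[batch]])
result = df.aggregate([], [f.sum(col("a"))]).collect()[0]

assert result.column(0) == pyarrow.array([6])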

How to install (from pip)

pip install datafusion
# or
python -m pip install datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'0.5.2'

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by pyo3 and maturin.

Bootstrap:

# fetch this repo
git clone git@github.com:datafusion-contrib/datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv venv
# activate the venv
source venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install pinned dependencies (requirements-310.txt is pinned for Python 3.10)
python -m pip install -r requirements-310.txt

Whenever the Rust code changes (your changes or via git pull):

# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest

How to update dependencies

To change test dependencies, edit requirements.in and run:

# install pip-tools (needed only once); consider running this inside the venv
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt

To update dependencies, run the compile command with -U:

python -m piptools compile -U --generate-hashes -o requirements-310.txt

More details are available in the pip-tools documentation.



Download files

Download the file for your platform.

Source Distribution

datafusion-0.5.2.tar.gz (81.2 kB)

Uploaded: Source

Built Distributions

datafusion-0.5.2-cp36-abi3-win_amd64.whl (5.9 MB)

Uploaded: CPython 3.6+, Windows x86-64

datafusion-0.5.2-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (7.3 MB)

Uploaded: CPython 3.6+, manylinux: glibc 2.12+, x86-64

datafusion-0.5.2-cp36-abi3-macosx_11_0_arm64.whl (5.0 MB)

Uploaded: CPython 3.6+, macOS 11.0+, ARM64

datafusion-0.5.2-cp36-abi3-macosx_10_7_x86_64.whl (5.8 MB)

Uploaded: CPython 3.6+, macOS 10.7+, x86-64

File details

Details for the file datafusion-0.5.2.tar.gz.

File metadata

  • Download URL: datafusion-0.5.2.tar.gz
  • Size: 81.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.10.2

File hashes

Hashes for datafusion-0.5.2.tar.gz

  • SHA256: 2f10e3fc61367a8017907cc0a9e0aa18dc80fd04b8b3413a90147c37a9e40bab
  • MD5: 2121c4f78caeebb775634bc9bb804b89
  • BLAKE2b-256: cbe292dafcf1ead3a83c68873edd78384b7fe3737284ee76c06808c63de8cf46

File details

Details for the file datafusion-0.5.2-cp36-abi3-win_amd64.whl.

File metadata

  • Download URL: datafusion-0.5.2-cp36-abi3-win_amd64.whl
  • Size: 5.9 MB
  • Tags: CPython 3.6+, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.10.2

File hashes

Hashes for datafusion-0.5.2-cp36-abi3-win_amd64.whl

  • SHA256: e664456e76409def5cf18fcb2c2710b244690bb1e13057770a69dd77e9efa2a9
  • MD5: 84f97196b3e2a8bc16a8452f275fc933
  • BLAKE2b-256: 5ab77aa00401c4595d2b2cdab1a1da980c5eb2c83be06213313375b9f249a584

File details

Details for the file datafusion-0.5.2-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.

File hashes

Hashes for datafusion-0.5.2-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl

  • SHA256: 8b96d50651c73f50ecc59198158b9a773dd1a37c346fa262cfcb1988c13d1fcf
  • MD5: 0bfdddcfea7c01570460486f7cf48ec5
  • BLAKE2b-256: a5b30fe47e08cb756f0fb98668edfea32df96472ee898638076c1f21ee244926

File details

Details for the file datafusion-0.5.2-cp36-abi3-macosx_11_0_arm64.whl.

File hashes

Hashes for datafusion-0.5.2-cp36-abi3-macosx_11_0_arm64.whl

  • SHA256: 3a888707fcfdb4a4f7412f05a7c68e8d884db3bae1e5218916452cba07658e9a
  • MD5: 9070ec1bf087174f0e39471f05c4fb2e
  • BLAKE2b-256: 0a05dc66cb3557ee888c685aacf57b9c42be073062652e7ab983ac5e9203e8df

File details

Details for the file datafusion-0.5.2-cp36-abi3-macosx_10_7_x86_64.whl.

File hashes

Hashes for datafusion-0.5.2-cp36-abi3-macosx_10_7_x86_64.whl

  • SHA256: 7bc19fbc01d42690b4698f1a8207b110a0e51bd8de2a5725fc86431b7c4ed8cd
  • MD5: cb48ffe74e481c13256dd2a449d782e2
  • BLAKE2b-256: b9a63f820ffc7d3770e52af5bd8f11281cbb11edeb5971e55727e56d3af06dd4
