
DataFusion in Python


This is a Python library that binds to the Apache Arrow in-memory query engine, DataFusion.

DataFusion's Python bindings can be used as an end-user tool as well as providing a foundation for building new systems.

Features

  • Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
  • Queries are optimized using DataFusion's query optimizer.
  • Execute user-defined Python code from SQL (a minimal sketch follows this list).
  • Exchange data with Pandas and other DataFrame libraries that support PyArrow.
  • Serialize and deserialize query plans in Substrait format.
  • Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
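
As a quick illustration of running user-defined Python code from SQL, below is a minimal sketch of registering a scalar UDF and calling it from SQL. It assumes the udf helper and register_udf method exposed by the datafusion package; the add_one function and the example query are purely illustrative, and exact signatures may vary between releases.

import pyarrow as pa
import pyarrow.compute as pc
from datafusion import SessionContext, udf

# A scalar UDF receives Arrow arrays and returns an Arrow array
def add_one(array: pa.Array) -> pa.Array:
    return pc.add(array, 1)

# Wrap the Python function with its Arrow input/output types and volatility
add_one_udf = udf(add_one, [pa.int64()], pa.int64(), "stable")

ctx = SessionContext()
ctx.register_udf(add_one_udf)

# The UDF can now be referenced from SQL, for example:
# ctx.sql("SELECT add_one(passenger_count) FROM taxi")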

Comparison with other projects

Here is a comparison with similar projects that may help you understand when DataFusion might be a suitable choice for your needs, and when it might not:

  • DuckDB is an open source, in-process analytic database. Like DataFusion, it supports very fast execution, both from its custom file format and directly from Parquet files. Unlike DataFusion, it is written in C/C++ and it is primarily used directly by users as a serverless database and query system rather than as a library for building such database systems.

  • Polars is one of the fastest DataFrame libraries at the time of writing. Like DataFusion, it is also written in Rust and uses the Apache Arrow memory model, but unlike DataFusion it does not provide full SQL support, nor as many extension points.

Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.

The Parquet file used in this example can be downloaded from the NYC Taxi & Limousine Commission trip record data page.

from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')

This produces a bar chart of trip counts by passenger count.
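
The same aggregation can also be expressed with the DataFrame API instead of SQL. The following is a minimal sketch assuming the col expression helper and the functions module exported by datafusion; method names such as aggregate, sort, and show follow the DataFrame API, but exact call signatures may differ between releases (the NULL filter from the SQL version is omitted for brevity).

from datafusion import SessionContext, col, functions as f

ctx = SessionContext()
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# group by passenger_count, count rows per group, and sort by the group key
df = (
    ctx.table("taxi")
    .aggregate([col("passenger_count")], [f.count(col("passenger_count"))])
    .sort(col("passenger_count").sort(ascending=True))
)
df.show()

As in the SQL example above, the resulting DataFrame can be converted to Pandas with to_pandas().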

Configuration

Runtime settings (such as memory and disk management) and session configuration options can be set when creating a context.

from datafusion import RuntimeConfig, SessionConfig, SessionContext

runtime = (
    RuntimeConfig()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)

Refer to the API documentation for more information.

Printing the context will show the current configuration settings.

print(ctx)

More Examples

See the examples directory for more information. The examples cover the following areas:

  • Executing Queries with DataFusion
  • Running User-Defined Python Code
  • Substrait Support
  • Executing SQL against DataFrame Libraries (Experimental)

How to install

Pip

pip install datafusion
# or
python -m pip install datafusion

Conda

conda install -c conda-forge datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'27.0.0'

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by PyO3 and Maturin.

The Maturin tooling used in this workflow can be installed via either Conda or Pip. Both approaches should offer the same experience; multiple options are provided only to accommodate developer preference. Bootstrapping instructions for Conda and Pip follow.

Bootstrap (Conda):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# create the conda environment for dev
conda env create -f ./conda/environments/datafusion-dev.yaml -n datafusion-dev
# activate the conda environment
conda activate datafusion-dev

Bootstrap (Pip):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv venv
# activate the venv
source venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies (pinned for Python 3.10)
python -m pip install -r requirements-310.txt

The tests rely on test data in git submodules.

git submodule init
git submodule update

Whenever rust code changes (your changes or via git pull):

# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest

Running & Installing pre-commit hooks

arrow-datafusion-python takes advantage of pre-commit (https://pre-commit.com/) to assist developers with code linting and to help reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional for the developer but certainly helpful for keeping PRs clean and concise.

Our pre-commit hooks can be installed by running pre-commit install, which installs the configurations from your ARROW_DATAFUSION_PYTHON_ROOT/.github directory and runs each time you perform a commit. If an offending lint is found, the commit fails, giving you the opportunity to make changes locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, by simply running pre-commit run --all-files.

How to update dependencies

To change test dependencies, edit requirements.in and run

# install pip-tools (this can be done only once), also consider running in venv
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt

To update dependencies, run with -U

python -m piptools compile -U --generate-hashes -o requirements-310.txt

More details are available in the pip-tools documentation.

