
DataFusion in Python


This is a Python library that binds to the Apache Arrow in-memory query engine, DataFusion.

DataFusion's Python bindings can be used both as an end-user tool and as a foundation for building new systems.

Features

  • Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
  • Queries are optimized using DataFusion's query optimizer.
  • Execute user-defined Python code from SQL (see the sketch after this list).
  • Exchange data with Pandas and other DataFrame libraries that support PyArrow.
  • Serialize and deserialize query plans in Substrait format.
  • Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
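
As a taste of the user-defined function support mentioned above, here is a minimal sketch. It is illustrative only: the double function and the commented-out taxi query are invented for this example, and it assumes the udf helper and SessionContext.register_udf exposed by the datafusion package.

import pyarrow as pa
import pyarrow.compute as pc
from datafusion import SessionContext, udf

# A scalar UDF receives PyArrow arrays and returns a PyArrow array
def double(values: pa.Array) -> pa.Array:
    return pc.multiply(values, 2)

# Wrap the Python function with its Arrow input/output types and volatility
double_udf = udf(double, [pa.int64()], pa.int64(), "stable")

ctx = SessionContext()
ctx.register_udf(double_udf)

# The function can now be called from SQL, for example:
# ctx.sql("SELECT double(passenger_count) FROM taxi")

Because the UDF operates on whole Arrow arrays rather than one Python value at a time, it avoids per-row Python overhead.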

Comparison with other projects

Here is a comparison with similar projects that may help you understand when DataFusion might, or might not, be suitable for your needs:

  • DuckDB is an open source, in-process analytic database. Like DataFusion, it supports very fast execution, both from its custom file format and directly from Parquet files. Unlike DataFusion, it is written in C/C++ and it is primarily used directly by users as a serverless database and query system rather than as a library for building such database systems.

  • Polars is one of the fastest DataFrame libraries at the time of writing. Like DataFusion, it is also written in Rust and uses the Apache Arrow memory model, but unlike DataFusion it does not provide full SQL support, nor as many extension points.

Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.

The Parquet file used in this example, yellow_tripdata_2021-01.parquet, is from the New York City Taxi & Limousine Commission (TLC) trip record data.

from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')

This produces a bar chart of trip count by number of passengers, saved as chart.png.
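
The same aggregation can also be written with the DataFrame API instead of SQL. The following is a rough sketch assuming the col helper and the functions module exported by the datafusion package; exact method names may vary slightly between releases.

from datafusion import SessionContext, col, functions as f

ctx = SessionContext()
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Equivalent of the SQL query above: group by passenger_count and count rows
df = ctx.table('taxi').aggregate(
    [col('passenger_count')],
    [f.count(col('passenger_count'))],
)
print(df.to_pandas())

Because results are Arrow-backed, they can be handed to Pandas (as above) or to any other DataFrame library that supports PyArrow.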

Configuration

Runtime settings (memory and disk) and session configuration settings can be specified when creating a context.

from datafusion import RuntimeConfig, SessionConfig, SessionContext

runtime = (
    RuntimeConfig()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)

Refer to the API documentation for more information.

Printing the context will show the current configuration settings.

print(ctx)
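
Since the configuration above enables the information schema, the effective settings can also be inspected with SQL. A small sketch, assuming DataFusion's standard SHOW ALL statement:

# requires with_information_schema(True) on the SessionConfig
print(ctx.sql("SHOW ALL").to_pandas())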

More Examples

See examples for more information.

  • Executing Queries with DataFusion
  • Running User-Defined Python Code
  • Substrait Support
  • Executing SQL against DataFrame Libraries (Experimental)

How to install

Pip

pip install datafusion
# or
python -m pip install datafusion

Conda

conda install -c conda-forge datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'31.0.0'

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by PyO3 and Maturin.

The Maturin tooling used in this workflow can be installed via either Conda or Pip. Both approaches should offer the same experience; multiple options are provided only to accommodate developer preference. Bootstrapping with Conda and with Pip is shown below.

Bootstrap (Conda):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# create the conda environment for dev
conda env create -f ./conda/environments/datafusion-dev.yaml -n datafusion-dev
# activate the conda environment
conda activate datafusion-dev

Bootstrap (Pip):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv venv
# activate the venv
source venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies (pinned for Python 3.10)
python -m pip install -r requirements-310.txt

The tests rely on test data in git submodules.

git submodule init
git submodule update

Whenever the Rust code changes (your own changes or via git pull):

# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest

Running & Installing pre-commit hooks

arrow-datafusion-python uses pre-commit to assist developers with code linting and to reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional, but they help keep PRs clean and concise.

Our pre-commit hooks can be installed by running pre-commit install, which installs the configuration found in your ARROW_DATAFUSION_PYTHON_ROOT/.github directory and runs the checks each time you commit. If an offending lint is found, the commit is aborted so that you can fix the issue locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, by running pre-commit run --all-files.

How to update dependencies

To change test dependencies, edit requirements.in and run:

# install pip-tools (this only needs to be done once); consider running it inside the venv
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt

To update dependencies, run with -U:

python -m piptools compile -U --generate-hashes -o requirements-310.txt

More details are available in the pip-tools documentation.

Download files

Download the file for your platform.

Source Distribution

  • datafusion-31.0.0.tar.gz (102.1 kB), Source

Built Distributions

  • datafusion-31.0.0-cp38-abi3-win_amd64.whl (15.2 MB), CPython 3.8+, Windows x86-64
  • datafusion-31.0.0-cp38-abi3-manylinux_2_28_aarch64.whl (16.0 MB), CPython 3.8+, manylinux (glibc 2.28+), ARM64
  • datafusion-31.0.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.6 MB), CPython 3.8+, manylinux (glibc 2.17+), x86-64
  • datafusion-31.0.0-cp38-abi3-macosx_11_0_arm64.whl (13.1 MB), CPython 3.8+, macOS 11.0+ ARM64
  • datafusion-31.0.0-cp38-abi3-macosx_10_7_x86_64.whl (14.3 MB), CPython 3.8+, macOS 10.7+ x86-64

File details

Details for the file datafusion-31.0.0.tar.gz.

File metadata

  • Download URL: datafusion-31.0.0.tar.gz
  • Upload date:
  • Size: 102.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.10

File hashes

Hashes for datafusion-31.0.0.tar.gz

  • SHA256: cb8f10e03427e2133372e4485142470076009f79efd9f2c97226977cde93bbe8
  • MD5: 5749603d5f73ce36b11031b21eb79b44
  • BLAKE2b-256: 13feee6d71addc2ccf4cf1af6e709e2d4fa78ae840e424886ba2a3681353e6cf


File details

Details for the file datafusion-31.0.0-cp38-abi3-win_amd64.whl.

File metadata

  • Download URL: datafusion-31.0.0-cp38-abi3-win_amd64.whl
  • Upload date:
  • Size: 15.2 MB
  • Tags: CPython 3.8+, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.10

File hashes

Hashes for datafusion-31.0.0-cp38-abi3-win_amd64.whl

  • SHA256: 96b7091e5fdbcbfda172891ca7d43b6722df646f5a30b685f92776b799ecb260
  • MD5: 75100e4cd92d21a42471913baf4facac
  • BLAKE2b-256: 1b7fbf33afae5c95603f30f2b72b62f1b6b7fb9c3ef7b5976c85a97daa857781


File details

Details for the file datafusion-31.0.0-cp38-abi3-manylinux_2_28_aarch64.whl.

File metadata

File hashes

Hashes for datafusion-31.0.0-cp38-abi3-manylinux_2_28_aarch64.whl

  • SHA256: 235d2419f828cfa596f785e6202e71df217dd26df933aa54bfa6395464c8167f
  • MD5: 729fd223c123b0c4aa7e7c43bc445175
  • BLAKE2b-256: e22c5d5923f1ebcd4c69ed8643c089529d308cc5510181d237b7c95273d931c1


File details

Details for the file datafusion-31.0.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for datafusion-31.0.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl

  • SHA256: 017dbd1df224160845a5b3388e476938226a3a6d713da41d4f359dbad2b28f6d
  • MD5: d1aefd17ab44b8975bf18d36422dd645
  • BLAKE2b-256: 308a49cacf2c7e098af2480e5911f4436bc0dec62f83deda0ded34c49c57ed17


File details

Details for the file datafusion-31.0.0-cp38-abi3-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for datafusion-31.0.0-cp38-abi3-macosx_11_0_arm64.whl

  • SHA256: fb2bd966479f0154fd11046e20a6beb0787ae1af9f078e7519b0df523874ac74
  • MD5: 35d8f9fe1a1f1126195c689672008a2a
  • BLAKE2b-256: 00ed11ab2f079193983315973d5ae6cdeecd0ae3117c4f29b3f12f92ca2480a2


File details

Details for the file datafusion-31.0.0-cp38-abi3-macosx_10_7_x86_64.whl.

File metadata

File hashes

Hashes for datafusion-31.0.0-cp38-abi3-macosx_10_7_x86_64.whl

  • SHA256: 6738852b0b123f86d621a8c9a2039574e0ce974fd28145573aeffbc6f275933b
  • MD5: 6f5104e7a468d4376278c785e6e4405c
  • BLAKE2b-256: 6d61d1a24b05861a56e12ee79d05bc6470ecd8fb146dbc0aa0a8bf904672cbe8

