DataFusion in Python

This is a Python library that binds to the Apache Arrow in-memory query engine, DataFusion.

DataFusion's Python bindings can be used as an end-user tool as well as a foundation for building new systems.

Features

  • Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
  • Queries are optimized using DataFusion's query optimizer.
  • Execute user-defined Python code from SQL (see the sketch after this list).
  • Exchange data with Pandas and other DataFrame libraries that support PyArrow.
  • Serialize and deserialize query plans in Substrait format.
  • Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
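
One of the features listed above is executing user-defined Python code from SQL. The sketch below shows roughly how a scalar UDF is registered and called; it follows the project's UDF examples, but the exact signatures of udf and register_record_batches should be verified against the API documentation for your version.

import pyarrow as pa
from datafusion import SessionContext, udf

ctx = SessionContext()

# Register a small in-memory table from an Arrow record batch
batch = pa.RecordBatch.from_arrays(
    [pa.array([1, None, 3], type=pa.int64())], names=["a"]
)
ctx.register_record_batches("t", [[batch]])

# A scalar UDF receives Arrow arrays; this one flags NULL values
def is_null(array: pa.Array) -> pa.Array:
    return array.is_null()

# Wrap the Python function; the SQL name is taken from the function name
is_null_udf = udf(is_null, [pa.int64()], pa.boolean(), "stable")
ctx.register_udf(is_null_udf)

# Call the UDF from SQL
ctx.sql("SELECT a, is_null(a) FROM t").show()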

Comparison with other projects

Here is a comparison with similar projects that may help you understand when DataFusion might or might not be suitable for your needs:

  • DuckDB is an open source, in-process analytic database. Like DataFusion, it supports very fast execution, both from its custom file format and directly from Parquet files. Unlike DataFusion, it is written in C/C++ and it is primarily used directly by users as a serverless database and query system rather than as a library for building such database systems.

  • Polars is one of the fastest DataFrame libraries at the time of writing. Like DataFusion, it is also written in Rust and uses the Apache Arrow memory model, but unlike DataFusion it does not provide full SQL support, nor as many extension points.

Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.

The Parquet file used in this example contains the NYC yellow taxi trip records for January 2021 and can be downloaded from the New York City Taxi & Limousine Commission's trip record data page.

from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')

This produces the following chart:

(Bar chart: Trip Count by Number of Passengers)
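
The same aggregation can also be written with the DataFrame API instead of SQL. The following is a rough sketch, assuming the col helper and the functions module exposed by the package; it omits the NULL filter from the SQL version, and the aggregate and sort signatures should be checked against the API documentation.

from datafusion import SessionContext, col
from datafusion import functions as f

ctx = SessionContext()
ctx.register_parquet("taxi", "yellow_tripdata_2021-01.parquet")

# Group by passenger_count, count rows per group, then order the result
df = (
    ctx.table("taxi")
    .aggregate([col("passenger_count")], [f.count(col("passenger_count"))])
    .sort(col("passenger_count").sort(ascending=True))
)

pandas_df = df.to_pandas()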

Configuration

Runtime settings (memory and disk) and session configuration options can be customized when creating a context.

from datafusion import RuntimeConfig, SessionConfig, SessionContext

runtime = (
    RuntimeConfig()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)

Refer to the API documentation for more information.

Printing the context will show the current configuration settings.

print(ctx)
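
Because with_information_schema(True) is enabled above, the settings can also be inspected from SQL using DataFusion's SHOW statements, for example:

# List every configuration setting and its current value
ctx.sql("SHOW ALL").show()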

More Examples

See the examples directory for more information, including:

  • Executing Queries with DataFusion
  • Running User-Defined Python Code
  • Substrait Support (see the sketch after this list)
  • Executing SQL against DataFrame Libraries (Experimental)
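
For Substrait, a round trip looks roughly like the sketch below. The module and function names (substrait.serde.serialize_bytes, substrait.serde.deserialize_bytes, substrait.consumer.from_substrait_plan) are assumptions based on the Substrait examples and may differ between releases, so consult the examples for the exact API.

from datafusion import SessionContext
from datafusion import substrait as ss

ctx = SessionContext()
ctx.register_parquet("taxi", "yellow_tripdata_2021-01.parquet")

# Serialize a SQL query to Substrait bytes (assumed API, see note above)
plan_bytes = ss.substrait.serde.serialize_bytes("SELECT passenger_count FROM taxi", ctx)

# The bytes can be sent over the network or written to a file, then turned
# back into a Substrait plan and a DataFusion logical plan on the other side
plan = ss.substrait.serde.deserialize_bytes(plan_bytes)
logical_plan = ss.substrait.consumer.from_substrait_plan(ctx, plan)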

How to install

Pip

pip install datafusion
# or
python -m pip install datafusion

Conda

conda install -c conda-forge datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'25.0.0'
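
As a quick smoke test, a trivial query can be run with the SessionContext API shown earlier:

from datafusion import SessionContext

# Confirm the native extension loads and can execute a query
ctx = SessionContext()
ctx.sql("SELECT 1 + 1 AS result").show()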

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by pyo3 and maturin.

The maturin tooling used in this workflow can be installed via either Conda or Pip. Both approaches should offer the same experience; multiple options are provided only to suit developer preference. Bootstrapping for Conda and for Pip is as follows.

Bootstrap (Conda):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# create the conda environment for dev
conda env create -f ./conda/environments/datafusion-dev.yaml -n datafusion-dev
# activate the conda environment
conda activate datafusion-dev

Bootstrap (Pip):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv venv
# activate the venv
source venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies (requirements-310.txt targets Python 3.10)
python -m pip install -r requirements-310.txt

The tests rely on test data in git submodules.

git submodule init
git submodule update

Whenever Rust code changes (your changes or after a git pull):

# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest

Running & Installing pre-commit hooks

arrow-datafusion-python takes advantage of pre-commit (https://pre-commit.com/) to assist developers with code linting and to reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional for the developer but certainly helpful for keeping PRs clean and concise.

Our pre-commit hooks can be installed by running pre-commit install, which installs the configurations from your ARROW_DATAFUSION_PYTHON_ROOT/.github directory and runs each time you perform a commit, failing the commit if an offending lint is found and giving you the opportunity to make changes locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, by simply running pre-commit run --all-files.

How to update dependencies

To change test dependencies, edit the requirements.in file and run:

# install pip-tools (needed only once); consider running inside the venv
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt

To update dependencies, run with -U

python -m piptools compile -U --generate-hashes -o requirements-310.txt

More details can be found in the pip-tools documentation.

Download files

Download the file for your platform.

Source Distribution

  • datafusion-25.0.0.tar.gz (94.5 kB, Source)

Built Distributions

  • datafusion-25.0.0-cp37-abi3-win_amd64.whl (15.1 MB, CPython 3.7+, Windows x86-64)
  • datafusion-25.0.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.2 MB, CPython 3.7+, manylinux: glibc 2.17+ x86-64)
  • datafusion-25.0.0-cp37-abi3-macosx_11_0_arm64.whl (12.7 MB, CPython 3.7+, macOS 11.0+ ARM64)
  • datafusion-25.0.0-cp37-abi3-macosx_10_7_x86_64.whl (14.1 MB, CPython 3.7+, macOS 10.7+ x86-64)

File details

Details for the file datafusion-25.0.0.tar.gz.

File metadata

  • Download URL: datafusion-25.0.0.tar.gz
  • Size: 94.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.6

File hashes

Hashes for datafusion-25.0.0.tar.gz:

  • SHA256: 921f4eeae8253cd1b43f8a9b1e2562a0a6db88a4c02f8d1545a6cf25f3b6cdec
  • MD5: aa85b2827d7edf4d614806377cce4bdf
  • BLAKE2b-256: ee5a06ae52a96309428e6b09be0177e5b02bbb936215154efb47ddaff57f42b7

File details

Details for the file datafusion-25.0.0-cp37-abi3-win_amd64.whl.

File metadata

  • Download URL: datafusion-25.0.0-cp37-abi3-win_amd64.whl
  • Size: 15.1 MB
  • Tags: CPython 3.7+, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.6

File hashes

Hashes for datafusion-25.0.0-cp37-abi3-win_amd64.whl:

  • SHA256: 3db0604ade63bfb107c5647f35eb06ad751b22044725a07537e05da43b10231a
  • MD5: 4eb2579eb4401e00ec918abd495e263d
  • BLAKE2b-256: 1c775d6c8cf414bf33e83743c8ebf60e2312f2a89fda877ae11c9832fcfb5d8b

File details

Details for the file datafusion-25.0.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File hashes

Hashes for datafusion-25.0.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl:

  • SHA256: f602e409938a49c232c14f9d2bef89350998695ece16932da5d18edd74039dab
  • MD5: 149b0e7e9bb073eceaffff43c2ebcce7
  • BLAKE2b-256: e23a045e38a7ac0698a898a2d68b9e05db269b40a980be2d0c79287e9b5c074d

File details

Details for the file datafusion-25.0.0-cp37-abi3-macosx_11_0_arm64.whl.

File hashes

Hashes for datafusion-25.0.0-cp37-abi3-macosx_11_0_arm64.whl:

  • SHA256: eefe9915c97b891a0af2a6efa435ba17a7dfdcfdfc105dd528a0ae722ca06ba8
  • MD5: 96f75c78f037af2539125b947fc7fd7c
  • BLAKE2b-256: b30bfd983d26ac35a2fac662f7a7e6d96c07212793e4ef7473e0c17b1751841a

File details

Details for the file datafusion-25.0.0-cp37-abi3-macosx_10_7_x86_64.whl.

File hashes

Hashes for datafusion-25.0.0-cp37-abi3-macosx_10_7_x86_64.whl:

  • SHA256: 2d18d938a7d7d344edcc688fa3651f8b490525e41bb50ca7d58a26684959e0d5
  • MD5: d1b74883ddceff798d5e463374291992
  • BLAKE2b-256: 2fbf19ee535b6e4654bd33d0e40be01fd19c2ba882a694ab7880b9ed790efaf4
