
Build and run queries against data

Project description

DataFusion in Python


This is a Python library that binds to DataFusion, the Apache Arrow in-memory query engine.

DataFusion's Python bindings can be used both as an end-user tool and as a foundation for building new systems.

Features

  • Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
  • Queries are optimized using DataFusion's query optimizer.
  • Execute user-defined Python code from SQL (a sketch follows this list).
  • Exchange data with Pandas and other DataFrame libraries that support PyArrow.
  • Serialize and deserialize query plans in Substrait format.
  • Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
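As a taste of the UDF support, here is a minimal sketch in the style of the project's examples. The create_dataframe call and the udf signature shown here are assumptions based on the bindings around this release and should be checked against the current API.

import pyarrow as pa
from datafusion import SessionContext, udf, col

# A scalar UDF operates on PyArrow data: one array in, one array out
def is_null(array: pa.Array) -> pa.Array:
    return array.is_null()

# Wrap the Python function with its argument types, return type, and volatility
is_null_arr = udf(is_null, [pa.int64()], pa.boolean(), "stable")

ctx = SessionContext()
batch = pa.RecordBatch.from_arrays([pa.array([1, None, 3])], names=["a"])
df = ctx.create_dataframe([[batch]])

# Use the UDF through the DataFrame API; it can likewise be registered
# with ctx.register_udf and then called from SQL
df = df.select(is_null_arr(col("a")))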

Comparison with other projects

Here is a comparison with similar projects that may help you decide when DataFusion is, or is not, a good fit for your needs:

  • DuckDB is an open source, in-process analytic database. Like DataFusion, it supports very fast execution, both from its custom file format and directly from Parquet files. Unlike DataFusion, it is written in C++ and is primarily used directly by users as a serverless database and query system rather than as a library for building such systems.

  • Polars is one of the fastest DataFrame libraries at the time of writing. Like DataFusion, it is also written in Rust and uses the Apache Arrow memory model, but unlike DataFusion it does not provide full SQL support, nor as many extension points.

Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.

The Parquet file used in this example can be downloaded from the NYC Taxi & Limousine Commission trip record data page.

from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')

This produces the following chart:

[Bar chart: trip count by number of passengers]
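The same aggregation can also be written with the DataFrame API instead of SQL. A minimal sketch (the null filter and ordering from the SQL version are omitted for brevity, and the count helper in datafusion.functions is an assumption to verify against the current API):

from datafusion import SessionContext, col, functions as f

ctx = SessionContext()
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Group by passenger_count and count rows, mirroring the SQL query above
df = ctx.table('taxi').aggregate(
    [col('passenger_count')],
    [f.count(col('passenger_count'))],
)
pandas_df = df.to_pandas()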

More Examples

See the examples in the repository for more information on:

  • Executing Queries with DataFusion
  • Running User-Defined Python Code
  • Substrait Support (a sketch follows this list)
  • Executing SQL against DataFrame Libraries (Experimental)
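For Substrait, the bindings expose serde helpers for converting a SQL query to and from Substrait plans. The sketch below follows the shape of the project's Substrait example, but the exact module nesting and function names (serde.serialize_bytes, serde.deserialize_bytes, consumer.from_substrait_plan) are assumptions to verify against the repository:

from datafusion import SessionContext
from datafusion import substrait as ss

ctx = SessionContext()
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Serialize a SQL query to Substrait protobuf bytes (names assumed; see the
# Substrait example in the repository for the authoritative API)
proto_bytes = ss.substrait.serde.serialize_bytes('SELECT passenger_count FROM taxi', ctx)

# Round-trip: deserialize the bytes, then turn the Substrait plan back
# into a DataFusion logical plan
plan = ss.substrait.serde.deserialize_bytes(proto_bytes)
logical_plan = ss.substrait.consumer.from_substrait_plan(ctx, plan)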

How to install (from pip)

Pip

pip install datafusion
# or
python -m pip install datafusion

Conda

conda install -c conda-forge datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'20.0.0'

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by PyO3 and maturin.

The maturin tooling used in this workflow can be installed via either Conda or Pip. Both approaches offer the same experience and are provided simply to suit developer preference. Bootstrapping instructions for both follow.

Bootstrap (Conda):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# create the conda environment for dev
conda env create -f ./conda/environments/datafusion-dev.yaml -n datafusion-dev
# activate the conda environment
conda activate datafusion-dev

Bootstrap (Pip):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv venv
# activate the venv
source venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies (pinned requirements for Python 3.10)
python -m pip install -r requirements-310.txt

The tests rely on test data in git submodules, which can be fetched with:

git submodule init
git submodule update

Whenever the Rust code changes (your own changes or via git pull), rebuild and re-run the tests:

# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest

Running & Installing pre-commit hooks

arrow-datafusion-python takes advantage of pre-commit (https://pre-commit.com/) to assist developers with code linting and to reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional for the developer but certainly helpful for keeping PRs clean and concise.

Our pre-commit hooks can be installed by running pre-commit install, which installs the configurations in your ARROW_DATAFUSION_PYTHON_ROOT/.github directory and runs each time you perform a commit, failing the commit if an offending lint is found and giving you the opportunity to make changes locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, by simply running:

pre-commit run --all-files

How to update dependencies

To change test dependencies, edit requirements.in and run:

# install pip-tools (needed only once); consider running inside the venv
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt

To upgrade the pinned dependencies, run the same command with -U:

python -m piptools compile -U --generate-hashes -o requirements-310.txt

More details are available in the pip-tools documentation.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

datafusion-20.0.0.tar.gz (12.8 MB)

Uploaded: Source

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

datafusion-20.0.0-cp37-abi3-manylinux_2_34_x86_64.whl (13.1 MB)

Uploaded: CPython 3.7+, manylinux (glibc 2.34+), x86-64

datafusion-20.0.0-cp37-abi3-macosx_11_0_arm64.whl (11.5 MB)

Uploaded: CPython 3.7+, macOS 11.0+, ARM64

datafusion-20.0.0-cp37-abi3-macosx_10_7_x86_64.whl (14.6 MB)

Uploaded: CPython 3.7+, macOS 10.7+, x86-64

File details

Details for the file datafusion-20.0.0.tar.gz.

File metadata

  • Download URL: datafusion-20.0.0.tar.gz
  • Upload date:
  • Size: 12.8 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: maturin/0.14.15

File hashes

Hashes for datafusion-20.0.0.tar.gz

  • SHA256: e3b5ad340188b804ffb3e4666fbc3580f6c715ba3971760e60d3023aa0fb71e0
  • MD5: 315d6db589de9d466a276194f090aed3
  • BLAKE2b-256: 476f651745f934a21c1a83eaf4e647c8abb8cf3e9a77160f024c8cb46e612948

See more details on using hashes here.
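To check a downloaded file against the published digests yourself, a minimal sketch using only the Python standard library:

import hashlib

# SHA256 digest published above for the source distribution
expected = 'e3b5ad340188b804ffb3e4666fbc3580f6c715ba3971760e60d3023aa0fb71e0'

# Hash the downloaded archive and compare against the published value
with open('datafusion-20.0.0.tar.gz', 'rb') as fh:
    actual = hashlib.sha256(fh.read()).hexdigest()

assert actual == expected, f'hash mismatch: {actual}'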

File details

Details for the file datafusion-20.0.0-cp37-abi3-manylinux_2_34_x86_64.whl.

File metadata

File hashes

Hashes for datafusion-20.0.0-cp37-abi3-manylinux_2_34_x86_64.whl

  • SHA256: d950193c58103d28bff13e88c7a331bd897334efbe490f745376195bf04a62ca
  • MD5: 7a042831c12db33fed366c0c7aca5d1b
  • BLAKE2b-256: 263ab333f7bba736258230b78c79631c03a6d65dc9b79a8cf3ad1571e8629b4b

See more details on using hashes here.

File details

Details for the file datafusion-20.0.0-cp37-abi3-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for datafusion-20.0.0-cp37-abi3-macosx_11_0_arm64.whl

  • SHA256: 6fe29e51af3b473db1268756f1bbae367367a66c99f438f2bd47a98fe4ae2d2d
  • MD5: d58298389e9043c51a5ccf0d4ba08059
  • BLAKE2b-256: d57fb1e72b33ae517082f698cdb7334bdc71cfc561d29af7af5526edbc59cf49

See more details on using hashes here.

File details

Details for the file datafusion-20.0.0-cp37-abi3-macosx_10_7_x86_64.whl.

File metadata

File hashes

Hashes for datafusion-20.0.0-cp37-abi3-macosx_10_7_x86_64.whl

  • SHA256: a529dcf69d1ed99e2dea312d7269c8c7c28330db88f197e73cba3d08a45712d9
  • MD5: 4f925fd824e6c1f58d0a59b0416f1de2
  • BLAKE2b-256: 9680cfe8c1d503375cceac55c0551694d622c362a92690313a2a1703110c4d89

See more details on using hashes here.
