
Build and run queries against data

Project description

DataFusion in Python


This is a Python library that binds to the Apache Arrow in-memory query engine, DataFusion.

DataFusion's Python bindings can be used both as an end-user tool and as a foundation for building new systems.

Features

  • Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
  • Queries are optimized using DataFusion's query optimizer.
  • Execute user-defined Python code from SQL (see the sketch after this list).
  • Exchange data with Pandas and other DataFrame libraries that support PyArrow.
  • Serialize and deserialize query plans in Substrait format.
  • Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
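
As a minimal sketch of the user-defined function feature listed above: the function, table name, and column below are made up for illustration, and the udf helper and register_udf call are assumed to be the scalar UDF API exposed by the datafusion package.

import pyarrow as pa
from datafusion import SessionContext, udf

def is_null(array: pa.Array) -> pa.Array:
    # A vectorized UDF receives a whole Arrow array and returns one
    return array.is_null()

ctx = SessionContext()

# Wrap the Python function as a scalar UDF: argument types, return type, volatility
is_null_udf = udf(is_null, [pa.int64()], pa.bool_(), "stable")
ctx.register_udf(is_null_udf)

# With a table "t" (int64 column "a") registered, the UDF could then be called from SQL, e.g.:
# df = ctx.sql("SELECT is_null(a) FROM t")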

Comparison with other projects

Here is a comparison with similar projects that may help you understand when DataFusion might or might not be a suitable fit for your needs:

  • DuckDB is an open source, in-process analytic database. Like DataFusion, it supports very fast execution, both from its custom file format and directly from Parquet files. Unlike DataFusion, it is written in C/C++ and it is primarily used directly by users as a serverless database and query system rather than as a library for building such database systems.

  • Polars is one of the fastest DataFrame libraries at the time of writing. Like DataFusion, it is also written in Rust and uses the Apache Arrow memory model, but unlike DataFusion it does not provide full SQL support, nor as many extension points.

Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.

The Parquet file used in this example (yellow_tripdata_2021-01.parquet) can be downloaded from the New York City Taxi and Limousine Commission trip record data page.

from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')

This produces the following chart:

[Chart: Trip Count by Number of Passengers]
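
Because DataFusion produces Arrow data natively, the results can also be handed to PyArrow directly instead of going through Pandas. A small sketch reusing the df from the query above (pyarrow is the only extra import assumed):

import pyarrow as pa

# Collect the query results as a list of Arrow record batches
batches = df.collect()

# Combine the batches into a single PyArrow table for further processing
table = pa.Table.from_batches(batches)
print(table.num_rows)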

Configuration

Both the runtime environment (memory and disk settings) and the session configuration can be customized when creating a context.

# import the configuration classes used below
from datafusion import RuntimeConfig, SessionConfig, SessionContext

runtime = (
    RuntimeConfig()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)

Refer to the API documentation for more information.

Printing the context will show the current configuration settings.

print(ctx)
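
Because the information schema was enabled above, the settings can also be inspected from SQL. A small sketch, assuming DataFusion's SHOW ALL statement (which requires with_information_schema(True)):

# List every configuration setting and its current value
settings = ctx.sql("SHOW ALL")
print(settings.to_pandas())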

More Examples

See the examples directory for more information on:

  • Executing Queries with DataFusion
  • Running User-Defined Python Code
  • Substrait Support
  • Executing SQL against DataFrame Libraries (Experimental)

How to install

Pip

pip install datafusion
# or
python -m pip install datafusion

Conda

conda install -c conda-forge datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'33.0.0'

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by PyO3 and Maturin.

The Maturin tooling used in this workflow can be installed via either Conda or Pip. Both approaches offer the same experience; they are provided only to suit developer preference. Bootstrap instructions for each follow.

Bootstrap (Conda):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# create the conda environment for dev
conda env create -f ./conda/environments/datafusion-dev.yaml -n datafusion-dev
# activate the conda environment
conda activate datafusion-dev

Bootstrap (Pip):

# fetch this repo
git clone git@github.com:apache/arrow-datafusion-python.git
# prepare development environment (used to build wheel / install in development)
python3 -m venv venv
# activate the venv
source venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install dependencies (the pinned requirements file targets Python 3.10)
python -m pip install -r requirements-310.txt

The tests rely on test data in git submodules.

git submodule init
git submodule update

Whenever the Rust code changes (your changes or changes pulled in via git pull), rebuild and re-run the tests:

# make sure you activate the venv using "source venv/bin/activate" first
maturin develop
python -m pytest

Running & Installing pre-commit hooks

arrow-datafusion-python uses pre-commit to assist developers with code linting and to reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional, but it helps keep PRs clean and concise.

Our pre-commit hooks can be installed by running pre-commit install, which installs the configuration from your ARROW_DATAFUSION_PYTHON_ROOT/.github directory. The hooks then run each time you commit and abort the commit if an offending lint is found, so you can fix issues locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, by running:

pre-commit run --all-files

How to update dependencies

To change test dependencies, edit requirements.in and run:

# install pip-tools (only needed once); consider running inside the venv
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt

To update dependencies, run with -U:

python -m piptools compile -U --generate-hashes -o requirements-310.txt

More details on this workflow are available in the pip-tools documentation.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

  • datafusion-33.0.0.tar.gz (106.7 kB): Source

Built Distributions

  • datafusion-33.0.0-cp38-abi3-win_amd64.whl (15.8 MB): CPython 3.8+, Windows x86-64
  • datafusion-33.0.0-cp38-abi3-manylinux_2_28_aarch64.whl (16.5 MB): CPython 3.8+, manylinux (glibc 2.28+) ARM64
  • datafusion-33.0.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.2 MB): CPython 3.8+, manylinux (glibc 2.17+) x86-64
  • datafusion-33.0.0-cp38-abi3-macosx_11_0_arm64.whl (13.5 MB): CPython 3.8+, macOS 11.0+ ARM64
  • datafusion-33.0.0-cp38-abi3-macosx_10_12_x86_64.whl (14.6 MB): CPython 3.8+, macOS 10.12+ x86-64

File details

Details for the file datafusion-33.0.0.tar.gz.

File metadata

  • Download URL: datafusion-33.0.0.tar.gz
  • Upload date:
  • Size: 106.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for datafusion-33.0.0.tar.gz
Algorithm Hash digest
SHA256 60e42fc872e1936fb58871e73c863fe47dc7026a01e5e9ae06aa3b8a1586e149
MD5 cc877f25439d3738eb78083f5e0793f7
BLAKE2b-256 5cb52de94a0cf33a9c37e8f8ac9e2e8e4679bc500fa38facede860a50fb7720a


File details

Details for the file datafusion-33.0.0-cp38-abi3-win_amd64.whl.

File metadata

  • Download URL: datafusion-33.0.0-cp38-abi3-win_amd64.whl
  • Upload date:
  • Size: 15.8 MB
  • Tags: CPython 3.8+, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for datafusion-33.0.0-cp38-abi3-win_amd64.whl
Algorithm Hash digest
SHA256 5c3514d5feb8ce84151bacc89c96f7eb0ffce02aa46159fd30abe39de56eb994
MD5 775d30950d602ea820205d18830a5440
BLAKE2b-256 c6691035c86c3019075d5a3500029bdecdcb347c6df906dadab00fb93ef94251


File details

Details for the file datafusion-33.0.0-cp38-abi3-manylinux_2_28_aarch64.whl.


File hashes

Hashes for datafusion-33.0.0-cp38-abi3-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 76db1e6dad256ec441a4634d782a3b9d0a744f72477856fa78278e56e433bcb4
MD5 38eba3d895ef62ab06c2eec6f235ffe1
BLAKE2b-256 f936b6c8740e78ec84d2dc6e983bcd51747f1ad6da1a14d43c7036d4dce682c5


File details

Details for the file datafusion-33.0.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.


File hashes

Hashes for datafusion-33.0.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 c563ad09c22a326e715442721c83cda77b724d50f6bf42271db95648d7e73f2b
MD5 b93e871717f3964fbbdb3cbbf2555e06
BLAKE2b-256 8fcc7233f5b13f1c8f0879a30a546845ce2e8b22e73ea201ea5f0b40af98f06e


File details

Details for the file datafusion-33.0.0-cp38-abi3-macosx_11_0_arm64.whl.


File hashes

Hashes for datafusion-33.0.0-cp38-abi3-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 4e929bcdb730efa187f35c6545ff4e1a41048696746dbc81e3eeaf463b8787eb
MD5 28a7a9fa0de08fd052f5750415de905f
BLAKE2b-256 661135fe1522ebd1df570d936562d13dee17cb410315be356e6a3afec83cc745


File details

Details for the file datafusion-33.0.0-cp38-abi3-macosx_10_12_x86_64.whl.


File hashes

Hashes for datafusion-33.0.0-cp38-abi3-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 183926f981dedc00b9dc0d591ff644092b399b105ff3037ad1b8343df0312402
MD5 9aa394b9b0501b4eaba65faa77f1bada
BLAKE2b-256 a13d5ce904afc3192c3f773f0a72486df81f28bf5349b181a8507a80b488d708

