DataFusion in Python

This is a Python library that binds to the Apache Arrow in-memory query engine DataFusion.

DataFusion's Python bindings can be used as a foundation for building new data systems in Python. Here are some examples:

  • Dask SQL uses DataFusion's Python bindings for SQL parsing, query planning, and logical plan optimizations, and then transpiles the logical plan to Dask operations for execution.
  • DataFusion Ballista is a distributed SQL query engine that extends DataFusion's Python bindings for distributed use cases.
  • DataFusion Ray is another distributed query engine that uses DataFusion's Python bindings.

Features

  • Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
  • Queries are optimized using DataFusion's query optimizer.
  • Execute user-defined Python code from SQL (see the sketch after this list).
  • Exchange data with Pandas and other DataFrame libraries that support PyArrow.
  • Serialize and deserialize query plans in Substrait format.
  • Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
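
For example, the user-defined code feature can be as simple as wrapping a Python function that operates on Arrow arrays and calling it from SQL. A minimal sketch (the sample table, column, and doubling function below are illustrative, not part of the library):

import pyarrow as pa
import pyarrow.compute as pc
from datafusion import SessionContext, udf

ctx = SessionContext()

# illustrative sample table
ctx.from_pydict({"a": [1, 2, 3]}, "my_table")

# wrap a Python function that operates on Arrow arrays as a scalar UDF
def double_array(arr: pa.Array) -> pa.Array:
    return pc.multiply(arr, 2)

double_udf = udf(double_array, [pa.int64()], pa.int64(), "immutable", name="double")
ctx.register_udf(double_udf)

# call the Python UDF from SQL
ctx.sql("SELECT a, double(a) AS doubled FROM my_table").show()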

For tips on tuning parallelism, see Maximizing CPU Usage in the configuration guide.

Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.

The Parquet file used in this example (yellow_tripdata_2021-01.parquet) can be downloaded from the New York City Taxi & Limousine Commission trip record data page.

from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')

This produces the following chart:

[Chart: trip count by number of passengers]

Registering a DataFrame as a View

You can use SessionContext's register_view method to convert a DataFrame into a view and register it with the context.

from datafusion import SessionContext, col, literal

# Create a DataFusion context
ctx = SessionContext()

# Create sample data
data = {"a": [1, 2, 3, 4, 5], "b": [10, 20, 30, 40, 50]}

# Create a DataFrame from the dictionary
df = ctx.from_pydict(data, "my_table")

# Filter the DataFrame (for example, keep rows where a > 2)
df_filtered = df.filter(col("a") > literal(2))

# Register the dataframe as a view with the context
ctx.register_view("view1", df_filtered)

# Now run a SQL query against the registered view
df_view = ctx.sql("SELECT * FROM view1")

# Collect the results
results = df_view.collect()

# Convert results to a list of dictionaries for display
result_dicts = [batch.to_pydict() for batch in results]

print(result_dicts)

This will output:

[{'a': [3, 4, 5], 'b': [30, 40, 50]}]

Configuration

It is possible to configure both runtime settings (memory and disk) and session configuration options when creating a context.

from datafusion import RuntimeEnvBuilder, SessionConfig, SessionContext

runtime = (
    RuntimeEnvBuilder()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)

Refer to the API documentation for more information.

Printing the context will show the current configuration settings.

print(ctx)
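
Since with_information_schema(True) is set in the configuration above, individual settings can also be inspected from SQL. A small sketch:

# inspect a single setting via SQL (requires information_schema to be enabled)
ctx.sql("SHOW datafusion.execution.target_partitions").show()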

Extensions

For information about how to extend DataFusion Python, please see the extensions page of the online documentation.

More Examples

See the examples for more information:

  • Executing Queries with DataFusion
  • Running User-Defined Python Code
  • Substrait Support (see the sketch below)
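
As a taste of the Substrait support listed above, the following minimal sketch round-trips a SQL query through the datafusion.substrait module (the sample table is illustrative; consult the Substrait Support documentation for the authoritative API):

from datafusion import SessionContext
from datafusion import substrait as ss

ctx = SessionContext()
ctx.from_pydict({"a": [1, 2, 3]}, "my_table")

# serialize a SQL query to Substrait protobuf bytes
plan_bytes = ss.Serde.serialize_bytes("SELECT a FROM my_table", ctx)

# deserialize the bytes back into a logical plan and execute it
substrait_plan = ss.Serde.deserialize_bytes(plan_bytes)
logical_plan = ss.Consumer.from_substrait_plan(ctx, substrait_plan)
df = ctx.create_dataframe_from_logical_plan(logical_plan)
df.show()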

How to install

uv

uv add datafusion

Pip

pip install datafusion
# or
python -m pip install datafusion

Conda

conda install -c conda-forge datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'53.0.0'

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by PyO3 and Maturin. The Maturin tooling used in this workflow can be installed via either uv or pip; both approaches offer the same experience, but uv is recommended since it is significantly faster than pip.

Currently, either protobuf or cmake must be installed for protobuf support.

Bootstrap (uv):

By default, uv will attempt to build the datafusion Python package. For development we prefer to build manually, which means that when creating your virtual environment with uv sync you need to pass the additional flag --no-install-package datafusion, and uv run commands need the additional parameter --no-project.

# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# cd to the repo root
cd datafusion-python/
# create the virtual environment
uv sync --dev --no-install-package datafusion
# activate the environment
source .venv/bin/activate

Bootstrap (pip):

# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# cd to the repo root
cd datafusion-python/
# prepare development environment (used to build wheel / install in development)
python3 -m venv .venv
# activate the venv
source .venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install build and test dependencies (pip cannot read pyproject.toml via -r)
python -m pip install maturin pytest

The tests rely on test data in git submodules.

git submodule update --init

Whenever rust code changes (your changes or via git pull):

# make sure you activate the venv using "source .venv/bin/activate" first
maturin develop --uv
python -m pytest

Alternatively if you are using uv you can do the following without needing to activate the virtual environment:

uv run --no-project maturin develop --uv
uv run --no-project pytest

To run the FFI tests within the examples folder, after you have built datafusion-python with the previous commands:

cd examples/datafusion-ffi-example
uv run --no-project maturin develop --uv
uv run --no-project pytest python/tests/_test_*py

Running & Installing pre-commit hooks

datafusion-python uses pre-commit to assist developers with code linting and to reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional, but they are helpful for keeping PRs clean and concise.

Our pre-commit hooks can be installed by running pre-commit install, which installs the configured hooks into your local repository. They then run each time you commit, aborting the commit if an offending lint is found so that you can make changes locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, by running pre-commit run --all-files.

NOTE: the current pre-commit hooks require Docker and cmake. See the note on protobuf above.

Running linters without using pre-commit

There are scripts in ci/scripts for running Rust and Python linters.

./ci/scripts/python_lint.sh
./ci/scripts/rust_clippy.sh
./ci/scripts/rust_fmt.sh
./ci/scripts/rust_toml_fmt.sh

Checking Upstream DataFusion Coverage

This project includes an AI agent skill for auditing which features from the upstream Apache DataFusion Rust library are not yet exposed in these Python bindings. This is useful when adding missing functions, auditing API coverage, or ensuring parity with upstream.

The skill accepts an optional area argument:

  • scalar functions
  • aggregate functions
  • window functions
  • dataframe
  • session context
  • ffi types
  • all

If no argument is provided, it defaults to checking all areas. The skill will fetch the upstream DataFusion documentation, compare it against the functions and methods exposed in this project, and produce a coverage report listing what is currently exposed and what is missing.

The skill definition lives in .ai/skills/check-upstream/SKILL.md and follows the Agent Skills open standard. It can be used by any AI coding agent that supports skill discovery, or followed manually.

How to update dependencies

To change test dependencies, edit pyproject.toml and run:

uv sync --dev --no-install-package datafusion

