
DataFusion in Python


This is a Python library that binds to the Apache Arrow-based in-memory query engine DataFusion.

DataFusion's Python bindings can be used as a foundation for building new data systems in Python. Here are some examples:

  • Dask SQL uses DataFusion's Python bindings for SQL parsing, query planning, and logical plan optimizations, and then transpiles the logical plan to Dask operations for execution.
  • DataFusion Ballista is a distributed SQL query engine that extends DataFusion's Python bindings for distributed use cases.
  • DataFusion Ray is another distributed query engine that uses DataFusion's Python bindings.

Features

  • Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
  • Queries are optimized using DataFusion's query optimizer.
  • Execute user-defined Python code from SQL (a short sketch follows this list).
  • Exchange data with Pandas and other DataFrame libraries that support PyArrow.
  • Serialize and deserialize query plans in Substrait format.
  • Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
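
The user-defined function support can be illustrated with a small sketch. The udf wrapper, register_udf, from_pydict, and register_view calls below follow the public API, but the specific input types, return type, volatility string, and SQL-visible name are assumptions chosen for illustration rather than a canonical recipe:

import pyarrow as pa
import pyarrow.compute as pc
from datafusion import SessionContext, udf

ctx = SessionContext()

# Scalar UDFs receive and return PyArrow arrays
def double(arr: pa.Array) -> pa.Array:
    return pc.multiply(arr, 2)

# Wrap the Python function so DataFusion can call it, then register it with the context
# (types, volatility, and the SQL name "double" are illustrative choices)
double_udf = udf(double, [pa.int64()], pa.int64(), "immutable", "double")
ctx.register_udf(double_udf)

# Register some sample data as a view so the UDF can be called from SQL
ctx.register_view("t", ctx.from_pydict({"a": [1, 2, 3]}))

ctx.sql("SELECT a, double(a) AS doubled FROM t").show()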

For tips on tuning parallelism, see Maximizing CPU Usage in the configuration guide.

Example Usage

The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results in a Pandas DataFrame, and then plotting a chart.

The Parquet file used in this example (yellow_tripdata_2021-01.parquet) contains NYC yellow taxi trip data and can be downloaded from the NYC Taxi & Limousine Commission trip record data page.

from datafusion import SessionContext

# Create a DataFusion context
ctx = SessionContext()

# Register table with context
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Execute SQL
df = ctx.sql("select passenger_count, count(*) "
             "from taxi "
             "where passenger_count is not null "
             "group by passenger_count "
             "order by passenger_count")

# convert to Pandas
pandas_df = df.to_pandas()

# create a chart
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')

This produces the following chart:

[Chart: Trip Count by Number of Passengers]
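
The same aggregation can also be written with the DataFrame API instead of SQL. The sketch below is an illustration rather than a canonical recipe; it assumes the count aggregate in the functions module and the is_not_null / sort expression methods behave as described in the API documentation:

from datafusion import SessionContext, col, functions as f

ctx = SessionContext()
ctx.register_parquet("taxi", "yellow_tripdata_2021-01.parquet")

# Same query as above: count trips per passenger_count, excluding nulls
df = (
    ctx.table("taxi")
    .filter(col("passenger_count").is_not_null())
    .aggregate(
        [col("passenger_count")],
        [f.count(col("passenger_count")).alias("trip_count")],
    )
    .sort(col("passenger_count").sort(ascending=True))
)

pandas_df = df.to_pandas()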

Registering a DataFrame as a View

You can use SessionContext's register_view method to convert a DataFrame into a view and register it with the context.

from datafusion import SessionContext, col, literal

# Create a DataFusion context
ctx = SessionContext()

# Create sample data
data = {"a": [1, 2, 3, 4, 5], "b": [10, 20, 30, 40, 50]}

# Create a DataFrame from the dictionary
df = ctx.from_pydict(data, "my_table")

# Filter the DataFrame (for example, keep rows where a > 2)
df_filtered = df.filter(col("a") > literal(2))

# Register the dataframe as a view with the context
ctx.register_view("view1", df_filtered)

# Now run a SQL query against the registered view
df_view = ctx.sql("SELECT * FROM view1")

# Collect the results
results = df_view.collect()

# Convert results to a list of dictionaries for display
result_dicts = [batch.to_pydict() for batch in results]

print(result_dicts)

This will output:

[{'a': [3, 4, 5], 'b': [30, 40, 50]}]
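
DataFrames do not have to start from a Python dictionary. Because DataFusion exchanges data through PyArrow, an existing Pandas DataFrame can be brought into the context as well; a minimal sketch, assuming SessionContext.from_pandas behaves as described in the API documentation:

import pandas as pd
from datafusion import SessionContext, col

ctx = SessionContext()

# An ordinary Pandas DataFrame
pandas_df = pd.DataFrame({"a": [1, 2, 3], "b": [10.0, 20.0, 30.0]})

# Convert it to a DataFusion DataFrame (the data is passed through PyArrow)
df = ctx.from_pandas(pandas_df)

# Run DataFusion operations, then convert back to Pandas
result = df.select(col("a"), col("b")).to_pandas()
print(result)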

Configuration

Both runtime settings (memory and disk) and session configuration settings can be specified when creating a context.

from datafusion import RuntimeEnvBuilder, SessionConfig, SessionContext

runtime = (
    RuntimeEnvBuilder()
    .with_disk_manager_os()
    .with_fair_spill_pool(10000000)
)
config = (
    SessionConfig()
    .with_create_default_catalog_and_schema(True)
    .with_default_catalog_and_schema("foo", "bar")
    .with_target_partitions(8)
    .with_information_schema(True)
    .with_repartition_joins(False)
    .with_repartition_aggregations(False)
    .with_repartition_windows(False)
    .with_parquet_pruning(False)
    .set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)

Refer to the API documentation for more information.

Printing the context will show the current configuration settings.

print(ctx)

Extensions

For information about how to extend DataFusion Python, please see the extensions page of the online documentation.

More Examples

See the examples for more information, including:

  • Executing Queries with DataFusion
  • Running User-Defined Python Code
  • Substrait Support

How to install

uv

uv add datafusion

Pip

pip install datafusion
# or
python -m pip install datafusion

Conda

conda install -c conda-forge datafusion

You can verify the installation by running:

>>> import datafusion
>>> datafusion.__version__
'50.1.0'

How to develop

This assumes that you have Rust and Cargo installed. We use the workflow recommended by pyo3 and maturin. The maturin tooling used in this workflow can be installed via either uv or pip; both approaches offer the same experience, but uv is recommended since it is significantly faster than pip.

Currently, protobuf support requires either protobuf (for the protoc compiler) or cmake to be installed.

Bootstrap (uv):

By default, uv will attempt to build the datafusion Python package itself. For development we prefer to build it manually, which means that when creating your virtual environment with uv sync you need to pass the additional flag --no-install-package datafusion, and uv run commands need the additional parameter --no-project.

# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# cd to the repo root
cd datafusion-python/
# create the virtual environment
uv sync --dev --no-install-package datafusion
# activate the environment
source .venv/bin/activate

Bootstrap (pip):

# fetch this repo
git clone git@github.com:apache/datafusion-python.git
# cd to the repo root
cd datafusion-python/
# prepare development environment (used to build wheel / install in development)
python3 -m venv .venv
# activate the venv
source .venv/bin/activate
# update pip itself if necessary
python -m pip install -U pip
# install the build and test tooling (maturin builds the Rust extension, pytest runs the tests);
# additional test dependencies are listed in pyproject.toml
python -m pip install maturin pytest

The tests rely on test data in git submodules.

git submodule update --init

Whenever Rust code changes (your changes or via git pull):

# make sure you activate the venv using "source .venv/bin/activate" first
maturin develop --uv
python -m pytest

Alternatively, if you are using uv, you can do the following without needing to activate the virtual environment:

uv run --no-project maturin develop --uv
uv run --no-project pytest .

Running & Installing pre-commit hooks

datafusion-python takes advantage of pre-commit to assist developers with code linting to help reduce the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional for the developer but certainly helpful for keeping PRs clean and concise.

Our pre-commit hooks can be installed by running pre-commit install, which installs them as git hooks in your local checkout so they run each time you perform a commit. The commit fails if an offending lint is found, allowing you to make changes locally before pushing.

The pre-commit hooks can also be run ad hoc, without installing them, by simply running pre-commit run --all-files.
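
For reference, those two invocations are:

# install the hooks once; they then run automatically on every commit
pre-commit install

# or run all hooks ad hoc against the whole tree, without installing them
pre-commit run --all-files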

NOTE: the current pre-commit hooks require Docker and cmake. See the note on protobuf above.

Running linters without using pre-commit

There are scripts in ci/scripts for running Rust and Python linters.

./ci/scripts/python_lint.sh
./ci/scripts/rust_clippy.sh
./ci/scripts/rust_fmt.sh
./ci/scripts/rust_toml_fmt.sh

How to update dependencies

To change test dependencies, edit pyproject.toml and run:

uv sync --dev --no-install-package datafusion
