
Query Xarray with SQL.

Project description

xarray-sql

Query Xarray with SQL


What is this?

This is an experiment to provide a SQL interface for raster data.

import xarray as xr
import xarray_sql as qr

ds = xr.tutorial.open_dataset('air_temperature')

# The same as a dask-sql Context; i.e. an Apache DataFusion Context.
c = qr.Context(ds)
c.create_table('air', ds, chunks=dict(time=24))

df = c.sql('''
  SELECT
    "lat", "lon", AVG("air") AS air_mean
  FROM
    "air"
  GROUP BY
    "lat", "lon"
''')

# A table of the average temperature for each location across time.
df.compute()

# Alternatively, you can just create the DataFrame from the Dataset:
df = qr.read_xarray(ds)
df.head()

Succinctly, we "pivot" Xarray Datasets to treat them like tables so we can run SQL queries against them.

Why build this?

A few reasons:

  • Even though SQL is the lingua franca of data, scientific datasets are often inaccessible to non-scientists.
  • Joining tabular data with raster data is common yet difficult. It could be easy.
  • There are many cloud-native, Xarray-openable datasets, from Google Earth Engine to Pangeo Forge. Wouldn’t it be great if these were also SQL-accessible? How can the bridge be built with minimal effort?

This is a light-weight way to prove the value of the interface.

The larger goal is to explore the hypothesis that Pangeo is a scientific database. Here, xarray-sql can be thought of as a missing DB front end.

How does it work?

All chunks in an Xarray Dataset are transformed into a Dask DataFrame via from_map() and to_dataframe(). For SQL support, we just use dask-sql. That's it!
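
For intuition, here's a rough sketch of that pipeline in plain xarray and dask.dataframe. This is not the actual xarray-sql internals; the chunk_regions and region_to_frame helpers are illustrative names only.

import dask.dataframe as dd
import xarray as xr

def chunk_regions(ds):
  # Yield one slice-based selection per Dask chunk along 'time'
  # (assumes the Dataset was chunked, e.g. with ds.chunk({'time': 24})).
  start = 0
  for size in ds.chunks['time']:
    yield {'time': slice(start, start + size)}
    start += size

def region_to_frame(ds, region):
  # to_dataframe() pivots the selected chunk: dimensions become index
  # levels and data variables become columns; reset_index() flattens it.
  return ds.isel(region).to_dataframe().reset_index()

ds = xr.tutorial.open_dataset('air_temperature').chunk({'time': 24})

# One pandas DataFrame per chunk, lazily stitched into a Dask DataFrame.
# (A real implementation would likely pass an explicit meta= here.)
df = dd.from_map(lambda r: region_to_frame(ds, r), list(chunk_regions(ds)))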

Why does this work?

Underneath Xarray, Dask, and Pandas, there are NumPy arrays. These are paged in chunks and represented contiguously in memory; it is only metadata that breaks them up into ndarrays. to_dataframe() just changes this metadata (via a ravel()/reshape()) back into a column amenable to a DataFrame.

There is added overhead from duplicating dimensions as columns, which we see as worth the convenience of DataFrames.
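
To make that concrete, here's a small standalone illustration with plain xarray and NumPy (no xarray-sql involved); the toy Dataset is made up for the example.

import numpy as np
import xarray as xr

ds = xr.Dataset(
    {'air': (('time', 'lat', 'lon'),
             np.arange(2 * 3 * 4, dtype='float32').reshape(2, 3, 4))},
    coords={'time': [0, 1], 'lat': [10.0, 20.0, 30.0], 'lon': [0.0, 1.0, 2.0, 3.0]},
)

# to_dataframe() ravels the array into one column; the dimensions become
# (repeated) index levels, which reset_index() turns into ordinary columns.
df = ds.to_dataframe(dim_order=['time', 'lat', 'lon']).reset_index()

print(df.shape)          # (24, 4): one row per (time, lat, lon) element
print(list(df.columns))  # ['time', 'lat', 'lon', 'air']
print(np.array_equal(df['air'].to_numpy(), ds['air'].values.ravel()))  # True

# The convenience costs memory: the raw array holds 24 float32 values (96 bytes),
# while the DataFrame repeats every dimension value on every row.
print(ds['air'].values.nbytes)                  # 96
print(int(df.memory_usage(index=False).sum()))  # several times larger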

What are the current limitations?

Dask doesn't support MultiIndexes (dask/dask#1493). If it did, I suspect performance for many types of queries would greatly improve.

Further, while this does play well with dask-geopandas (for geospatial query support), certain types of operations don't quite match standard geopandas. Spatial joins come to mind as a killer feature, but only inner joins are supported (geopandas/dask-geopandas#72).
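
For example, a spatial join over the pivoted DataFrame might look roughly like the sketch below. The regions GeoDataFrame and the lat/lon column names are assumptions here, not something xarray-sql provides.

import dask_geopandas

# df is the Dask DataFrame from qr.read_xarray(ds) above, with 'lat'/'lon' columns;
# regions is assumed to be a dask-geopandas GeoDataFrame of polygons you already have.
points = dask_geopandas.from_dask_dataframe(
    df, geometry=dask_geopandas.points_from_xy(df, x='lon', y='lat')
)

# Only inner spatial joins are supported today (geopandas/dask-geopandas#72).
joined = dask_geopandas.sjoin(points, regions, how='inner', predicate='within')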

What would a deeper integration look like?

I have a few ideas so far. One approach involves applying operations directly on Xarray Datasets. This approach is being pursued here, as xql.

Deeper still: I was thinking we could make a virtual filesystem for parquet that would internally map to Zarr. Raster-backed virtual parquet would open up integrations to numerous tools like dask, pyarrow, duckdb, and BigQuery. More thoughts on this in #4.

Sponsors & Contributors

I want to give a special thanks to the following folks and institutions:

  • Pramod Gupta and the Anthromet Team at Google Research for the problem formulation and design inspiration.
  • Jake Wall and AI2/Ecoscope for compute resources and key use cases.
  • Charles Stern, Stephan Hoyer, and Qiusheng Wu for the early review and discussion of this project.

License

Copyright 2024 Alexander Merose

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Some sources are re-distributed from Google LLC via https://github.com/google/Xee (also Apache-2.0 License) with and without modification (specifically, GitHub Actions workflows). These files are subject to the original copyright; they include the original license header comment as well as a note to indicate modifications (when appropriate).

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

xarray_sql-0.0.2.tar.gz (1.4 MB)

Uploaded Source

Built Distribution

xarray_sql-0.0.2-py3-none-any.whl (13.0 kB)

Uploaded Python 3

File details

Details for the file xarray_sql-0.0.2.tar.gz.

File metadata

  • Download URL: xarray_sql-0.0.2.tar.gz
  • Upload date:
  • Size: 1.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.18

File hashes

Hashes for xarray_sql-0.0.2.tar.gz:

  • SHA256: 9ab5cdb646eb0497a347113ca836584200b760bc7ee2dc88bf782fb8d76ba75a
  • MD5: a380cbb4bad4e60d72b121365b9efbb2
  • BLAKE2b-256: e72109c09f1e3bedfa1daa96eabf38e0a569c78aeaa1d45180c6c383aef351d4


File details

Details for the file xarray_sql-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: xarray_sql-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 13.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.9.18

File hashes

Hashes for xarray_sql-0.0.2-py3-none-any.whl:

  • SHA256: 8a98bdd2e090b68edc2122e157b28165e10d5cf55268ee077ae339723708090b
  • MD5: 1c1844692d8b80c2b438aaacfae7b5c2
  • BLAKE2b-256: 7c339057e2ddb0ec4820576d42615780c4eab6664f4ada12d63b14fc8691a075

