Read the data of an ODBC data source as a sequence of Apache Arrow record batches.
Project description
arrow-odbc-py
Fill Apache Arrow arrays from ODBC data sources. This package is built on top of the pyarrow Python package and the arrow-odbc Rust crate and enables you to read the data of an ODBC data source as a sequence of Apache Arrow record batches.
- Fast. Makes efficient use of ODBC bulk reads and writes to lower IO overhead.
- Flexible. Query any ODBC data source you have a driver for. MySQL, MS SQL, Excel, ...
- Portable. Easy to install and update dependencies. No binary dependency on specific implementations of the Python interpreter, Arrow or the ODBC driver manager.
About Arrow
Apache Arrow defines a language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware like CPUs and GPUs. The Arrow memory format also supports zero-copy reads for lightning-fast data access without serialization overhead.
About ODBC
ODBC (Open DataBase Connectivity) is a standard which enables you to access data from a wide variety of data sources using SQL.
Usage
Query
from arrow_odbc import read_arrow_batches_from_odbc

connection_string = "Driver={ODBC Driver 17 for SQL Server};Server=localhost;"

reader = read_arrow_batches_from_odbc(
    query="SELECT * FROM MyTable WHERE a=?",
    connection_string=connection_string,
    parameters=["I'm a positional query parameter"],
    user="SA",
    password="My@Test@Password",
)

# Trade memory for speed. For the price of an additional transit buffer and a native system thread
# we now fetch batches concurrently with our application logic. Just remove this line if you want to
# fetch sequentially in your main application thread.
reader.fetch_concurrently()

for batch in reader:
    # Process Arrow batches
    df = batch.to_pandas()
    # ...
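If the whole result set fits into memory, the batches can also be collected into a single pyarrow Table. A minimal sketch, reusing the connection_string and MyTable from above; note that it drains the reader eagerly and gives up the streaming behaviour:

import pyarrow as pa
from arrow_odbc import read_arrow_batches_from_odbc

reader = read_arrow_batches_from_odbc(
    query="SELECT * FROM MyTable",
    connection_string=connection_string,
    user="SA",
    password="My@Test@Password",
)
batches = list(reader)                  # drains the reader into memory
table = pa.Table.from_batches(batches)  # requires at least one batch
df = table.to_pandas()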
Insert
from arrow_odbc import insert_into_table
import pyarrow as pa
import pandas

connection_string = "Driver={ODBC Driver 17 for SQL Server};Server=localhost;"

def dataframe_to_table(df):
    table = pa.Table.from_pandas(df)
    reader = pa.RecordBatchReader.from_batches(table.schema, table.to_batches())
    insert_into_table(
        connection_string=connection_string,
        user="SA",
        password="My@Test@Password",
        chunk_size=1000,
        table="MyTable",
        reader=reader,
    )
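A hypothetical call of the helper above; the DataFrame columns a and b are placeholders and must match the columns of the target table MyTable:

df = pandas.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
dataframe_to_table(df)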
Installation
Installing ODBC driver manager
The provided wheels dynamically link against the driver manager, which must be provided by the system.
Windows
Nothing to do. ODBC driver manager is preinstalled.
Ubuntu
sudo apt-get install unixodbc-dev
OS-X
You can use homebrew to install UnixODBC
brew install unixodbc
Installing the wheel
This package has been designed to be easily deployable, so it provides a prebuilt manylinux wheel which is independent of the specific version of your Python interpreter and the specific Arrow version you want to use. It will dynamically link against the ODBC driver manager provided by your system.
Wheels have been uploaded to PyPI and can be installed using pip. The wheels (including the manylinux wheel) link against your system's ODBC driver manager at runtime. If there are no prebuilt wheels for your platform, you can build the wheel from source. For this, the Rust toolchain must be installed.
pip install arrow-odbc
arrow-odbc utilizes cffi and the Arrow C interface to glue Rust and Python code together. Therefore the wheel does not need to be built against a precise version of either Python or Arrow.
Installing with conda
conda install -c conda-forge arrow-odbc
Thanks to @timkpaine for maintaining the recipe!
Building wheel from source
There is no ready-made wheel for the platform you want to target? Do not worry, you can probably build it from source.
- To build from source you need to install the Rust toolchain. Installation instructions can be found here: https://www.rust-lang.org/tools/install
- Install the ODBC driver manager. See above.
- Build the wheel:

python -m pip install build
python -m build
Building wheel from source on Mac ARM
Following the above instructions on an ARM Mac will lead to the build process erroring out with a message that the odbc library can not be found for linking. This is because brew chooses to install the library into a different folder on this platform. One way to fix this is to create a symbolic link.

sudo ln -s /opt/homebrew/lib /Users/your_user_name/lib

With this additional step cargo from the Rust toolchain is able to find the odbc library and link against it. Alternatively you can install unixODBC from source using make.
Matching of ODBC to Arrow types when querying
ODBC | Arrow |
---|---|
Numeric(p <= 38) | Decimal128 |
Decimal(p <= 38, s >= 0) | Decimal128 |
Integer | Int32 |
SmallInt | Int16 |
Real | Float32 |
Float(p <=24) | Float32 |
Double | Float64 |
Float(p > 24) | Float64 |
Date | Date32 |
LongVarbinary | Binary |
Timestamp(p = 0) | TimestampSecond |
Timestamp(p: 1..3) | TimestampMilliSecond |
Timestamp(p: 4..6) | TimestampMicroSecond |
Timestamp(p >= 7 ) | TimestampNanoSecond |
BigInt | Int64 |
TinyInt | Int8 |
Bit | Boolean |
Varbinary | Binary |
Binary | FixedSizedBinary |
All others | Utf8 |
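The mapping above is visible in the schema of the returned batches. A minimal sketch, reusing the connection_string from the usage section; MyTable and its columns are placeholders:

from arrow_odbc import read_arrow_batches_from_odbc

reader = read_arrow_batches_from_odbc(
    query="SELECT * FROM MyTable",
    connection_string=connection_string,
    user="SA",
    password="My@Test@Password",
)
for batch in reader:
    # An ODBC Integer column arrives as int32, a Date column as date32, etc.
    print(batch.schema)
    break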
Matching of Arrow to ODBC types when inserting
Arrow | ODBC |
---|---|
Utf8 | VarChar |
Decimal128(p, s = 0) | VarChar(p + 1) |
Decimal128(p, s != 0) | VarChar(p + 2) |
Decimal128(p, s < 0) | VarChar(p - s + 1) |
Decimal256(p, s = 0) | VarChar(p + 1) |
Decimal256(p, s != 0) | VarChar(p + 2) |
Decimal256(p, s < 0) | VarChar(p - s + 1) |
Int8 | TinyInt |
Int16 | SmallInt |
Int32 | Integer |
Int64 | BigInt |
Float16 | Real |
Float32 | Real |
Float64 | Double |
Timestamp s | Timestamp(7) |
Timestamp ms | Timestamp(7) |
Timestamp us | Timestamp(7) |
Timestamp ns | Timestamp(7) |
Date32 | Date |
Date64 | Date |
Time32 s | Time |
Time32 ms | VarChar(12) |
Time64 us | VarChar(15) |
Time64 ns | VarChar(16) |
Binary | Varbinary |
FixedBinary(l) | Varbinary(l) |
All others | Unsupported |
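Conversely, the Arrow schema of the reader passed to insert_into_table determines the ODBC types according to the table above. A minimal sketch, again with MyTable and the connection_string from the usage section as placeholders:

import pyarrow as pa
from arrow_odbc import insert_into_table

schema = pa.schema(
    [
        ("id", pa.int32()),                  # inserted as Integer
        ("name", pa.utf8()),                 # inserted as VarChar
        ("created_at", pa.timestamp("us")),  # inserted as Timestamp(7)
    ]
)
batch = pa.RecordBatch.from_pydict(
    {"id": [1, 2], "name": ["alice", "bob"], "created_at": [None, None]},
    schema=schema,
)
insert_into_table(
    connection_string=connection_string,
    user="SA",
    password="My@Test@Password",
    chunk_size=1000,
    table="MyTable",
    reader=pa.RecordBatchReader.from_batches(schema, [batch]),
)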
Comparison to other Python ODBC bindings
- pyodbc - General purpose ODBC Python bindings. In contrast, arrow-odbc is specifically concerned with bulk reads and writes to Arrow arrays.
- turbodbc - Complies with the Python Database API Specification 2.0 (PEP 249), which arrow-odbc does not aim to do. Like arrow-odbc, bulk reads and writes are the strong point of turbodbc. turbodbc has more system dependencies, which can make it cumbersome to install if not using conda. turbodbc is built against the C++ implementation of Arrow, which implies it is only compatible with a matching version of pyarrow.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distributions
File details
Details for the file arrow_odbc-2.1.4.tar.gz.
File metadata
- Download URL: arrow_odbc-2.1.4.tar.gz
- Upload date:
- Size: 51.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.12.1
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 7b85815beb8935cd17a177803226b7b2e764113fb415cf020cffe04155564148 |
MD5 | 639bc529e3f377c20e626bcef9c2575a |
BLAKE2b-256 | 1fa023ba9b2139d539448feed8279196b1d82360ede55960fd18d335319fa990 |
File details
Details for the file arrow_odbc-2.1.4-py3-none-win_amd64.whl.
File metadata
- Download URL: arrow_odbc-2.1.4-py3-none-win_amd64.whl
- Upload date:
- Size: 424.8 kB
- Tags: Python 3, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.12.1
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 7a87abeadad1b59417a9a91b6bc0d041a79fe8fce40251bbffdb290e27f630cd |
MD5 | 362b3512373f45a76b30a7c6fee70a37 |
BLAKE2b-256 | 80750b47f4e3bc40453646ff312766fb2d976fbefecd24beebdd618a2e4e488c |
File details
Details for the file arrow_odbc-2.1.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: arrow_odbc-2.1.4-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 1.1 MB
- Tags: Python 3, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.12
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | d19176d27f60746951bb43faa2e8a0de3805f73f307120bdffb5c6274ef3f015 |
MD5 | f69a50313dc77900594fe74249d2eae6 |
BLAKE2b-256 | 7f991e04675ea3fd5012acc31117455bcfd9e46c1c0dbaa4ae332ca92e2bf025 |
File details
Details for the file arrow_odbc-2.1.4-py3-none-macosx_10_12_x86_64.whl.
File metadata
- Download URL: arrow_odbc-2.1.4-py3-none-macosx_10_12_x86_64.whl
- Upload date:
- Size: 589.3 kB
- Tags: Python 3, macOS 10.12+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.12.1
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | f61c1b355c560fdf4b25b535c1441a65b3fd7c6e7dc96f9a91640c4d6b8f336d |
MD5 | d3daf477e2b52642fe89eaa4f7c79fc1 |
BLAKE2b-256 | c52119785c606703f5a657f2e50aaf07bcfcb1d2c3fb123a3419e44c692fbab3 |