
Read the data of an ODBC data source as a sequence of Apache Arrow record batches.

Project description

arrow-odbc-py

Licence PyPI version Documentation Status

Fill Apache Arrow arrays from ODBC data sources. This package is built on top of the pyarrow Python package and the arrow-odbc Rust crate and enables you to read the data of an ODBC data source as a sequence of Apache Arrow record batches.

  • Fast. Makes efficient use of ODBC bulk reads and writes to lower IO overhead.
  • Flexible. Query any ODBC data source you have a driver for: MySQL, MS SQL, Excel, ...
  • Portable. Easy to install and update dependencies. No binary dependency on a specific implementation of the Python interpreter, Arrow, or the ODBC driver manager.

About Arrow

Apache Arrow defines a language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware like CPUs and GPUs. The Arrow memory format also supports zero-copy reads for lightning-fast data access without serialization overhead.

About ODBC

ODBC (Open DataBase Connectivity) is a standard which enables you to access data from a wide variety of data sources using SQL.

Usage

Query

from arrow_odbc import read_arrow_batches_from_odbc

connection_string = "Driver={ODBC Driver 18 for SQL Server};Server=localhost;TrustServerCertificate=yes;"

reader = read_arrow_batches_from_odbc(
    query="SELECT * FROM MyTable WHERE a=?",
    connection_string=connection_string,
    parameters=["I'm a positional query parameter"],
    user="SA",
    password="My@Test@Password",
)

for batch in reader:
    # Process arrow batches
    df = batch.to_pandas()
    # ...

Insert

from arrow_odbc import insert_into_table
import pyarrow as pa
import pandas


def dataframe_to_table(df):
    # Stream the DataFrame into the database in chunks of 1000 rows.
    # `connection_string` is the same as in the query example above.
    table = pa.Table.from_pandas(df)
    reader = pa.RecordBatchReader.from_batches(table.schema, table.to_batches())
    insert_into_table(
        connection_string=connection_string,
        user="SA",
        password="My@Test@Password",
        chunk_size=1000,
        table="MyTable",
        reader=reader,
    )

Installation

Installing ODBC driver manager

The provided wheels dynamically link against the driver manager, which must be provided by the system.

Windows

Nothing to do. The ODBC driver manager is preinstalled.

Ubuntu

sudo apt-get install unixodbc-dev

macOS

You can use Homebrew to install UnixODBC:

brew install unixodbc

Installing the wheel

This package has been designed to be easily deployable, so it provides a prebuilt manylinux wheel which is independent of the specific version of your Python interpreter and the specific Arrow version you want to use. It will dynamically link against the ODBC driver manager provided by your system.

Wheels have been uploaded to PyPI and can be installed using pip. The wheel (including the manylinux wheel) will link against your system's ODBC driver manager at runtime. If there are no prebuilt wheels for your platform, you can build the wheel from source. For this, the Rust toolchain must be installed.

pip install arrow-odbc

arrow-odbc utilizes cffi and the Arrow C Data Interface to glue Rust and Python code together. Therefore the wheel does not need to be built against a precise version of either Python or Arrow.

Installing with conda

conda install -c conda-forge arrow-odbc

Warning: The conda recipe is currently unmaintained, so to install the newest version you need to either install from source or use a wheel deployed via pip.

Building wheel from source

There is no ready-made wheel for the platform you want to target? Do not worry, you can probably build it from source.

  • To build from source you need to install the Rust toolchain. Installation instructions can be found here: https://www.rust-lang.org/tools/install

  • Install ODBC driver manager. See above.

  • Build the wheel

    python -m pip install build
    python -m build
    

Matching of ODBC to Arrow types when querying

ODBC                      Arrow
------------------------  --------------------
Numeric(p <= 38)          Decimal128
Decimal(p <= 38, s >= 0)  Decimal128
Integer                   Int32
SmallInt                  Int16
Real                      Float32
Float(p <= 24)            Float32
Double                    Float64
Float(p > 24)             Float64
Date                      Date32
LongVarbinary             Binary
Timestamp(p = 0)          TimestampSecond
Timestamp(p: 1..3)        TimestampMilliSecond
Timestamp(p: 4..6)        TimestampMicroSecond
Timestamp(p >= 7)         TimestampNanoSecond
BigInt                    Int64
TinyInt Signed            Int8
TinyInt Unsigned          UInt8
Bit                       Boolean
Varbinary                 Binary
Binary                    FixedSizedBinary
All others                Utf8
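The timestamp rows of this table key off the ODBC precision p. As a quick illustration, the chosen Arrow time unit can be written as a small helper (hypothetical, not part of the library):

```python
def timestamp_unit(p: int) -> str:
    # Arrow time unit picked for an ODBC Timestamp with precision p,
    # per the mapping table above: 0 fractional digits fit in seconds,
    # 1..3 in milliseconds, 4..6 in microseconds, 7 and above in nanoseconds.
    if p == 0:
        return "s"
    if p <= 3:
        return "ms"
    if p <= 6:
        return "us"
    return "ns"
```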

Matching of Arrow to ODBC types when inserting

Arrow                   ODBC
----------------------  ------------------
Utf8                    VarChar
Decimal128(p, s = 0)    VarChar(p + 1)
Decimal128(p, s != 0)   VarChar(p + 2)
Decimal128(p, s < 0)    VarChar(p - s + 1)
Decimal256(p, s = 0)    VarChar(p + 1)
Decimal256(p, s != 0)   VarChar(p + 2)
Decimal256(p, s < 0)    VarChar(p - s + 1)
Int8                    TinyInt
Int16                   SmallInt
Int32                   Integer
Int64                   BigInt
Float16                 Real
Float32                 Real
Float64                 Double
Timestamp s             Timestamp(7)
Timestamp ms            Timestamp(7)
Timestamp us            Timestamp(7)
Timestamp ns            Timestamp(7)
Date32                  Date
Date64                  Date
Time32 s                Time
Time32 ms               VarChar(12)
Time64 us               VarChar(15)
Time64 ns               VarChar(16)
Binary                  Varbinary
FixedBinary(l)          Varbinary(l)
All others              Unsupported
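The VarChar widths for decimals follow from transporting the value as text: p digits plus one character for a sign, plus a decimal point when the scale is positive, plus -s trailing zeros when the scale is negative. A sketch of that arithmetic (hypothetical helper, not the library's code):

```python
def varchar_width(p: int, s: int) -> int:
    # Width of the VarChar used to transport a Decimal(p, s) as text,
    # per the mapping table above.
    if s < 0:
        # p digits, -s padding zeros, and a sign character.
        return p - s + 1
    if s == 0:
        # p digits and a sign character.
        return p + 1
    # p digits, a sign character, and a decimal point.
    return p + 2
```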

Comparison to other Python ODBC bindings

  • pyodbc - General-purpose ODBC Python bindings. In contrast, arrow-odbc is specifically concerned with bulk reads and writes to Arrow arrays.
  • turbodbc - Complies with the Python Database API Specification 2.0 (PEP 249), which arrow-odbc does not aim to do. Like arrow-odbc, bulk reads and writes are turbodbc's strong point. turbodbc has more system dependencies, which can make it cumbersome to install if not using conda. turbodbc is built against the C++ implementation of Arrow, which implies it is only compatible with a matching version of pyarrow.

Project details



Download files

Download the file for your platform.

Source Distribution

arrow_odbc-8.1.1.tar.gz (58.3 kB)

Uploaded Source

Built Distributions

arrow_odbc-8.1.1-py3-none-win_amd64.whl (434.4 kB)

Uploaded Python 3 Windows x86-64

arrow_odbc-8.1.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (636.1 kB)

Uploaded Python 3 manylinux: glibc 2.17+ x86-64

arrow_odbc-8.1.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (623.1 kB)

Uploaded Python 3 manylinux: glibc 2.17+ ARM64

arrow_odbc-8.1.1-py3-none-macosx_11_0_arm64.whl (544.7 kB)

Uploaded Python 3 macOS 11.0+ ARM64

arrow_odbc-8.1.1-py3-none-macosx_10_12_x86_64.whl (576.4 kB)

Uploaded Python 3 macOS 10.12+ x86-64

File details

Details for the file arrow_odbc-8.1.1.tar.gz.

File metadata

  • Download URL: arrow_odbc-8.1.1.tar.gz
  • Upload date:
  • Size: 58.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for arrow_odbc-8.1.1.tar.gz
Algorithm Hash digest
SHA256 03880cbef1492af9ee01f3f2f4fb4702ce7365341b8ed7400546bd50467c296b
MD5 ffc27804de1d978891a008a55289cd74
BLAKE2b-256 4330b0284714570a4474075b4fc6c63ce6759ee86aed47f8dd9ad38d21f19558

See more details on using hashes here.
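To check a download against the digests listed on this page, the SHA256 of the file can be computed locally with the standard library (a sketch; the file name in the comment is just an example):

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file in chunks so large wheels do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the published digest, e.g.:
# sha256_of("arrow_odbc-8.1.1.tar.gz") should match the SHA256 value above.
```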

File details

Details for the file arrow_odbc-8.1.1-py3-none-win_amd64.whl.

File metadata

  • Download URL: arrow_odbc-8.1.1-py3-none-win_amd64.whl
  • Upload date:
  • Size: 434.4 kB
  • Tags: Python 3, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.1

File hashes

Hashes for arrow_odbc-8.1.1-py3-none-win_amd64.whl
Algorithm Hash digest
SHA256 fc8fdec61a25ea1fc4e7771b80259577ab6b9e7c1f393673a6d1719285957ab7
MD5 995a52b92bfd1816f76175c484a82d39
BLAKE2b-256 0433f46e497e3cc73733b93667d3c184f63471ef044774bfae594f34f5a7e216


File details

Details for the file arrow_odbc-8.1.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for arrow_odbc-8.1.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 81edf003e10eb543c5e0a7b8e2fd5029e6e4d5f872d764184ea1db41a645a150
MD5 d94c3ae52d20dacc1efc975f09af53e0
BLAKE2b-256 df8985696d82950128ee4d57f652b556f3640440f0021fc8ae310de7add95f0d


File details

Details for the file arrow_odbc-8.1.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.

File metadata

File hashes

Hashes for arrow_odbc-8.1.1-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
Algorithm Hash digest
SHA256 238f4178e8cbcfea3ebc4a8f5532344a60cb36007a4ad59e5990bd0ee9d93ea6
MD5 6bb7e7beaf0ac070ae182fb198d5cb6c
BLAKE2b-256 b2b1de89e0dfd55eafd0fb3a5b3ba7a908ba079176ad1fef3fcf9f1e7a3830ed


File details

Details for the file arrow_odbc-8.1.1-py3-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for arrow_odbc-8.1.1-py3-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 81aa52995ccbdb362581bf03e4c6a57bf062c2eb74be2590e19bf3d59fcf0576
MD5 9c3b45c6fa2841d72c3c251f50419c7c
BLAKE2b-256 54e9209f0aa01a194f1d4355889c0f0dc7f2b01368720b7e9913c3c727b885d0


File details

Details for the file arrow_odbc-8.1.1-py3-none-macosx_10_12_x86_64.whl.

File metadata

File hashes

Hashes for arrow_odbc-8.1.1-py3-none-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 7834aae777b8fbadf209c7f6c7a316fe80a385646d2b210d92c4b2d4a0c6dbc3
MD5 c8a11f7aae747f72734588c542135324
BLAKE2b-256 2f8a55586eea4d16f2c81cda5855b3c607a4e3c7b94d5492d447534e5262dc7a

