
High-performance bulk data movement for Redshift


Arrowjet

The fastest way to move data in and out of Redshift.

Arrowjet wraps Redshift's native COPY and UNLOAD commands, the engine's own bulk paths, in a simple Python API with automatic S3 staging, cleanup, and error handling.

pip install arrowjet              # core (BYOC Engine, CLI basics)
pip install "arrowjet[redshift]"  # + Redshift driver (arrowjet.connect())
pip install "arrowjet[full]"      # + Redshift + SQLAlchemy

Why Arrowjet

Standard Redshift drivers fetch data row-by-row over the wire. For large datasets, this is the bottleneck.

Arrowjet routes through S3 instead — parallel, columnar, and fast.

Approach             1M rows   10M rows
cursor.fetchall()    ~11s      ~105s
Manual UNLOAD        ~7s       ~58s
Arrowjet             ~4s       ~34s

Benchmarked on a 4-node ra3.large cluster from an EC2 client in the same region.
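
Under the hood, both bulk paths are plain Redshift commands staged through S3. Roughly the following, though the exact SQL, S3 paths, and options Arrowjet generates may differ:

-- Read path (read_bulk), sketched:
UNLOAD ('SELECT * FROM events')
TO 's3://your-staging-bucket/arrowjet-staging/<job-id>/'
IAM_ROLE 'arn:aws:iam::123456789:role/RedshiftS3Role'
FORMAT AS PARQUET;

-- Write path (write_dataframe), sketched:
COPY target_table
FROM 's3://your-staging-bucket/arrowjet-staging/<job-id>/'
IAM_ROLE 'arn:aws:iam::123456789:role/RedshiftS3Role'
FORMAT AS PARQUET;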


Quick Start

import arrowjet

conn = arrowjet.connect(
    host="your-cluster.region.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="...",
    staging_bucket="your-staging-bucket",
    staging_iam_role="arn:aws:iam::123456789:role/RedshiftS3Role",
    staging_region="us-east-1",
)

# Bulk read — UNLOAD → S3 → Parquet → Arrow
result = conn.read_bulk("SELECT * FROM events WHERE date > '2025-01-01'")
df = result.to_pandas()
print(f"{result.rows:,} rows in {result.total_time_s}s")

# Bulk write — Arrow → Parquet → S3 → COPY
conn.write_dataframe(my_dataframe, "target_table")

# Safe mode — standard DBAPI for small queries, transactions, metadata
df = conn.fetch_dataframe("SELECT COUNT(*) FROM events")
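
Putting the pieces together, a bulk read-transform-write round trip uses only the calls above (the target table name is illustrative):

# Round trip: bulk out, transform in pandas, bulk back in
df = conn.read_bulk("SELECT * FROM events").to_pandas()
# ... any pandas transformation here ...
conn.write_dataframe(df, "events_transformed")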

Bring Your Own Connection

Already have connection management? Use the Engine API — no rewiring needed.

import arrowjet
import redshift_connector  # or psycopg2, or any DBAPI connection

# Your existing connection
conn = redshift_connector.connect(host=..., database=..., ...)

# Arrowjet just does the bulk part
engine = arrowjet.Engine(
    staging_bucket="your-bucket",
    staging_iam_role="arn:aws:iam::123:role/RedshiftS3Role",
    staging_region="us-east-1",
)

result = engine.read_bulk(conn, "SELECT * FROM events")
engine.write_dataframe(conn, df, "target_table")

Works with redshift_connector, psycopg2, ADBC, or any DBAPI-compatible connection.
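
For example, the same Engine with psycopg2 instead (a sketch; host and credentials are placeholders, and psycopg2 connects over the standard PostgreSQL wire protocol that Redshift accepts):

import arrowjet
import psycopg2

# Only the driver changes; the Engine and staging config stay the same.
pg_conn = psycopg2.connect(
    host="your-cluster.region.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="...",
)

engine = arrowjet.Engine(
    staging_bucket="your-bucket",
    staging_iam_role="arn:aws:iam::123:role/RedshiftS3Role",
    staging_region="us-east-1",
)
result = engine.read_bulk(pg_conn, "SELECT * FROM events")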


Integrations

Airflow

@task
def export_sample():
    import arrowjet
    conn = arrowjet.connect(host=..., staging_bucket=..., ...)
    result = conn.read_bulk("SELECT * FROM benchmark_test_1m LIMIT 1000")
    print(f"Exported {result.rows:,} rows in {result.total_time_s}s")
    conn.close()
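
For context, the task above slots into an ordinary TaskFlow DAG; a minimal sketch (the DAG id, schedule, and start date are illustrative):

from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule=None, start_date=datetime(2025, 1, 1), catchup=False)
def arrowjet_export():
    @task
    def export_sample():
        import arrowjet
        conn = arrowjet.connect(host=..., staging_bucket=..., ...)
        result = conn.read_bulk("SELECT * FROM benchmark_test_1m LIMIT 1000")
        print(f"Exported {result.rows:,} rows in {result.total_time_s}s")
        conn.close()

    export_sample()

arrowjet_export()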

See examples/airflow/ for before/after comparison and benchmarks.

dbt

# Run dbt transforms, then bulk-export the results
bash examples/dbt/run_with_arrowjet.sh

See examples/dbt/ for the full setup.

SQLAlchemy

from sqlalchemy import create_engine
engine = create_engine("redshift+arrowjet://user:pass@host:5439/dev")
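
The resulting engine plugs into anything that accepts a SQLAlchemy engine, for example pandas (standard pandas usage; whether a given query takes the bulk path is up to the dialect):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("redshift+arrowjet://user:pass@host:5439/dev")
df = pd.read_sql("SELECT event_type, COUNT(*) AS n FROM events GROUP BY 1", engine)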

When to Use Each Mode

Mode                           Use when                      How
read_bulk                      Large SELECT (100K+ rows)     UNLOAD → S3 → Parquet → Arrow
write_bulk / write_dataframe   Loading data (any size)       Arrow → Parquet → S3 → COPY
fetch_dataframe                Small queries, transactions   PostgreSQL wire protocol
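
In code, the choice is a one-line branch. A sketch using only the calls shown above (the helper and its flag are hypothetical; tune the routing for your workload):

# Hypothetical helper: route by expected result size.
def fetch(conn, sql, expect_large=False):
    if expect_large:
        # UNLOAD → S3 → Parquet → Arrow, then to pandas
        return conn.read_bulk(sql).to_pandas()
    # Plain wire-protocol fetch for small results
    return conn.fetch_dataframe(sql)

df_small = fetch(conn, "SELECT COUNT(*) FROM events")
df_large = fetch(conn, "SELECT * FROM events", expect_large=True)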

Configuration

conn = arrowjet.connect(
    # Redshift connection
    host="...", database="dev", user="awsuser", password="...",

    # S3 staging (required for bulk mode)
    staging_bucket="my-bucket",
    staging_iam_role="arn:aws:iam::123:role/RedshiftS3",
    staging_region="us-east-1",

    # Optional
    staging_prefix="arrowjet-staging",   # S3 key prefix
    staging_cleanup="on_success",        # always | on_success | never | ttl_managed
    staging_encryption="none",           # none | sse_s3 | sse_kms
    max_concurrent_bulk_ops=4,
)

See docs/configuration.md for the full reference.


CLI

arrowjet configure                                          # set up connection profile
arrowjet export --query "SELECT * FROM sales" --to ./out.parquet
arrowjet preview --file ./out.parquet
arrowjet validate --table sales --row-count

See docs/cli_reference.md for full details.
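
The export output is ordinary Parquet, so any Arrow-aware tool can open it, e.g. pyarrow (standard usage, nothing Arrowjet-specific):

import pyarrow.parquet as pq

table = pq.read_table("./out.parquet")  # file produced by `arrowjet export` above
print(table.num_rows, table.schema)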


Requirements

  • Python 3.10+
  • Redshift cluster (provisioned or serverless)
  • S3 bucket in the same region (for bulk mode)
  • IAM role with S3 access attached to the Redshift cluster
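
A minimal sketch of the S3 permissions such a role needs (illustrative; the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
    "Resource": [
      "arn:aws:s3:::your-staging-bucket",
      "arn:aws:s3:::your-staging-bucket/*"
    ]
  }]
}

The role also needs a trust policy that allows redshift.amazonaws.com to assume it.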

See docs/iam_setup.md for IAM configuration.


License

Apache 2.0

