Spectrify
A simple yet powerful tool to move your data from Redshift to Redshift Spectrum.
Free software: MIT license
Documentation: https://spectrify.readthedocs.io.
Features
One-liners to:
Export a Redshift table to S3 (CSV)
Convert exported CSVs to Parquet files in parallel
Create the Spectrum table on your Redshift cluster
Perform all 3 steps in sequence, essentially “copying” a Redshift table to Spectrum in one command.
S3 credentials are specified via boto3; see http://boto3.readthedocs.io/en/latest/guide/configuration.html for the supported configuration methods.
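For example, boto3 reads the standard AWS environment variables (one of several methods described in that guide; the values below are placeholders):

$ export AWS_ACCESS_KEY_ID=<your-access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
$ export AWS_DEFAULT_REGION=us-east-1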
Redshift credentials are supplied via environment variables, command-line parameters, or interactive prompt.
Install
$ pip install spectrify
Command-line Usage
Export Redshift table my_table to a folder of CSV files on S3:
$ spectrify --host=example-url.redshift.aws.com --user=myuser --db=mydb export my_table \
's3://example-bucket/my_table'
Convert exported CSVs to Parquet:
$ spectrify --host=example-url.redshift.aws.com --user=myuser --db=mydb convert my_table \
's3://example-bucket/my_table'
Create Spectrum table from S3 folder:
$ spectrify --host=example-url.redshift.aws.com --user=myuser --db=mydb create_table \
's3://example-bucket/my_table' my_table my_spectrum_table
Transform Redshift table by performing all 3 steps in sequence:
$ spectrify --host=example-url.redshift.aws.com --user=myuser --db=mydb transform my_table \
's3://example-bucket/my_table'
Python Usage
Currently, you’ll have to supply your own SQLAlchemy engine to each of the commands below (pull requests welcome to make this easier).
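A minimal sketch of building such an engine, assuming the psycopg2 driver is installed (the hostname, credentials, and database name are placeholders; 5439 is Redshift’s default port):

from sqlalchemy import create_engine

# Redshift speaks the Postgres wire protocol, so SQLAlchemy's postgresql
# dialect works. Swap in your own cluster details below.
sa_engine = create_engine(
    'postgresql+psycopg2://myuser:mypassword@example-url.redshift.aws.com:5439/mydb'
)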
Export to S3:
from spectrify.export import export_to_csv
export_to_csv(sa_engine, table_name, s3_csv_dir)
Convert exported CSVs to Parquet:
from spectrify.convert import convert_redshift_manifest_to_parquet
from spectrify.utils.schema import get_table_schema
sa_table = get_table_schema(sa_engine, source_table_name)
convert_redshift_manifest_to_parquet(s3_csv_manifest_path, sa_table, s3_spectrum_dir)
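Here s3_csv_manifest_path is presumably the manifest file that Redshift writes alongside the exported CSVs (via the UNLOAD command’s MANIFEST option); the conversion reads the CSV file locations from it.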
Create Spectrum table from S3 Parquet folder:
from spectrify.create import create_external_table
from spectrify.utils.schema import get_table_schema
sa_table = get_table_schema(sa_engine, source_table_name)
create_external_table(sa_engine, dest_schema, dest_table_name, sa_table, s3_spectrum_path)
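Note that Spectrum external tables live in an external schema, so dest_schema presumably names one that already exists on the cluster (created with CREATE EXTERNAL SCHEMA).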
Transform Redshift table by performing all 3 steps in sequence:
from spectrify.transform import transform_table
transform_table(sa_engine, table_name, s3_base_path, dest_schema, dest_table, num_workers)
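For concreteness, a hypothetical end-to-end invocation (engine built as sketched above; the S3 path, schema/table names, and worker count are illustrative, and num_workers presumably sizes the CSV-to-Parquet worker pool):

from spectrify.transform import transform_table

# Export my_table to S3 as CSV, convert to Parquet using 4 worker
# processes, and register the result as spectrum.my_spectrum_table.
transform_table(sa_engine, 'my_table', 's3://example-bucket/my_table',
                'spectrum', 'my_spectrum_table', 4)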
Contribute
Contributions are always welcome! Read our guide on contributing here: http://spectrify.readthedocs.io/en/latest/contributing.html
License
MIT License. Copyright (c) 2017, The Narrativ Company, Inc.
History
0.4.0 (2018-02-25)
Upgrade to pyarrow v0.8.0
Verify Redshift column types are supported before attempting conversion
Bugfix: Properly clean up multiprocessing.pool resource
0.3.0 (2017-10-30)
Support 16- and 32-bit integers
Packaging updates
0.2.1 (2017-09-27)
Fix Readme
0.2.0 (2017-09-27)
First release on PyPI.
0.1.0 (2017-09-13)
Didn’t even make it to PyPI.
Hashes for spectrify-0.4.0-py2.py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | c3f9c8561504c7556c8260fe31c9d00b63a4c6395fb77c17c5c4b8dbe1c3897c
MD5 | 6619e4fb93ea00f2c2cc7ee5a628173f
BLAKE2b-256 | 464140ff28c3677038cf74c1f1fff979f9c32a623ae9c350c1e7c4e7216a7c77