
S3ToRedshift

Connector to upload CSV or gzipped CSV files from an S3 bucket into a Redshift table.

Requirements

  • Python 3+ (Tested in 3.7)
  • pandas>=0.25.0
  • GiantPandas>=0.1.7
  • S3Connector>=0.1.8

Install with pip

$ pip install S3ToRedshift

Usage

  1. Import the library.

    from S3ToRedshift import S3ToRedshift
    
  2. Create an instance by supplying AWS access credentials, Redshift credentials, and the region name. Depending on the machine's access rights, some of these parameters may not be needed.

    s3_to_redshift = S3ToRedshift(
        aws_access_key_id="##########",
        aws_secret_access_key="##########",
        region_name="##########",
        host="##########",
        user="##########",
        password="##########",
        dbname="##########",
        port="##########",
        iam_role="##########"
    )
    
  3. The imported module exposes several functions. Refer to each function's built-in help (e.g. `help(s3_to_redshift.upload_csv)`) for more information.

    1. s3_to_redshift.upload_csv(schema_name, table_name, bucket_name, object_name, file_type, if_exists, csv_sep, csv_null_identifier)
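A hypothetical call to `upload_csv` might look like the sketch below. All schema, table, bucket, and object names are invented placeholders, and the argument values (such as `if_exists="replace"` or `file_type="gzip"`) are assumptions, not documented defaults — check the function's help for the accepted values.

```python
# Hypothetical values only -- none of these names come from the library docs.
upload_kwargs = {
    "schema_name": "public",                      # target Redshift schema
    "table_name": "daily_sales",                  # target Redshift table
    "bucket_name": "my-data-bucket",              # source S3 bucket
    "object_name": "exports/daily_sales.csv.gz",  # key of the file in the bucket
    "file_type": "gzip",                          # assumed: "csv" or "gzip"
    "if_exists": "replace",                       # assumed pandas-style behaviour
    "csv_sep": ",",                               # field delimiter in the file
    "csv_null_identifier": "NULL",                # string that marks NULL values
}

# The actual call needs live AWS and Redshift credentials, so it is
# shown commented out here:
# s3_to_redshift.upload_csv(**upload_kwargs)
```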

© Samyak Ratna Tamrakar, 2019.

Download files

Files for S3ToRedshift, version 0.1.14:

  • S3ToRedshift-0.1.14.tar.gz (3.2 kB, source distribution)
