
Project description

AWS Data Wrangler (beta)

Utility belt to handle data on AWS.


Contents: Use Cases | Installation | Examples | Diving Deep


Use Cases

  • Pandas -> Parquet (S3)
  • Pandas -> CSV (S3)
  • Pandas -> Glue Catalog
  • Pandas -> Athena
  • Pandas -> Redshift
  • CSV (S3) -> Pandas (One shot or Batching)
  • Athena -> Pandas (One shot or Batching)
  • PySpark -> Redshift
  • Delete S3 objects (parallel :rocket:)
  • Encrypt S3 data with KMS keys

Installation

pip install awswrangler

Requires Python 3.6 or later.

Runs anywhere (AWS Lambda, AWS Glue, EMR, EC2, on-premises, local, etc.).

P.S. A Lambda Layer bundle and a Glue egg are available to download. Just upload one to your account and run! :rocket:

Examples

Writing Pandas Dataframe to S3 + Glue Catalog

import awswrangler

session = awswrangler.Session()
session.pandas.to_parquet(
    dataframe=dataframe,
    database="database",
    path="s3://...",
    partition_cols=["col_name"],
)

If a Glue database name is passed, all the metadata will be created in the Glue Catalog; if not, only the S3 data write is performed.
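
For example, a minimal sketch of an S3-only write (no database argument), reusing the dataframe from the previous example:

session = awswrangler.Session()
session.pandas.to_parquet(
    dataframe=dataframe,
    path="s3://...",  # No database passed, so nothing is registered in the Glue Catalog
)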

Writing Pandas Dataframe to S3 as Parquet encrypting with a KMS key

extra_args = {
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "YOUR_KMY_KEY_ARN"
}
session = awswrangler.Session(s3_additional_kwargs=extra_args)
session.pandas.to_parquet(
    dataframe=dataframe,
    path="s3://..."
)
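
Because the extra arguments are attached to the Session rather than to a single call, the same encryption settings should also apply to other S3 writes made through that session (e.g. to_csv).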

Reading from AWS Athena to Pandas

session = awswrangler.Session()
dataframe = session.pandas.read_sql_athena(
    sql="select * from table",
    database="database"
)

Reading from AWS Athena to Pandas in chunks (for memory-constrained environments)

session = awswrangler.Session()
dataframe_iter = session.pandas.read_sql_athena(
    sql="select * from table",
    database="database",
    max_result_size=512_000_000  # 512 MB
)
for dataframe in dataframe_iter:
    print(dataframe)  # Do whatever you want
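
Each chunk is a regular Pandas DataFrame, so you can aggregate incrementally instead of printing. A minimal sketch that counts rows across chunks, using the same placeholder query and database:

session = awswrangler.Session()
dataframe_iter = session.pandas.read_sql_athena(
    sql="select * from table",
    database="database",
    max_result_size=512_000_000  # 512 MB
)
total_rows = 0
for dataframe in dataframe_iter:
    total_rows += len(dataframe)  # Only one chunk is held in memory at a time
print(total_rows)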

Reading from S3 (CSV) to Pandas

session = awswrangler.Session()
dataframe = session.pandas.read_csv(path="s3://...")

Reading from S3 (CSV) to Pandas in chunks (for memory-constrained environments)

session = awswrangler.Session()
dataframe_iter = session.pandas.read_csv(
    path="s3://...",
    max_result_size=512_000_000  # 512 MB
)
for dataframe in dataframe_iter:
    print(dataframe)  # Do whatever you want

Typical Pandas ETL

import pandas
import awswrangler

df = pandas.read_...  # Read from anywhere

# Typical Pandas, Numpy or Pyarrow transformation HERE!

session = awswrangler.Session()
session.pandas.to_parquet(  # Storing the data and metadata to Data Lake
    dataframe=df,
    database="database",
    path="s3://...",
    partition_cols=["col_name"],
)
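
As a concrete (hypothetical) instance of the skeleton above, assuming a local CSV with a numeric "price" column:

import pandas
import awswrangler

df = pandas.read_csv("input.csv")  # Hypothetical local source file
df = df[df["price"] > 0]           # Example transformation: drop non-positive prices

session = awswrangler.Session()
session.pandas.to_parquet(
    dataframe=df,
    database="database",
    path="s3://...",
    partition_cols=["col_name"],
)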

Loading a PySpark DataFrame to Redshift
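
Here spark is an existing SparkSession, df a Spark DataFrame, and conn a previously created Redshift connection; all three are assumed to exist before this snippet runs.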

session = awswrangler.Session(spark_session=spark)
session.spark.to_redshift(
    dataframe=df,
    path="s3://...",
    connection=conn,
    schema="public",
    table="table",
    iam_role="IAM_ROLE_ARN",
    mode="append",
)

Deleting a bunch of S3 objects

session = awswrangler.Session()
session.s3.delete_objects(path="s3://...")

Diving Deep

Pandas to Redshift Flow

[Diagram: Pandas to Redshift flow]

Spark to Redshift Flow

[Diagram: Spark to Redshift flow]


Download files

Download the file for your platform.

Source Distribution

awswrangler-0.0b32.tar.gz (24.4 kB)

Uploaded Source

Built Distribution


awswrangler-0.0b32-py36.py37-none-any.whl (28.2 kB)

Uploaded Python 3.6, Python 3.7

File details

Details for the file awswrangler-0.0b32.tar.gz.

File metadata

  • Download URL: awswrangler-0.0b32.tar.gz
  • Upload date:
  • Size: 24.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.34.0 CPython/3.6.8

File hashes

Hashes for awswrangler-0.0b32.tar.gz
Algorithm Hash digest
SHA256 a53b5424150671f0b78301d73e94f5a9c0d82cc6d89ddb63313db36b50091439
MD5 09667a70c7aafc735888e41b96f66431
BLAKE2b-256 2c06ed5bde598f853c704607c7e76fdd70aba450b1143d46e3e1ca855ea19f61


File details

Details for the file awswrangler-0.0b32-py36.py37-none-any.whl.

File metadata

  • Download URL: awswrangler-0.0b32-py36.py37-none-any.whl
  • Upload date:
  • Size: 28.2 kB
  • Tags: Python 3.6, Python 3.7
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.34.0 CPython/3.6.8

File hashes

Hashes for awswrangler-0.0b32-py36.py37-none-any.whl
Algorithm Hash digest
SHA256 c98fc6c6c8b78dc546888dfc42d8dc1bf7eca79609898c03c2be528f1f11d489
MD5 efce31b193ea61bb4d2a8797326d2a29
BLAKE2b-256 4c4d6c5055c9b7266f1abc0c776b90d08a296bd8ec8227939364e4524432bf86

