
AWS Data Wrangler (BETA)


The missing link between AWS services and the most popular Python data libraries.

CAUTION: This project is in BETA and has not been battle-tested yet.

Read the docs!

Check out how AWS Data Wrangler can process small data more than 10x cheaper and 10x faster than Spark!

AWS Data Wrangler aims to fill a gap between AWS Analytics Services (Glue, Athena, EMR, Redshift) and the most popular Python libraries for lightweight workloads.

The rationale behind AWS Data Wrangler is to use the right tool for each job, and this project was developed with lightweight jobs in mind. The boundary is never clear-cut and depends on many factors, but a good rule of thumb we discovered during testing is: if your workload is around 5 GB of plain text or less, you should go with AWS Data Wrangler instead of the big data tools.

Usually there are two types of use cases when dealing with data: heavy workloads, which are better handled by distributed tools and services like EMR/Spark, and lightweight workloads, which can be handled most efficiently with simpler tools. The latter is where AWS Data Wrangler comes into action.

For example, in AWS Glue you can choose between two types of job: distributed with Apache Spark, or single node with Python Shell. For these lightweight cases AWS Data Wrangler would use the single node Python Shell job option (or even AWS Lambda), resulting in lower cost and more speed.

[Rationale diagram]


Contents: Installation | Usage | Dependencies | Known Limitations | Contributing | License


Installation

pip install awswrangler

AWS Data Wrangler requires Python 3.6 or later and runs on AWS Lambda, AWS Glue, EC2, on-premises, and locally.

P.S. The Lambda Layer bundle and the Glue egg are available to download. Just upload them to your account and run! :rocket:
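As an illustration, publishing the downloaded Lambda Layer bundle to your account could look like the following boto3 sketch. The local zip path and the layer name here are hypothetical, not part of the project:

import boto3

# Hypothetical local path to the downloaded Lambda Layer bundle
layer_zip = "awswrangler-layer.zip"

client = boto3.client("lambda")
with open(layer_zip, "rb") as f:
    response = client.publish_layer_version(
        LayerName="awswrangler",  # hypothetical layer name
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.6", "python3.7"],
    )
print(response["LayerVersionArn"])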

Usage

Writing Pandas Dataframe to Data Lake:

import awswrangler

session = awswrangler.Session()
session.pandas.to_parquet(
    dataframe=dataframe,  # an existing Pandas DataFrame
    database="database",
    path="s3://...",
    partition_cols=["col_name"],
)

If a Glue Database name is passed, all the metadata will be created in the Glue Catalog. If not, only the s3 data write will be done.
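For the S3-only case, a minimal sketch of the same call without the database argument (assuming, per the behavior described above, that it is an optional keyword):

import awswrangler

# Write only to S3, skipping the Glue Catalog, by omitting `database`
session = awswrangler.Session()
session.pandas.to_parquet(
    dataframe=dataframe,
    path="s3://...",
    partition_cols=["col_name"],
)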

Reading from Data Lake to Pandas Dataframe:

import awswrangler

session = awswrangler.Session()
dataframe = session.pandas.read_sql_athena(
    sql="select * from table",
    database="database"
)

Reading from S3 file to Pandas Dataframe:

import awswrangler

session = awswrangler.Session()
dataframe = session.pandas.read_csv(path="s3://...")

Typical ETL:

import pandas
import awswrangler

df = pandas.read_...  # Read from anywhere

# Typical Pandas, Numpy or Pyarrow transformation HERE!

session = awswrangler.Session()
session.pandas.to_parquet(  # Storing the data and metadata to Data Lake
    dataframe=df,
    database="database",
    path="s3://...",
    partition_cols=["col_name"],
)

Dependencies

The AWS Data Wrangler project relies on other great initiatives.

Known Limitations

  • For now, writes only the Parquet and CSV file formats
  • No compression support yet
  • No nested types support yet

Contributing

Almost all features rely on AWS services that do not yet have community mock tools (AWS Glue, AWS Athena), so we are focusing on integration tests instead of unit tests (see the sketch after the commands below).

You will need to provide an S3 bucket and a Glue/Athena database through environment variables:

export AWSWRANGLER_TEST_BUCKET=...

export AWSWRANGLER_TEST_DATABASE=...

CAUTION: This may incur costs in your AWS account.

make init

Make your changes...

make format

make lint

make test
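For illustration only, an integration test in this style might look like the sketch below. The table/path naming and the round trip through Athena are assumptions for the example, not the project's actual test suite:

import os

import pandas
import awswrangler

def test_parquet_roundtrip():
    # Runs against real AWS resources, so it incurs costs (see the caution above)
    bucket = os.environ["AWSWRANGLER_TEST_BUCKET"]
    database = os.environ["AWSWRANGLER_TEST_DATABASE"]
    df = pandas.DataFrame({"col_name": ["a", "b"], "value": [1, 2]})
    session = awswrangler.Session()
    session.pandas.to_parquet(
        dataframe=df,
        database=database,
        path="s3://{}/roundtrip_table/".format(bucket),
        partition_cols=["col_name"],
    )
    # Assumption: the Glue table name is inferred from the last path segment
    df2 = session.pandas.read_sql_athena(
        sql="select * from roundtrip_table",
        database=database,
    )
    assert len(df2.index) == len(df.index)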

License

This library is licensed under the Apache 2.0 License.
