AWS Data Wrangler
Pandas on AWS
Easy integration with Athena, Glue, Redshift, Timestream, QuickSight, Chime, CloudWatch Logs, DynamoDB, EMR, Secrets Manager, PostgreSQL, MySQL, SQL Server and S3 (Parquet, CSV, JSON and Excel).
An AWS Professional Service open source initiative | aws-proserve-opensource@amazon.com
Source | Installation Command
---|---
PyPI | `pip install awswrangler`
Conda | `conda install -c conda-forge awswrangler`
⚠️ For platforms without PyArrow 3 support (e.g. EMR, Glue PySpark Job, MWAA):
➡️ `pip install pyarrow==2 awswrangler`
Table of contents
- Quick Start
- Read The Docs
- Getting Help
- Community Resources
- Logging
- Who uses AWS Data Wrangler?
- What is Amazon SageMaker Data Wrangler?
Quick Start
Installation command: `pip install awswrangler`

⚠️ For platforms without PyArrow 3 support (e.g. EMR, Glue PySpark Job, MWAA):
➡️ `pip install pyarrow==2 awswrangler`
```python
import awswrangler as wr
import pandas as pd
from datetime import datetime

df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# Storing data on Data Lake
wr.s3.to_parquet(
    df=df,
    path="s3://bucket/dataset/",
    dataset=True,
    database="my_db",
    table="my_table"
)

# Retrieving the data directly from Amazon S3
df = wr.s3.read_parquet("s3://bucket/dataset/", dataset=True)

# Retrieving the data from Amazon Athena
df = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_db")

# Getting a Redshift connection from the Glue Catalog and retrieving data from Redshift Spectrum
con = wr.redshift.connect("my-glue-connection")
df = wr.redshift.read_sql_query("SELECT * FROM external_schema.my_table", con=con)
con.close()

# Amazon Timestream Write
df = pd.DataFrame({
    "time": [datetime.now(), datetime.now()],
    "my_dimension": ["foo", "boo"],
    "measure": [1.0, 1.1],
})
rejected_records = wr.timestream.write(
    df,
    database="sampleDB",
    table="sampleTable",
    time_col="time",
    measure_col="measure",
    dimensions_cols=["my_dimension"],
)

# Amazon Timestream Query
wr.timestream.query("""
SELECT time, measure_value::double, my_dimension
FROM "sampleDB"."sampleTable" ORDER BY time DESC LIMIT 3
""")
```
Read The Docs
- What is AWS Data Wrangler?
- Install
- Tutorials
- 001 - Introduction
- 002 - Sessions
- 003 - Amazon S3
- 004 - Parquet Datasets
- 005 - Glue Catalog
- 006 - Amazon Athena
- 007 - Databases (Redshift, MySQL, PostgreSQL and SQL Server)
- 008 - Redshift - Copy & Unload
- 009 - Redshift - Append, Overwrite and Upsert
- 010 - Parquet Crawler
- 011 - CSV Datasets
- 012 - CSV Crawler
- 013 - Merging Datasets on S3
- 014 - Schema Evolution
- 015 - EMR
- 016 - EMR & Docker
- 017 - Partition Projection
- 018 - QuickSight
- 019 - Athena Cache
- 020 - Spark Table Interoperability
- 021 - Global Configurations
- 022 - Writing Partitions Concurrently
- 023 - Flexible Partitions Filter
- 024 - Athena Query Metadata
- 025 - Redshift - Loading Parquet files with Spectrum
- 026 - Amazon Timestream
- 027 - Amazon Timestream 2
- 028 - Amazon DynamoDB
- API Reference
- License
- Contributing
- Legacy Docs (pre-1.0.0)
Getting Help
The best way to interact with our team is through GitHub. You can open an issue and choose from one of our templates for bug reports, feature requests, and more. You may also find help on these community resources:
- The #aws-data-wrangler Slack channel
- Ask a question on Stack Overflow and tag it with `awswrangler`
Community Resources
Please send a Pull Request with your resource reference and @githubhandle.
- Optimize Python ETL by extending Pandas with AWS Data Wrangler [@igorborgest]
- Reading Parquet Files With AWS Lambda [@anand086]
- Transform AWS CloudTrail data using AWS Data Wrangler [@anand086]
- Rename Glue Tables using AWS Data Wrangler [@anand086]
- Getting started on AWS Data Wrangler and Athena [@dheerajsharma21]
- Simplifying Pandas integration with AWS data related services [@bvsubhash]
- Build an ETL pipeline using AWS S3, Glue and Athena [@taupirho]
Logging
Examples of enabling internal logging:

```python
import logging

# Show awswrangler's internal DEBUG messages and silence the noisy botocore credential logs.
logging.basicConfig(level=logging.INFO, format="[%(name)s][%(funcName)s] %(message)s")
logging.getLogger("awswrangler").setLevel(logging.DEBUG)
logging.getLogger("botocore.credentials").setLevel(logging.CRITICAL)
```
Inside AWS Lambda (the runtime already configures the root logger, so only the level needs to be set):

```python
import logging

logging.getLogger("awswrangler").setLevel(logging.DEBUG)
```
Who uses AWS Data Wrangler?
Knowing which companies are using this library is important to help prioritize the project internally.
Please send a Pull Request with your company name and @githubhandle if you wish.
- Amazon
- AWS
- Cepsa [@alvaropc]
- Cognitivo [@msantino]
- Digio [@afonsomy]
- DNX [@DNXLabs]
- Funcional Health Tech [@webysther]
- Informa Markets [@mateusmorato]
- LINE TV [@bryanyang0528]
- Magnataur [@brianmingus2]
- M4U [@Thiago-Dantas]
- NBCUniversal [@vibe]
- nrd.io [@mrtns]
- OKRA Technologies [@JPFrancoia, @schot]
- Pier [@flaviomax]
- Pismo [@msantino]
- ringDNA [@msropp]
- Serasa Experian [@andre-marcos-perez]
- Shipwell [@zacharycarter]
- strongDM [@mrtns]
- Thinkbumblebee [@dheerajsharma21]
- Zillow [@nicholas-miles]
What is Amazon SageMaker Data Wrangler?
Amazon SageMaker Data Wrangler is a new SageMaker Studio feature that has a similar name but a different purpose than the AWS Data Wrangler open source project.

- AWS Data Wrangler is open source, runs anywhere, and is focused on code.
- Amazon SageMaker Data Wrangler is specific to the SageMaker Studio environment and is focused on a visual interface.
Hashes for awswrangler-2.10.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | d88295ec6a4c6d6e9b5519d65a9ccdd883aa590482dc76a7912eb62dcde7cad5
MD5 | f8f9e71f868eb470eec22d1aa1e574a2
BLAKE2b-256 | ce3ff1647ec73b94a1ea9efeff6cec50e590cf0a33e0978fb5a606816029360f