AWS Data Wrangler
Pandas on AWS
An AWS Professional Service open source initiative | aws-proserve-opensource@amazon.com
Source | Page | Installation Command
---|---|---
PyPi | Link | pip install awswrangler
Conda | Link | conda install -c conda-forge awswrangler
Table of contents
Quick Start
Installation command: pip install awswrangler
```python
import awswrangler as wr
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# Storing data on Data Lake
wr.s3.to_parquet(
    df=df,
    path="s3://bucket/dataset/",
    dataset=True,
    database="my_db",
    table="my_table"
)

# Retrieving the data directly from Amazon S3
df = wr.s3.read_parquet("s3://bucket/dataset/", dataset=True)

# Retrieving the data from Amazon Athena
df = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_db")

# Get Redshift connection (SQLAlchemy) from Glue Catalog and retrieve data from Redshift Spectrum
engine = wr.catalog.get_engine("my-redshift-connection")
df = wr.db.read_sql_query("SELECT * FROM external_schema.my_table", con=engine)

# Get MySQL connection (SQLAlchemy) from Glue Catalog and load the data into MySQL
engine = wr.catalog.get_engine("my-mysql-connection")
wr.db.to_sql(df, engine, schema="test", name="my_table")

# Get PostgreSQL connection (SQLAlchemy) from Glue Catalog and load the data into PostgreSQL
engine = wr.catalog.get_engine("my-postgresql-connection")
wr.db.to_sql(df, engine, schema="test", name="my_table")
```
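Since the library infers Glue/Athena column types from the pandas dtypes of the DataFrame, it can help to inspect those dtypes locally before writing. A minimal sketch using only pandas (no AWS calls; the type mapping noted in the comments is illustrative of documented behavior, not executed against AWS):

```python
import pandas as pd

# The same toy DataFrame used in the Quick Start above.
df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})

# These pandas dtypes drive the Glue/Athena schema that gets registered
# (e.g. int64 typically maps to bigint, object/string to string).
print(df.dtypes)
# Sanity-check the content before it lands in the data lake.
print(df.to_dict(orient="list"))
```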
Read The Docs
- What is AWS Data Wrangler?
- Install
- Tutorials
- 001 - Introduction
- 002 - Sessions
- 003 - Amazon S3
- 004 - Parquet Datasets
- 005 - Glue Catalog
- 006 - Amazon Athena
- 007 - Databases (Redshift, MySQL and PostgreSQL)
- 008 - Redshift - Copy & Unload
- 009 - Redshift - Append, Overwrite and Upsert
- 010 - Parquet Crawler
- 011 - CSV Datasets
- 012 - CSV Crawler
- 013 - Merging Datasets on S3
- 014 - Schema Evolution
- 015 - EMR
- 016 - EMR & Docker
- 017 - Partition Projection
- 018 - QuickSight
- 019 - Athena Cache
- 020 - Spark Table Interoperability
- 021 - Global Configurations
- 022 - Writing Partitions Concurrently
- 023 - Flexible Partitions Filter
- 024 - Athena Query Metadata
- API Reference
- License
- Contributing
- Legacy Docs (pre-1.0.0)
Who uses AWS Data Wrangler?
Knowing which companies use this library helps us prioritize the project internally.
If you'd like to be listed, please send a PR adding your company name and @githubhandle.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
- awswrangler-1.8.1.tar.gz (99.7 kB, view hashes)

Built Distributions
- awswrangler-1.8.1-py3.6.egg (266.4 kB, view hashes)
- awswrangler-1.8.1-py3-none-any.whl (125.1 kB, view hashes)
Hashes for awswrangler-1.8.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | dcee84c566bfbd9d3ea7dd923eab7c5b2650edcd1a0748ad04ebf57543131646
MD5 | d90b2f25a45db2992d23395cf65624bd
BLAKE2b-256 | b71cc76487c0dd3fe6e81aecc7ed8f6c06f9675424f85b3d1cca66a629f4689e
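After downloading a distribution, the published digests above can be verified locally. A minimal sketch using Python's standard `hashlib` (the wheel filename is assumed to sit in the current directory):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the SHA256 published above for this release's wheel:
# sha256_of("awswrangler-1.8.1-py3-none-any.whl") should equal
# "dcee84c566bfbd9d3ea7dd923eab7c5b2650edcd1a0748ad04ebf57543131646"
```

The same pattern works for the MD5 digest by swapping in `hashlib.md5()`.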