Data science on the cloud
Project description
Red Panda
Easily interact with the cloud (AWS) in your data science workflow.
Features
- Move DataFrames and files to and from S3 and Redshift.
- Run queries on Redshift in Python.
- Use built-in Redshift admin queries, such as checking running queries and errors.
- Use Redshift utility functions to easily accomplish common tasks such as creating a table.
- Manage files on S3.
- Query data on S3 directly with Athena.
- Pandas DataFrame utility functions.
Installation
pip install red-panda
Using red-panda
Import red-panda and create an instance of RedPanda. If you create the instance with dryrun=True (i.e. rp = RedPanda(redshift_conf, aws_conf, dryrun=True)), red-panda will print the planned queries instead of executing them.
from red_panda import RedPanda

redshift_conf = {
    "user": "awesome-developer",
    "password": "strong-password",
    "host": "awesome-domain.us-east-1.redshift.amazonaws.com",
    "port": 5432,
    "dbname": "awesome-db",
}

aws_conf = {
    "aws_access_key_id": "your-aws-access-key-id",
    "aws_secret_access_key": "your-aws-secret-access-key",
    # "aws_session_token": "temporary-token-if-you-have-one",
}

rp = RedPanda(redshift_conf, aws_conf)
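The dryrun behavior described above can be pictured with a minimal sketch of the pattern. This is illustrative only, not red-panda's internals; DryRunExecutor is a hypothetical stand-in:

```python
class DryRunExecutor:
    """Illustrative stand-in for an object that can defer execution."""

    def __init__(self, dryrun=False):
        self.dryrun = dryrun
        self.executed = []  # queries actually run

    def run_query(self, sql):
        if self.dryrun:
            # Dry run: show the planned query instead of executing it.
            print(f"[dryrun] {sql}")
        else:
            self.executed.append(sql)

ex = DryRunExecutor(dryrun=True)
ex.run_query("select * from test_table")  # printed, not executed
```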
Load your Pandas DataFrame into Redshift as a new table.
import pandas as pd
df = pd.DataFrame(data={"col1": [1, 2], "col2": [3, 4]})
s3_bucket = "s3-bucket-name"
s3_path = "parent-folder/child-folder" # optional; omit if you have no subfolders
s3_file_name = "test.csv" # optional, randomly generated if not provided
rp.df_to_redshift(df, "test_table", bucket=s3_bucket, path=s3_path, append=False)
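Since s3_file_name is randomly generated when omitted, the S3 key for the upload can be built along these lines. make_s3_key is a hypothetical helper sketching the idea, not red-panda's actual implementation:

```python
import uuid

def make_s3_key(path=None, file_name=None):
    # Hypothetical helper: if no file name is given, generate a random
    # one, then join it with the optional parent path.
    name = file_name or f"{uuid.uuid4().hex}.csv"
    return f"{path}/{name}" if path else name

key = make_s3_key("parent-folder/child-folder", "test.csv")
```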
It is also possible to:
- Upload a DataFrame or flat file to S3.
- Delete files from S3.
- Load S3 data into Redshift.
- Unload a Redshift query result to S3.
- Obtain a Redshift query result as a DataFrame.
- Run any query on Redshift.
- Download S3 file to local.
- Read S3 file in memory as DataFrame.
- Run built-in Redshift admin queries, such as getting running query information.
- Use utility functions such as create_table to quickly create tables in Redshift.
- Run queries against S3 data directly with Athena using AthenaUtils.
- Use features separately with RedshiftUtils, S3Utils, AthenaUtils.
s3_key = s3_path + "/" + s3_file_name
# DataFrame uploaded to S3
rp.df_to_s3(df, s3_bucket, s3_key)
# Delete a file on S3
rp.delete_from_s3(s3_bucket, s3_key)
# Upload a local file to S3
df.to_csv("test_data.csv", index=False)
rp.file_to_s3("test_data.csv", s3_bucket, s3_key)
# Populate a Redshift table from S3 files
# Use a dictionary for column definition, here we minimally define only data_type
redshift_column_definition = {
    "col1": {"data_type": "int"},
    "col2": {"data_type": "int"},
}
rp.s3_to_redshift(
s3_bucket, s3_key, "test_table", column_definition=redshift_column_definition
)
# Unload Redshift query result to S3
sql = "select * from test_table"
rp.redshift_to_s3(sql, s3_bucket, s3_path+"/unload", prefix="unloadtest_")
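Under the hood, unloading a query result means issuing a Redshift UNLOAD statement. A simplified sketch of how such a statement could be composed follows; the real call also handles credentials, delimiters, and UNLOAD options, and build_unload is a hypothetical helper:

```python
def build_unload(sql, bucket, path, prefix=""):
    # Compose a bare-bones Redshift UNLOAD statement
    # (no credentials clause, no options).
    target = f"s3://{bucket}/{path}/{prefix}"
    return f"unload ('{sql}') to '{target}'"

stmt = build_unload(
    "select * from test_table",
    "s3-bucket-name",
    "parent-folder/child-folder/unload",
    prefix="unloadtest_",
)
```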
# Obtain Redshift query result as a DataFrame
df = rp.redshift_to_df("select * from test_table")
# Run queries on Redshift
rp.run_query("create table test_table_copy as select * from test_table")
# Download S3 file to local
rp.s3_to_file(s3_bucket, s3_key, "local_file_name.csv")
# Read S3 file in memory as DataFrame
df = rp.s3_to_df(s3_bucket, s3_key, delimiter=",") # csv file in this example
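Conceptually, reading an S3 file into a DataFrame means fetching the object's bytes and parsing them as delimited text. With the standard library alone, the parsing step looks roughly like this (a sketch of the concept; red-panda itself parses into a pandas DataFrame):

```python
import csv
import io

def bytes_to_rows(raw, delimiter=","):
    # Decode the raw object bytes and parse them as delimited rows.
    text = raw.decode("utf-8")
    return list(csv.reader(io.StringIO(text), delimiter=delimiter))

rows = bytes_to_rows(b"col1,col2\n1,3\n2,4\n")
```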
# Since we are only going to use Redshift functionalities, we can just use RedshiftUtils
from red_panda.red_panda import RedshiftUtils
ru = RedshiftUtils(redshift_conf)
# Run built-in Redshift admin queries, such as getting load errors
load_errors = ru.get_load_error(as_df=True)
# Use utility functions such as create_table to quickly create tables in Redshift
ru.create_table("test_table", redshift_column_definition, sortkey=["col2"], drop_first=True)
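A create_table call like the one above has to render the column-definition dict into Redshift DDL. A rough sketch of that translation, under the assumption that each column spec carries at least a "data_type" key (build_create_table is a hypothetical helper, not red-panda's implementation):

```python
def build_create_table(table, column_definition, sortkey=None):
    # Render a column-definition dict into a minimal CREATE TABLE statement.
    cols = ", ".join(
        f"{name} {spec['data_type']}" for name, spec in column_definition.items()
    )
    ddl = f"create table {table} ({cols})"
    if sortkey:
        ddl += f" sortkey({', '.join(sortkey)})"
    return ddl

ddl = build_create_table(
    "test_table",
    {"col1": {"data_type": "int"}, "col2": {"data_type": "int"}},
    sortkey=["col2"],
)
```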
For full API documentation, visit https://red-panda.readthedocs.io/en/latest/.
TODO
In no particular order:
- Support more data formats for copy; currently only delimited files are supported.
- Support more data formats for s3 to df; currently only delimited files are supported.
- Improve tests and docs.
- Infer Redshift data types from DataFrame dtypes more robustly.
- Explore using S3 Transfer Manager's upload_fileobj for df_to_s3 to take advantage of automatic multipart upload.
- Add COPY from S3 manifest file, in addition to COPY from S3 source path.
- Support multi-cloud.
- Take advantage of Redshift slices for parallel processing. Split files for COPY.
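The last TODO item, splitting files so COPY can parallelize across Redshift slices, amounts to partitioning rows into roughly equal chunks, one per slice. A naive round-robin sketch of the idea:

```python
def split_rows(rows, n_slices):
    # Round-robin rows into n_slices roughly equal chunks,
    # one per hypothetical Redshift slice.
    return [rows[i::n_slices] for i in range(n_slices)]

chunks = split_rows(list(range(10)), 4)
```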
File details
Details for the file red-panda-1.0.2.tar.gz.
File metadata
- Download URL: red-panda-1.0.2.tar.gz
- Upload date:
- Size: 22.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/49.2.1 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.8.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ec33379171c1ac665f597420c7b63a85c5ab9f1ef410e8f781a055bf4645a6ed |
| MD5 | 1133e89465e4cb01767a2a44f9958237 |
| BLAKE2b-256 | d3bad10935e9b3c4edbcde9a6b1703e155a3d12e6d1c9045a90978200780b5bd |
File details
Details for the file red_panda-1.0.2-py3-none-any.whl.
File metadata
- Download URL: red_panda-1.0.2-py3-none-any.whl
- Upload date:
- Size: 23.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/49.2.1 requests-toolbelt/0.9.1 tqdm/4.50.2 CPython/3.8.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 3875744a0ebd8547868c9f21b897585161c5c541e57682a67f99d16a5db11e33 |
| MD5 | 6d778cd4ed5dd0b3838b1a2b4a1c0b37 |
| BLAKE2b-256 | 1015bbd4d6f50b6cd701e370c45a86c2f5cf17a8a11560ebbb04ef0ae1d8d7c0 |