Line stream s3 files into ~uniform lumps in S3
Project description
Reads files from S3, selected by a key prefix, and streams their lines into a set of optionally gzip-compressed output files in S3, each output file limited by its (pre-compressed) size. The string "{}" in the output key is substituted with the (zero-based) index of each output file.
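The clumping rule described above can be sketched as a pure function: pack lines into lumps whose pre-compression byte size stays under a budget, and name each lump by substituting "{}" with its index. The function and parameter names here are illustrative, not the package's actual API.

```python
def clump(lines, key_template, blocksize=1048576):
    """Yield (key, text) lumps from a line stream.

    Each lump's raw (pre-compression) size stays within `blocksize`
    bytes, and "{}" in `key_template` is replaced with the lump's
    zero-based index. Illustrative sketch only.
    """
    lump, size, index = [], 0, 0
    for line in lines:
        n = len(line.encode("utf-8"))
        # Start a new lump when adding this line would exceed the budget.
        if lump and size + n > blocksize:
            yield key_template.replace("{}", str(index)), "".join(lump)
            lump, size = [], 0
            index += 1
        lump.append(line)
        size += n
    if lump:
        yield key_template.replace("{}", str(index)), "".join(lump)
```

For example, with a 4-byte budget, three 2-byte lines split into a two-line lump and a one-line lump, keyed `out/0.txt` and `out/1.txt`.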
s3lncoll: Line stream s3 files into ~uniform lumps in S3
Usage: s3lncoll {{arguments}} {{options}}
Arguments:
from [text] S3 URL prefix to clump
to [text] S3 URL for target clump ('{}' will be the count)
Options:
-h, --help Show this help message and exit
-H, --HELP Help for all sub-commands
-D, --debug Enable debug logging
-d, --delete Delete source files/keys
-j, --json Validate each line as JSON
-q, --quiet Be quiet, be vewy vewy quiet
-V, --version Report installed version
-z, --compress Compress (gzip) the target(s)
-b, --blocksize [int] Maximum size of pre-compressed output files in bytes. (default: 1048576)
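As the `-b`/`--blocksize` help text notes, the size cap applies to each lump *before* gzip compression, so with `-z` the objects actually written to S3 are typically smaller than the cap. A small sketch of that interaction, using only the standard library (the variable names are illustrative):

```python
import gzip

blocksize = 1024                  # cap on pre-compressed bytes (-b)
lump = ("x" * 64 + "\n") * 15     # 975 bytes of line data, under the cap
raw = lump.encode("utf-8")
assert len(raw) <= blocksize      # the cap is checked against raw bytes

compressed = gzip.compress(raw)   # what -z would upload
# Repetitive line data compresses well below the pre-compressed cap.
assert len(compressed) < len(raw)
```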
Download files
Download the file for your platform.
Source Distribution
s3lncoll-0.1.post10.tar.gz (23.8 kB)
File details
Details for the file s3lncoll-0.1.post10.tar.gz.
File metadata
- Download URL: s3lncoll-0.1.post10.tar.gz
- Upload date:
- Size: 23.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d0a1f0493fcab69149c584b0babecfe2886cd9e8809527ec289c226342b07701 |
| MD5 | 70bb49fa5c9a6c5c930ead7d4c38be7c |
| BLAKE2b-256 | 9b0e5e2345754152f20e5583a6554273f0e7b50ee22d2e35a2ceb4a155b0a04f |