# AWS Extensions for datapackage-pipelines
## Install
```
# clone the repo and install it with pip
git clone https://github.com/frictionlessdata/datapackage-pipelines-aws.git
cd datapackage-pipelines-aws
pip install -e .
```
## Usage
You can use datapackage-pipelines-aws as a plugin for [dpp](https://github.com/frictionlessdata/datapackage-pipelines#datapackage-pipelines). In `pipeline-spec.yaml` it looks like this:
```yaml
...
- run: aws.to_s3
```
You will need AWS credentials to be set up. See [the guide to setting up credentials](http://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/credentials.html).
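For example, credentials can be supplied through the standard AWS environment variables (the shared credentials file `~/.aws/credentials` works as well; the region value below is only a placeholder):

```shell
export AWS_ACCESS_KEY_ID=...      # your access key ID
export AWS_SECRET_ACCESS_KEY=...  # your secret access key
export AWS_DEFAULT_REGION=us-east-1
```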
### to_s3
Saves the DataPackage to AWS S3.
_Parameters:_
* `bucket` - Name of the bucket the DataPackage will be stored in (the bucket must already exist)
* `path` - Path (key/prefix) for the DataPackage. May contain format placeholders filled from `datapackage.json` properties, e.g. `my/example/path/{owner}/{name}/{version}`
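As an illustration, the placeholder expansion behaves like Python's `str.format` applied to the descriptor's properties (a sketch of the idea with made-up property values, not the processor's actual internals):

```python
# Hypothetical properties, as they might appear in datapackage.json
descriptor = {"owner": "my-name", "name": "py-package-name", "version": "latest"}

# The `path` parameter acts as a template over those properties
path_template = "path/{owner}/{name}/{version}"
path = path_template.format(**descriptor)
print(path)  # path/my-name/py-package-name/latest
```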
_Example:_
```yaml
datahub:
  title: datahub-to-s3
  pipeline:
    -
      run: load_metadata
      parameters:
        url: http://example.com/my-datapackage/datapackage.json
    -
      run: load_resource
      parameters:
        url: http://example.com/my-datapackage/datapackage.json
        resource: my-resource
    -
      run: aws.to_s3
      parameters:
        bucket: my.bucket.name
        path: path/{owner}/{name}/{version}
    -
      run: aws.to_s3
      parameters:
        bucket: my.another.bucket
        path: another/path/{version}
```
Executing the pipeline above will save the DataPackage under the following S3 prefixes (assuming owner `my-name`, name `py-package-name` and version `latest`):
* my.bucket.name/path/my-name/py-package-name/latest/...
* my.another.bucket/another/path/latest/...