# DataHub Extensions for datapackage-pipelines
## Install
```
pip install datapackage-pipelines-datahub
```
## Usage
You will need the [DataHub command-line tool](http://docs.datahub.io/publishers/cli/#installation) installed on your machine.
You can use datapackage-pipelines-datahub as a plugin for [dpp](https://github.com/frictionlessdata/datapackage-pipelines#datapackage-pipelines). In `pipeline-spec.yaml` it looks like this:
```yaml
...
- run: datahub.dump.to_datahub
```
*Note: to push datasets to the testing server, set `DATAHUB_ENV=testing`.*
### dump.to_datahub
Publishes a dataset to [DataHub.io](http://next.datahub.io/).
Parameters:
* `config` - full path to the `config.json` file. Default: `~/.config/datahub/config.json`.
  * Alternatively, set the `DATAHUB_JSON` environment variable to the path of the config file.
* `findability` - dataset visibility on DataHub.io. One of `public` (default), `private`, `unlisted`.
* other `data push` options, e.g. `schedule`, `name`, etc. See `data push -h` for the full list.
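Instead of passing `config` as a pipeline parameter, you can configure the plugin through the environment. A minimal sketch (the config path is illustrative):

```shell
# Use a non-default DataHub config file and target the testing server.
# Both variables are read when datahub.dump.to_datahub runs.
export DATAHUB_JSON="$HOME/.config/datahub/config.json"
export DATAHUB_ENV=testing

# Then run the pipeline as usual, e.g.:
# dpp run ./my-pipeline
```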
Example:
```yaml
datahub:
title: my-dataset
pipeline:
-
run: load_metadata
parameters:
url: http://example.com/my-datapackage/datapackage.json
-
run: load_resource
parameters:
url: http://example.com/my-datapackage/datapackage.json
resource: my-resource
-
run: datahub.dump.to_datahub
parameters:
findability: private
schedule: every 2d
config: config/config.json.datahq
```
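With the spec above saved as `pipeline-spec.yaml`, the pipeline is run with `dpp` as usual. A sketch (assuming dpp and this plugin are installed):

```shell
# Pipeline ids in dpp are "<directory>/<top-level key>"; for the spec
# above, saved in the current directory, that is "./datahub".
PIPELINE_ID="./datahub"

# Run it (commented out here since it needs dpp installed):
# dpp run "$PIPELINE_ID"
echo "would run: dpp run $PIPELINE_ID"
```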
## Hashes for `datapackage_pipelines_datahub-0.0.6-py2.py3-none-any.whl`

| Algorithm | Hash digest |
|---|---|
| SHA256 | `9a627afeab27661dd35c7867aa6af278eedf0a83b0597aaf57b2f1ac1dc19a56` |
| MD5 | `a8dbbef8f731fca43fd5d89d7ed0fe9a` |
| BLAKE2b-256 | `e5f99ef51cd6ad3764d2f22a4f19d0a17581722792a810185dba934dfd95dd4f` |