Utilities to work with [Data Packages].



Supports Python 2.7, 3.3, 3.4 and 3.5.

A model for working with [Data Packages].

[Data Packages]:
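
A Data Package is described by a `datapackage.json` descriptor. As a minimal illustration (the values below are made up; see the spec for the full field list), a descriptor needs a `name` and a list of `resources`, each pointing at its data via `data`, `path` or `url`:

```json
{
  "name": "gdp",
  "resources": [
    {"name": "data", "path": "data.csv"}
  ]
}
```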

## Install

```
pip install datapackage
```

## Examples

### Reading a Data Package and its resource

```python
import datapackage

dp = datapackage.DataPackage('')
brazil_gdp = [{'Year': int(row['Year']), 'Value': float(row['Value'])}
              for row in dp.resources[0].data if row['Country Code'] == 'BRA']

max_gdp = max(brazil_gdp, key=lambda x: x['Value'])
min_gdp = min(brazil_gdp, key=lambda x: x['Value'])
percentual_increase = max_gdp['Value'] / min_gdp['Value']

msg = (
    'The highest Brazilian GDP occurred in {max_gdp_year}, when it peaked at US$ '
    '{max_gdp:1,.0f}. This was {percentual_increase:1,.2f}% more than its '
    'minimum GDP in {min_gdp_year}.'
).format(max_gdp_year=max_gdp['Year'], max_gdp=max_gdp['Value'],
         percentual_increase=percentual_increase, min_gdp_year=min_gdp['Year'])

print(msg)
# The highest Brazilian GDP occurred in 2011, when it peaked at US$ 2,615,189,973,181. This was 172.44% more than its minimum GDP in 1960.
```

### Validating a Data Package

```python
import datapackage

dp = datapackage.DataPackage('')
try:
    dp.validate()
except datapackage.exceptions.ValidationError as e:
    # Handle the ValidationError
    pass
```

### Retrieving all validation errors from a Data Package

```python
import datapackage

# This metadata has two errors:
# * It has no "name", which is required;
# * Its resource has no "data", "path" or "url".
metadata = {
    'resources': [
        {},
    ],
}

dp = datapackage.DataPackage(metadata)

for error in dp.iter_errors():
    # Handle error
    pass
```

### Creating a Data Package

```python
import datapackage

dp = datapackage.DataPackage()
dp.metadata['name'] = 'my_sleep_duration'
dp.metadata['resources'] = [
    {'name': 'data'}
]

resource = dp.resources[0]
resource.metadata['data'] = [
    7, 8, 5, 6, 9, 7, 8
]

with open('datapackage.json', 'w') as f:
    f.write(dp.to_json())
# {"name": "my_sleep_duration", "resources": [{"data": [7, 8, 5, 6, 9, 7, 8], "name": "data"}]}
```
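
The descriptor written above is plain JSON, so it can be inspected with the standard library alone. A minimal sketch (the dict literal simply mirrors the output shown above):

```python
import json

# The same descriptor the example above writes to datapackage.json
descriptor = {
    "name": "my_sleep_duration",
    "resources": [{"data": [7, 8, 5, 6, 9, 7, 8], "name": "data"}],
}

# Serialize and parse it back to confirm it round-trips losslessly
roundtrip = json.loads(json.dumps(descriptor))
assert roundtrip == descriptor
print(roundtrip["resources"][0]["data"])  # [7, 8, 5, 6, 9, 7, 8]
```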

### Using a schema that's not in the local cache

```python
import datapackage
import datapackage.registry

# This constant points to the official registry URL
# You can use any URL or path that points to a registry CSV
registry_url = datapackage.registry.Registry.DEFAULT_REGISTRY_URL
registry = datapackage.registry.Registry(registry_url)

metadata = {}  # The datapackage.json file
schema = registry.get('tabular')  # Change to your schema ID

dp = datapackage.DataPackage(metadata, schema)
```

### Push/pull Data Package to storage

The package provides `push_datapackage` and `pull_datapackage` utilities to
push Data Packages to a storage backend and pull them back.

This functionality requires a `jsontableschema` storage plugin to be installed. See
the `jsontableschema` docs for more information. Suppose we have installed a
`jsontableschema-mystorage` plugin (not a real name).

Then we can push a Data Package to the storage and pull it back:

> All parameters should be used as keyword arguments.

```python
from datapackage import push_datapackage, pull_datapackage

# Push
push_datapackage(
    descriptor='descriptor_path',
    backend='mystorage', **<mystorage_options>)

# Pull
pull_datapackage(
    descriptor='descriptor_path', name='datapackage_name',
    backend='mystorage', **<mystorage_options>)
```

The options could be a SQLAlchemy engine, a BigQuery project and dataset name, etc.
A detailed description can be found in each plugin's documentation.

See concrete examples in the `jsontableschema` docs.

## Developer notes

These notes are intended to help people that want to contribute to this
package itself. If you just want to use it, you can safely ignore them.

### Updating the local schemas cache

We cache the schemas from <>
using git-subtree. To update them, run:

```
git subtree pull --prefix datapackage/schemas master --squash
```

Files for datapackage, version 0.6.1: datapackage-0.6.1.tar.gz (source, 20.9 kB).
