
Utilities to work with Data Packages as defined on specs.frictionlessdata.io


datapackage-py


A library for working with Data Packages.

Features

  • Package class for working with data packages
  • Resource class for working with data resources
  • Profile class for working with profiles
  • validate function for validating data package descriptors
  • infer function for inferring data package descriptors


Getting Started

Installation

The package uses semantic versioning, which means that major versions can include breaking changes. It's highly recommended to specify a datapackage version range in your setup/requirements file, e.g. datapackage>=1.0,<2.0.

$ pip install datapackage

OSX 10.14+

If you receive an error about the cchardet package when installing datapackage on Mac OSX 10.14 (Mojave) or higher, follow these steps:

  1. Make sure you have the latest Xcode by running the following in terminal: xcode-select --install
  2. Then go to https://developer.apple.com/download/more/ and download the command line tools. Note, this requires an Apple ID.
  3. Then, in terminal, run open /Library/Developer/CommandLineTools/Packages/macOS_SDK_headers_for_macOS_10.14.pkg. You can read more about these steps in this post.

Examples

The code examples in this readme require a Python 3.3+ interpreter. You can find even more examples in the examples directory.

from datapackage import Package

package = Package('datapackage.json')
package.get_resource('resource').read()

Documentation

Package

A class for working with data packages. It provides various capabilities like loading a local or remote data package, inferring a data package descriptor, saving a data package descriptor, and many more.

Suppose we have some local CSV files in a data directory. Let's create a data package based on this data using the Package class:

data/cities.csv

city,location
london,"51.50,-0.11"
paris,"48.85,2.30"
rome,"41.89,12.51"

data/population.csv

city,year,population
london,2017,8780000
paris,2017,2240000
rome,2017,2860000

First we create a blank data package:

from datapackage import Package

package = Package()

Now we're ready to infer a data package descriptor based on the data files we have. Because we have two CSV files, we use the glob pattern **/*.csv:

package.infer('**/*.csv')
package.descriptor
# {'profile': 'tabular-data-package',
#  'resources':
#    [{'path': 'data/cities.csv',
#      'profile': 'tabular-data-resource',
#      'encoding': 'utf-8',
#      'name': 'cities',
#      'format': 'csv',
#      'mediatype': 'text/csv',
#      'schema': {...}},
#     {'path': 'data/population.csv',
#      'profile': 'tabular-data-resource',
#      'encoding': 'utf-8',
#      'name': 'population',
#      'format': 'csv',
#      'mediatype': 'text/csv',
#      'schema': {...}}]}

The infer method has found all our files and inspected them to extract useful metadata like profile, encoding, format, Table Schema etc. Let's tweak it a little bit:

package.descriptor['resources'][1]['schema']['fields'][1]['type'] = 'year'
package.commit()
package.valid # True

Because our resources are tabular we can read them as tabular data:

package.get_resource('population').read(keyed=True)
# [{'city': 'london', 'year': 2017, 'population': 8780000},
#  {'city': 'paris', 'year': 2017, 'population': 2240000},
#  {'city': 'rome', 'year': 2017, 'population': 2860000}]

Let's save our data package to disk as a zip file:

package.save('datapackage.zip')

To continue working with the data package we just load it again, but this time using the local datapackage.zip:

package = Package('datapackage.zip')
# Continue the work

That was only a basic introduction to the Package class. To learn more, let's take a look at the Package class API reference.

Package(descriptor=None, base_path=None, strict=False, storage=None, **options)

Constructor to instantiate the Package class (see the example after the parameter list).

  • descriptor (str/dict) - data package descriptor as local path, url or object
  • base_path (str) - base path for all relative paths
  • strict (bool) - strict flag to alter validation behavior. Setting it to True leads to throwing errors on any operation with invalid descriptor
  • storage (str/tableschema.Storage) - storage name like sql or storage instance
  • options (dict) - storage options to use for storage creation
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (Package) - returns data package class instance
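For example, here are a few ways to instantiate a package (a minimal sketch; the file names and the inline descriptor are illustrative):

from datapackage import Package

package = Package('datapackage.json')  # local path or remote url of a descriptor
package = Package({'resources': [{'name': 'data', 'path': 'data.csv'}]}, base_path='data')
package = Package('datapackage.json', strict=True)  # raise on any operation with an invalid descriptor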

package.valid

  • (bool) - returns validation status. It is always True in strict mode.

package.errors

  • (Exception[]) - returns validation errors. It is always empty in strict mode.

package.profile

  • (Profile) - returns an instance of Profile class (see below).

package.descriptor

  • (dict) - returns data package descriptor

package.base_path

  • (str/None) - returns the data package base path

package.resources

  • (Resource[]) - returns an array of Resource instances (see below).

package.resource_names

  • (str[]) - returns an array of resource names.

package.get_resource(name)

Get data package resource by name.

  • name (str) - data resource name
  • (Resource/None) - returns a Resource instance or None if not found

package.add_resource(descriptor)

Add a new resource to the data package. The data package descriptor will be validated with the newly added resource descriptor. See the sketch after the parameter list.

  • descriptor (dict) - data resource descriptor
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (Resource/None) - returns the added Resource instance or None if not added
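A minimal sketch (the inline resource is illustrative):

from datapackage import Package

package = Package({'name': 'package'})
package.add_resource({'name': 'numbers', 'data': [['id'], [1], [2]]})
package.resource_names # ['numbers']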

package.remove_resource(name)

Remove a data package resource by name. The data package descriptor will be validated after the resource descriptor is removed.

  • name (str) - data resource name
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (Resource/None) - returns the removed Resource instance or None if not found

package.get_group(name)

Returns a group of tabular resources by name. For more information about groups see Group.

  • name (str) - name of a group of resources
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (Group/None) - returns a Group instance or None if not found

package.infer(pattern=False)

Argument pattern works only for local files

Infer a data package metadata. If pattern is not provided, only existing resources will be inferred (metadata like encoding, profile etc. will be added). If pattern is provided, new resources with file names matching the pattern will be added and inferred. It commits changes to the data package instance.

  • pattern (str) - glob pattern for new resources
  • (dict) - returns data package descriptor

package.commit(strict=None)

Update the data package instance if there are in-place changes in its descriptor.

  • strict (bool) - alter strict mode for further work
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (bool) - returns True on success and False if not modified

package = Package({
    'name': 'package',
    'resources': [{'name': 'resource', 'data': ['data']}]
})

package.name # package
package.descriptor['name'] = 'renamed-package'
package.name # package
package.commit()
package.name # renamed-package

package.save(target=None, storage=None, merge_groups=False, **options)

Saves this data package to storage if the storage argument is passed, saves this data package's descriptor to a JSON file if the target argument ends with .json, or saves this data package to a zip file otherwise.

  • target (string/filelike) - the file path or a file-like object where the contents of this Data Package will be saved into.
  • storage (str/tableschema.Storage) - storage name like sql or storage instance
  • merge_groups (bool) - save all the group's tabular resources into one bucket if a storage is provided (for example, into one SQL table). Read more about Group.
  • options (dict) - storage options to use for storage creation
  • (exceptions.DataPackageException) - raises if there was some error writing the package
  • (bool) - returns True on success

It creates a zip file at target with the contents of this Data Package and its resources. Every resource whose contents live in the local filesystem will be copied into the zip file. Consider the following Data Package descriptor:

{
    "name": "gdp",
    "resources": [
        {"name": "local", "format": "CSV", "path": "data.csv"},
        {"name": "inline", "data": [4, 8, 15, 16, 23, 42]},
        {"name": "remote", "url": "http://someplace.com/data.csv"}
    ]
}

The final structure of the zip file will be:

./datapackage.json
./data/local.csv

With the contents of datapackage.json being the same as the package's descriptor. The resources' file names are generated based on their name and format fields if they exist. If the resource has no name, resource-X will be used, where X is the index of the resource in the resources list (starting at zero). If the resource has a format, it'll be lowercased and appended to the name, becoming "name.format".
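To summarize the dispatch rules above (a sketch; engine is an assumption here - an SQLAlchemy engine, and the sql storage requires the SQL storage plugin to be installed):

from datapackage import Package

# engine: an SQLAlchemy engine (assumption), e.g. sqlalchemy.create_engine('sqlite://')
package = Package('datapackage.json')
package.save('datapackage.zip')             # target not ending in .json -> zip file
package.save('datapackage.json')            # target ending in .json -> descriptor only
package.save(storage='sql', engine=engine)  # storage passed -> save to storage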

Resource

A class for working with data resources. You can read or iterate tabular resources using the iter/read methods, and read any resource as bytes using the raw_iter/raw_read methods.

Suppose we have a local CSV file. It could also be inline data or a remote link - all are supported by the Resource class. But say it's data.csv for now:

city,location
london,"51.50,-0.11"
paris,"48.85,2.30"
rome,N/A

Let's create and read a resource. Because the resource is tabular we can use the resource.read method with the keyed option to get an array of keyed rows:

from datapackage import Resource

resource = Resource({'path': 'data.csv'})
resource.tabular # True
resource.read(keyed=True)
# [{'city': 'london', 'location': '51.50,-0.11'},
#  {'city': 'paris', 'location': '48.85,2.30'},
#  {'city': 'rome', 'location': 'N/A'}]
resource.headers
# ['city', 'location']
# (reading has to be started first)

As we can see, our locations are just strings, but they should be geopoints. Also, Rome's location is not available, but it's just an 'N/A' string instead of Python None. First we have to infer the resource metadata:

resource.infer()
resource.descriptor
# {'path': 'data.csv',
#  'profile': 'tabular-data-resource',
#  'encoding': 'utf-8',
#  'name': 'data',
#  'format': 'csv',
#  'mediatype': 'text/csv',
#  'schema': {'fields': [{...}, {...}], 'missingValues': ['']}}
resource.read(keyed=True)
# Fails with a data validation error

Let's fix the not-available location. There is a missingValues property in the Table Schema specification. As a first try, we set missingValues to 'N/A' in resource.descriptor['schema']. The resource descriptor can be changed in-place, but all changes must be committed with resource.commit():

resource.descriptor['schema']['missingValues'] = 'N/A'
resource.commit()
resource.valid # False
resource.errors
# [<ValidationError: "'N/A' is not of type 'array'">]

As good citizens, we've decided to check our resource descriptor's validity. And it's not valid! We should use an array for the missingValues property. Also, don't forget to include the empty string as a missing value:

resource.descriptor['schema']['missingValues'] = ['', 'N/A']
resource.commit()
resource.valid # True

All good. It looks like we're ready to read our data again:

resource.read(keyed=True)
# [{'city': 'london', 'location': [51.50, -0.11]},
#  {'city': 'paris', 'location': [48.85, 2.30]},
#  {'city': 'rome', 'location': None}]

Now we see that:

  • locations are arrays with numeric latitude and longitude
  • Rome's location is a native Python None

And because there are no errors on data reading, we can be sure that our data is valid against our schema. Let's save our resource descriptor:

resource.save('dataresource.json')

Let's check the newly created dataresource.json. It contains the path to our data file, the inferred metadata, and our missingValues tweak:

{
    "path": "data.csv",
    "profile": "tabular-data-resource",
    "encoding": "utf-8",
    "name": "data",
    "format": "csv",
    "mediatype": "text/csv",
    "schema": {
        "fields": [
            {
                "name": "city",
                "type": "string",
                "format": "default"
            },
            {
                "name": "location",
                "type": "geopoint",
                "format": "default"
            }
        ],
        "missingValues": [
            "",
            "N/A"
        ]
    }
}

If we decide to improve it even more, we can update the dataresource.json file and then open it again using the local file name:

resource = Resource('dataresource.json')
# Continue the work

That was only a basic introduction to the Resource class. To learn more, let's take a look at the Resource class API reference.

Resource(descriptor={}, base_path=None, strict=False, storage=None, **options)

Constructor to instantiate the Resource class (see the example after the parameter list).

  • descriptor (str/dict) - data resource descriptor as local path, url or object
  • base_path (str) - base path for all relative paths
  • strict (bool) - strict flag to alter validation behavior. Setting it to True leads to throwing errors on any operation with an invalid descriptor
  • storage (str/tableschema.Storage) - storage name like sql or storage instance
  • options (dict) - storage options to use for storage creation
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (Resource) - returns resource class instance
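For example, here are a few ways to instantiate a resource (a minimal sketch; the file names and inline data are illustrative):

from datapackage import Resource

resource = Resource('dataresource.json')  # local path or remote url of a descriptor
resource = Resource({'name': 'numbers', 'data': [['id'], [1], [2]]})
resource = Resource({'path': 'data.csv'}, strict=True)  # raise on any operation with an invalid descriptor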

resource.valid

  • (bool) - returns validation status. It is always True in strict mode.

resource.errors

  • (Exception[]) - returns validation errors. It is always empty in strict mode.

resource.profile

  • (Profile) - returns an instance of Profile class (see below).

resource.descriptor

  • (dict) - returns resource descriptor

resource.name

  • (str) - returns resource name

resource.inline

  • (bool) - returns True if the resource is inline

resource.local

  • (bool) - returns True if the resource is local

resource.remote

  • (bool) - returns True if the resource is remote

resource.multipart

  • (bool) - returns True if the resource is multipart

resource.tabular

  • (bool) - returns True if the resource is tabular

resource.source

  • (list/str) - returns the data or path property

The combination of resource.source and resource.inline/local/remote/multipart provides a predictable interface for working with resource data.
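A sketch of that interface for the inline and local cases (the file name is illustrative):

from datapackage import Resource

resource = Resource({'name': 'numbers', 'data': [['id'], [1]]})
resource.inline # True
resource.source # [['id'], [1]]

resource = Resource({'path': 'data.csv'})
resource.local  # True
resource.source # 'data.csv'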

resource.headers

Only for tabular resources (reading has to be started first or it will return None)

  • (str[]) - returns data source headers

resource.schema

Only for tabular resources

For tabular resources it returns Schema instance to interact with data schema. Read API documentation - tableschema.Schema.

  • (tableschema.Schema) - returns schema class instance

resource.iter(keyed=False, extended=False, cast=True, relations=False)

Only for tabular resources

Iterates through the table data and emits rows cast based on the table schema. Data casting can be disabled. See the example after the parameter list.

  • keyed (bool) - iter keyed rows
  • extended (bool) - iter extended rows
  • cast (bool) - disable data casting if False
  • relations (bool) - if True, foreign key fields will be checked and resolved to their references
  • (exceptions.DataPackageException) - raises any error that occurred in this process
  • (any[]/any{}) - yields rows:
    • [value1, value2] - base
    • {header1: value1, header2: value2} - keyed
    • [rowNumber, [header1, header2], [value1, value2]] - extended
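For example, the three row shapes side by side (a minimal sketch using inline data; the resource name is illustrative):

from datapackage import Resource

resource = Resource({'name': 'example', 'data': [['city'], ['london'], ['paris']]})
resource.infer()
list(resource.iter())              # [['london'], ['paris']]
list(resource.iter(keyed=True))    # [{'city': 'london'}, {'city': 'paris'}]
list(resource.iter(extended=True)) # each row with its number and headers, e.g. (2, ['city'], ['london'])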

resource.read(keyed=False, extended=False, cast=True, relations=False, limit=None)

Only for tabular resources

Read the whole table and return it as an array of rows. The count of rows can be limited.

  • keyed (bool) - flag to emit keyed rows
  • extended (bool) - flag to emit extended rows
  • cast (bool) - flag to disable data casting if False
  • relations (bool) - if True, foreign key fields will be checked and resolved to their references
  • limit (int) - integer limit of rows to return
  • (exceptions.DataPackageException) - raises any error that occurred in this process
  • (list[]) - returns array of rows (see table.iter)

resource.check_relations()

Only for tabular resources

It checks foreign keys and raises an exception if there are integrity issues.

  • (exceptions.RelationError) - raises if there are integrity issues
  • (bool) - returns True if no issues

resource.raw_iter(stream=False)

Iterate over data chunks as bytes. If stream is True, a file-like object is returned.

  • stream (bool) - if True, a file-like object will be returned
  • (bytes[]/filelike) - returns bytes[]/filelike

resource.raw_read()

Returns the resource data as bytes, as the sketch below shows.

  • (bytes) - returns resource data in bytes
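For example, with the data.csv file from the earlier Resource examples:

from datapackage import Resource

resource = Resource({'path': 'data.csv'})
resource.raw_read()
# b'city,location\nlondon,"51.50,-0.11"\n...'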

resource.infer(**options)

Infer resource metadata like name, format, mediatype, encoding, schema and profile. It commits these changes to the resource instance. See the example after the parameter list.

  • options - options will be passed to the tableschema.infer call, for more control over the results (e.g. for setting limit, confidence etc.)

  • (dict) - returns resource descriptor
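For example, to tune type inference (limit and confidence are the tableschema.infer options mentioned above):

from datapackage import Resource

resource = Resource({'path': 'data.csv'})
resource.infer(limit=500, confidence=0.85)  # passed through to tableschema.infer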

resource.commit(strict=None)

Update the resource instance if there are in-place changes in its descriptor.

  • strict (bool) - alter strict mode for further work
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (bool) - returns True on success and False if not modified

resource.save(target, storage=None, **options)

Saves this resource to storage if the storage argument is passed, or saves this resource's descriptor to a JSON file otherwise.

  • target (str) - path where to save a resource
  • storage (str/tableschema.Storage) - storage name like sql or storage instance
  • options (dict) - storage options to use for storage creation
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (bool) - returns True on success

Group

A class representing a group of tabular resources. Groups can be used to read multiple resources as one, or to export them, for example, to a database as one table. To define a group, add a group: <name> field to the corresponding resources. The group's metadata will be created from the "leading" resource's metadata (the first resource with the group name).

Suppose we have a data package with two tables partitioned by year and a shared schema stored separately:

cars-2017.csv

name,value
bmw,2017
tesla,2017
nissan,2017

cars-2018.csv

name,value
bmw,2018
tesla,2018
nissan,2018

cars.schema.json

{
    "fields": [
        {
            "name": "name",
            "type": "string"
        },
        {
            "name": "value",
            "type": "integer"
        }
    ]
}

datapackage.json

{
    "name": "datapackage",
    "resources": [
        {
            "group": "cars",
            "name": "cars-2017",
            "path": "cars-2017.csv",
            "profile": "tabular-data-resource",
            "schema": "cars.schema.json"
        },
        {
            "group": "cars",
            "name": "cars-2018",
            "path": "cars-2018.csv",
            "profile": "tabular-data-resource",
            "schema": "cars.schema.json"
        }
    ]
}

Let's read the resources separately:

package = Package('datapackage.json')
package.get_resource('cars-2017').read(keyed=True) == [
    {'name': 'bmw', 'value': 2017},
    {'name': 'tesla', 'value': 2017},
    {'name': 'nissan', 'value': 2017},
]
package.get_resource('cars-2018').read(keyed=True) == [
    {'name': 'bmw', 'value': 2018},
    {'name': 'tesla', 'value': 2018},
    {'name': 'nissan', 'value': 2018},
]

On the other hand, these resources are defined with a group: cars field, which means we can treat them as a group:

package = Package('datapackage.json')
package.get_group('cars').read(keyed=True) == [
    {'name': 'bmw', 'value': 2017},
    {'name': 'tesla', 'value': 2017},
    {'name': 'nissan', 'value': 2017},
    {'name': 'bmw', 'value': 2018},
    {'name': 'tesla', 'value': 2018},
    {'name': 'nissan', 'value': 2018},
]

We can use this approach when we need to save the data package to a storage, for example, to a SQL database. There is a merge_groups flag to enable the grouping behaviour:

package = Package('datapackage.json')
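# `engine` is assumed to be an SQLAlchemy engine, e.g. sqlalchemy.create_engine('sqlite://');
# the 'sql' storage also assumes the SQL storage plugin is installed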
package.save(storage='sql', engine=engine)
# SQL tables:
# - cars-2017
# - cars-2018
package.save(storage='sql', engine=engine, merge_groups=True)
# SQL tables:
# - cars

Group

This class doesn't have any public constructor. Use package.get_group.

group.name

  • (str) - returns the group name

group.headers

The same as resource.headers

group.schema

The same as resource.schema

group.iter(...)

The same as resource.iter

group.read(...)

The same as resource.read

Profile

A class to represent a JSON Schema profile from the Profiles Registry:

from datapackage import Profile, exceptions

profile = Profile('data-package')

profile.name # data-package
profile.jsonschema # JSON Schema contents

# `descriptor` is a retrieved data package descriptor (dict)
try:
    valid = profile.validate(descriptor)
except exceptions.ValidationError as exception:
    for error in exception.errors:
        pass  # handle individual error

Profile(profile)

Constructor to instantiate the Profile class.

  • profile (str) - profile name in registry or URL to JSON Schema
  • (exceptions.DataPackageException) - raises error if something goes wrong
  • (Profile) - returns profile class instance

profile.name

  • (str/None) - returns profile name if available

profile.jsonschema

  • (dict) - returns profile JSON Schema contents

profile.validate(descriptor)

Validate a data package descriptor against the profile.

  • descriptor (dict) - retrieved and dereferenced data package descriptor
  • (exceptions.ValidationError) - raises if not valid
  • (bool) - returns True if valid

validate

A standalone function to validate a data package descriptor:

from datapackage import validate, exceptions

# `descriptor` is a data package descriptor: a local path, remote url or dict
try:
    valid = validate(descriptor)
except exceptions.ValidationError as exception:
    for error in exception.errors:
        pass  # handle individual error

validate(descriptor)

Validate a data package descriptor.

  • descriptor (str/dict) - package descriptor (one of):
    • local path
    • remote url
    • object
  • (exceptions.ValidationError) - raises if not valid
  • (bool) - returns True if valid

infer

A standalone function to infer a data package descriptor.

from datapackage import infer

descriptor = infer('**/*.csv')
# {'profile': 'tabular-data-package',
#  'resources':
#    [{'path': 'data/cities.csv',
#      'profile': 'tabular-data-resource',
#      'encoding': 'utf-8',
#      'name': 'cities',
#      'format': 'csv',
#      'mediatype': 'text/csv',
#      'schema': {...}},
#     {'path': 'data/population.csv',
#      'profile': 'tabular-data-resource',
#      'encoding': 'utf-8',
#      'name': 'population',
#      'format': 'csv',
#      'mediatype': 'text/csv',
#      'schema': {...}}]}

infer(pattern, base_path=None)

Argument pattern works only for local files

Infer a data package descriptor.

  • pattern (str) - glob file pattern
  • (dict) - returns data package descriptor

Foreign Keys

The library supports foreign keys described in the Table Schema specification. This means that if your data package descriptor uses the resources[].schema.foreignKeys property for some resources, data integrity will be checked on reading operations.

Suppose we have a data package:

DESCRIPTOR = {
  'resources': [
    {
      'name': 'teams',
      'data': [
        ['id', 'name', 'city'],
        ['1', 'Arsenal', 'London'],
        ['2', 'Real', 'Madrid'],
        ['3', 'Bayern', 'Munich'],
      ],
      'schema': {
        'fields': [
          {'name': 'id', 'type': 'integer'},
          {'name': 'name', 'type': 'string'},
          {'name': 'city', 'type': 'string'},
        ],
        'foreignKeys': [
          {
            'fields': 'city',
            'reference': {'resource': 'cities', 'fields': 'name'},
          },
        ],
      },
    }, {
      'name': 'cities',
      'data': [
        ['name', 'country'],
        ['London', 'England'],
        ['Madrid', 'Spain'],
      ],
    },
  ],
}

Let's check relations for the teams resource:

from datapackage import Package

package = Package(DESCRIPTOR)
teams = package.get_resource('teams')
teams.check_relations()
# tableschema.exceptions.RelationError: Foreign key "['city']" violation in row "4"

As we can see, there is a foreign key violation. That's because our lookup table cities doesn't have the city of Munich, but we have a team from there. We need to fix it in the cities resource:

package.descriptor['resources'][1]['data'].append(['Munich', 'Germany'])
package.commit()
teams = package.get_resource('teams')
teams.check_relations()
# True

Fixed! But not only the check operation is available. We can use the relations argument for the resource.iter/read methods to dereference the resource's relations:

teams.read(keyed=True, relations=True)
# [{'id': 1, 'name': 'Arsenal', 'city': {'name': 'London', 'country': 'England'}},
#  {'id': 2, 'name': 'Real', 'city': {'name': 'Madrid', 'country': 'Spain'}},
#  {'id': 3, 'name': 'Bayern', 'city': {'name': 'Munich', 'country': 'Germany'}}]

Instead of a plain city name, we've got a dictionary containing the city data. The resource.iter/read methods will fail with the same error as resource.check_relations if there is an integrity issue, but only if the relations=True flag is passed.

Exceptions

exceptions.DataPackageException

Base class for all library exceptions. If there are multiple errors, they can be read from the exception object:

from datapackage import exceptions

try:
    pass  # lib action
except exceptions.DataPackageException as exception:
    if exception.multiple:
        for error in exception.errors:
            pass  # handle individual error

exceptions.LoadError

All loading errors.

exceptions.ValidationError

All validation errors.

exceptions.CastError

All value cast errors.

exceptions.RelationError

All integrity errors.

exceptions.StorageError

All storage errors.
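Because these classes all subclass DataPackageException, specific failure modes can be caught before the base class. A sketch (the file and resource names are illustrative):

from datapackage import Package, exceptions

try:
    package = Package('datapackage.json', strict=True)
    package.get_resource('data').read()  # assumes a resource named 'data' exists
except exceptions.CastError as exception:
    print('value cast failed:', exception)
except exceptions.DataPackageException as exception:
    print('something else went wrong:', exception)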

CLI

The CLI is a provisional API. If you use it as part of another program, please pin a concrete datapackage version in your requirements file.

The library ships with a simple CLI:

$ datapackage infer '**/*.csv'
Data package descriptor:
{'profile': 'tabular-data-package',
 'resources': [{'encoding': 'utf-8',
                'format': 'csv',
                'mediatype': 'text/csv',
                'name': 'data',
                'path': 'data/datapackage/data.csv',
                'profile': 'tabular-data-resource',
                'schema': {...}}]}

$ datapackage

Usage: cli.py [OPTIONS] COMMAND [ARGS]...

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  infer
  validate
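For example, to validate a descriptor (assuming the validate command takes a descriptor path, mirroring the infer command above):

$ datapackage validate datapackage.json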

Notes

Accessing data behind a proxy server

Before the package = Package("https://xxx.json") call, set these environment variables:

import os

os.environ["HTTP_PROXY"] = 'xxx'
os.environ["HTTPS_PROXY"] = 'xxx'

Contributing

The project follows the Open Knowledge International coding standards.

The recommended way to get started is to create and activate a project virtual environment. To install the package and development dependencies into the active environment:

$ make install

To run tests with linting and coverage:

$ make test

For linting, pylama (configured in pylama.ini) is used. At this stage it's already installed into your environment and can be used separately with more fine-grained control, as described in the documentation: https://pylama.readthedocs.io/en/latest/.

For example to sort results by error type:

$ pylama --sort <path>

For testing, tox (configured in tox.ini) is used. It's already installed into your environment and can be used separately with more fine-grained control, as described in the documentation: https://testrun.org/tox/latest/.

For example, to check a subset of tests against a Python 2 environment with increased verbosity (all positional arguments and options after -- will be passed to py.test):

$ tox -e py27 -- -v tests/<path>

Under the hood, tox uses pytest (configured in pytest.ini) and the coverage and mock packages. These packages are available only in the tox environments.


Changelog

Only breaking and the most important changes are described here. The full changelog and documentation for all released versions can be found in the nicely formatted commit history.

v1.8

v1.7

v1.6

  • Added support for custom request session

v1.5

Updated behaviour:

  • Added support for Python 3.7

v1.4

New API added:

  • added skip_rows support to the resource descriptor

v1.3

New API added:

  • property package.base_path is now publicly available

v1.2

Updated behaviour:

  • CLI command $ datapackage infer now outputs only a JSON-formatted data package descriptor.

v1.1

New API added:
