owid-catalog

A Pythonic API for working with OWID's data catalog.

Status: experimental, APIs likely to change

Quickstart

Install with pip install owid-catalog. Then you can begin exploring the experimental data catalog:

from owid import catalog

# look for Covid-19 data, return a data frame of matches
catalog.find('covid')

# load Covid-19 data from the Our World In Data namespace as a data frame
df = catalog.find('covid', namespace='owid').load()

# load data from a channel other than the default `garden`
df = catalog.find('bp__energy', channel='open_numbers').load()

Development

You need Python 3.8+, poetry, and make installed. Clone the repo, then run:

# run all unit tests and CI checks
make test

# watch for changes, then run all checks
make watch

Data types

Catalog

A catalog is an arbitrarily deep folder structure containing datasets. It can be local on disk, or remote.

Load the remote catalog

from owid.catalog import RemoteCatalog

# find the default OWID catalog and fetch the catalog index over HTTPS
cat = RemoteCatalog()

# get a data frame of matching tables across datasets
matches = cat.find('population')

# fetch a data frame for a specific match over HTTPS
t = cat.find_one('population', namespace='gapminder')

# load channels other than `garden`
cat = RemoteCatalog(channels=('garden', 'meadow', 'open_numbers'))
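
Catalogs can also be read straight from disk via LocalCatalog (referenced in the changelog below). A minimal sketch; the constructor argument and the shared find/find_one interface are assumptions:

from owid.catalog import LocalCatalog

# open a catalog checked out at a local path (path argument is an assumption)
cat = LocalCatalog('/path/to/catalog')
t = cat.find_one('population')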

Datasets

A dataset is a folder of tables containing metadata about the overall collection.

  • Metadata about the dataset lives in index.json
  • All tables in the folder must share a common format (CSV or Feather)
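
To open an existing dataset, construct a Dataset from its folder path, as the copy example further below also shows:

from owid.catalog import Dataset

# wrap a dataset folder that already exists on disk
ds = Dataset('/tmp/my_data')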

Create a new dataset

# make a folder and an empty index.json file
ds = Dataset.create('/tmp/my_data')
# choose CSV instead of feather for files
ds = Dataset.create('/tmp/my_data', format='csv')

Add a table to a dataset

# serialize a table using the table's name and the dataset's default format (feather)
# (e.g. /tmp/my_data/my_table.feather)
ds.add(table)
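
Putting the pieces together, a short sketch of building a table and adding it to a dataset; setting the table's name via metadata.short_name is an assumption about how ds.add picks the filename:

from owid.catalog import Dataset, Table

ds = Dataset.create('/tmp/my_data')
table = Table({'gdp': [1, 2, 3], 'country': ['AU', 'SE', 'CH']}).set_index('country')
table.metadata.short_name = 'my_table'  # assumed naming mechanism
ds.add(table)  # writes /tmp/my_data/my_table.feather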

Remove a table from a dataset

ds.remove('table_name')

Access a table

# load a table including metadata into memory
t = ds['my_table']

List tables

# the length is the number of tables discovered on disk
assert len(ds) > 0
# iterate over the tables discovered on disk
for table in ds:
    do_something(table)

Add metadata

# you need to manually save your changes
ds.title = "Very Important Dataset"
ds.description = "This dataset is a composite of blah blah blah..."
ds.save()

Copy a dataset

# copying a dataset copies all its files to a new location
ds_new = ds.copy('/tmp/new_data_path')

# copying a dataset is identical to copying its folder, so this works too
import shutil

shutil.copytree('/tmp/old_data', '/tmp/new_data_path')
ds_new = Dataset('/tmp/new_data_path')

Tables

Tables are essentially pandas DataFrames, but with metadata. All operations on them occur in memory, except for loading from and saving to disk. On disk, they are represented by a tabular file (Feather or CSV) and a JSON metadata file.

Each column of a Table carries a VariableMeta attribute, including its type, description, and unit. Be careful when manipulating columns: not all operations are currently supported. Supported: adding a column, renaming columns. Not supported: direct assignment to t.columns = ... or to index names t.columns.index = ....
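
A hedged illustration of the supported operations, assuming a table t with a gdp column as constructed in the next example:

# adding a derived column is supported
t['gdp_doubled'] = t['gdp'] * 2

# renaming columns is supported
t = t.rename(columns={'gdp': 'gdp_usd'})

# NOT supported: direct assignment such as t.columns = [...]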

Make a new table

# same API as DataFrames
t = Table({
    'gdp': [1, 2, 3],
    'country': ['AU', 'SE', 'CH']
}).set_index('country')

Add metadata about the whole table

t.title = 'Very important data'

Add metadata about a field

t.gdp.description = 'GDP measured in 2011 international $'

Add metadata about all fields at once

# sources and licenses are actually stored at the field level
t.sources = [
    Source(title='World Bank', url='https://www.worldbank.org/en/home')
]
t.licenses = [
    License('CC-BY-SA-4.0', url='https://creativecommons.org/licenses/by-sa/4.0/')
]

Save a table to disk

# save to /tmp/my_table.feather + /tmp/my_table.meta.json
t.to_feather('/tmp/my_table.feather')

# save to /tmp/my_table.csv + /tmp/my_table.meta.json
t.to_csv('/tmp/my_table.csv')

Load a table from disk

These work like the normal pandas readers, but if a my_table.meta.json file exists alongside the data, its metadata is read as well; otherwise the table is assumed to have no metadata:

t = Table.read_feather('/tmp/my_table.feather')

t = Table.read_csv('/tmp/my_table.csv')
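
A quick round-trip check combining the calls above; the assertion assumes metadata survives serialization via the .meta.json file, which is the intent of the format:

t.title = 'Very important data'
t.to_feather('/tmp/my_table.feather')

t2 = Table.read_feather('/tmp/my_table.feather')
assert t2.title == 'Very important data'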

Changelog

  • v0.2.7
    • Split datasets into channels (garden, meadow, open_numbers, ...) and make garden the default
    • Add .find_latest method to Catalog
  • v0.2.6
    • Add flag is_public for public/private datasets
    • Enforce snake_case for table, dataset and variable short names
    • Add fields published_by and published_at to Source
    • Added a list of supported and unsupported operations on columns
    • Updated pyarrow
  • v0.2.5
    • Fix ability to load remote CSV tables
  • v0.2.4
    • Update the default catalog URL to use a CDN
  • v0.2.3
    • Fix methods for finding and loading data from a LocalCatalog
  • v0.2.2
    • Repack frames to compact dtypes on Table.to_feather()
  • v0.2.1
    • Fix key typo used in version check
  • v0.2.0
    • Copy dataset metadata into tables, to make tables more traceable
    • Add API versioning, and a requirement to update if your version of this library is too old
  • v0.1.1
    • Add support for Python 3.8
  • v0.1.0
    • Initial release, including searching and fetching data from a remote catalog
