
cldfbench

Tooling to create CLDF datasets from existing data.


Overview

This package provides tools to curate cross-linguistic data, with the goal of packaging it as CLDF datasets.

In particular, it supports a workflow where:

  • "raw" source data is downloaded to a raw/ subdirectory,
  • and subsequently converted to one or more CLDF datasets in a cldf/ subdirectory, with the help of:
    • configuration data in an etc/ directory and
    • custom Python code (a subclass of cldfbench.Dataset which implements the workflow actions).

This workflow is supported via:

  • a command-line interface cldfbench, which calls the workflow actions as subcommands,
  • a cldfbench.Dataset base class, which must be subclassed in a custom module to hook custom code into the workflow.

With this workflow and the separation of the data into three directories, we want to provide a workbench for transparently deriving CLDF data from previously published data. In particular, we want to delineate clearly:

  • what forms part of the original or source data (raw),
  • what kind of information is added by the curators of the CLDF dataset (etc),
  • and what data was derived using the workbench (cldf).

Further reading

This paper introduces cldfbench and uses an extended, real-world example:

Forkel, R., & List, J.-M. (2020). CLDFBench: Give your cross-linguistic data a lift. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, et al. (Eds.), Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020) (pp. 6995-7002). Paris: European Language Resources Association (ELRA). [PDF]

Installation

cldfbench can be installed via pip - preferably in a virtual environment - by running:

pip install cldfbench

cldfbench provides some functionality that relies on Python packages which are not needed for its core operation. These dependencies are specified as extras and can be installed using syntax like:

pip install cldfbench[<extras>]

where <extras> is a comma-separated list of extra names.
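
For example, to install cldfbench with support for reading Excel files (assuming an extra named excel; the set of available extras may change between versions):

pip install cldfbench[excel]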

The command line interface cldfbench

Installing the Python package also installs a cldfbench command, available on the command line:

$ cldfbench -h
usage: cldfbench [-h] [--log-level LOG_LEVEL] COMMAND ...

optional arguments:
  -h, --help            show this help message and exit
  --log-level LOG_LEVEL
                        log level [ERROR|WARN|INFO|DEBUG] (default: 20)

available commands:
  Run "COMMAND -h" to get help for a specific command.

  COMMAND
    check               Run generic CLDF checks
    ...

As shown above, run cldfbench -h to get help, and cldfbench COMMAND -h to get help on individual subcommands, e.g. cldfbench new -h to read about the usage of the new subcommand.

Dataset discovery

Most cldfbench commands operate on an existing dataset (unlike new, which creates a new one). Datasets can be discovered in two ways:

  1. Via the Python module (i.e. the *.py file containing the Dataset subclass). To use this mode of discovery, pass the path to the Python module as the DATASET argument when required by a command.

  2. Via entry point and dataset ID. To use this mode, specify the name of the entry point as the value of the --entry-point option (or use the default name cldfbench.dataset) and the Dataset.id as the DATASET argument.

Discovery via entry point is particularly useful for commands that can operate on multiple datasets. To select all datasets advertising a given entry point, pass "_" (i.e. an underscore) as the DATASET argument.
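
For example, using the generic check subcommand shown above (cldfbench_mydataset.py and mydataset are hypothetical names):

$ cldfbench check cldfbench_mydataset.py   # discovery via the Python module
$ cldfbench check mydataset                # discovery via the default entry point
$ cldfbench check _                        # all datasets advertising the entry point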

Workflow

For a full example of the cldfbench curation workflow, see the tutorial.

Creating a skeleton for a new dataset directory

A directory containing stub entries for a dataset can be created by running

cldfbench new

This will create the following layout (where <ID> stands for the chosen dataset ID):

<ID>/
├── cldf               # A stub directory for the CLDF data
│   └── README.md
├── cldfbench_<ID>.py  # The python module, providing the Dataset subclass
├── etc                # A stub directory for the configuration data
│   └── README.md
├── metadata.json      # The metadata provided to the subcommand, serialized as JSON
├── raw                # A stub directory for the raw data
│   └── README.md
├── setup.cfg          # Python setup config, providing defaults for test integration
├── setup.py           # Python setup file, making the dataset "installable" 
├── test.py            # The python code to run for dataset validation
└── .github            # Configuration to integrate the validation with GitHub Actions
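
Since setup.py makes the new dataset an installable Python package, the standard editable install works, e.g. to make the dataset importable during development (run from within the <ID>/ directory):

pip install -e .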

Implementing CLDF creation

cldfbench provides tools to make CLDF creation simple. Still, each dataset is different, and so each dataset has to provide its own custom code for this step. This custom code goes into the cmd_makecldf method of the Dataset subclass in the dataset's Python module. (See also the API documentation of cldfbench.Dataset.)

Typically, this code will make use of one or more cldfbench.CLDFSpec instances, each of which describes what kind of CLDF to create. A CLDFSpec also gives access to a cldfbench.CLDFWriter instance, which wraps a pycldf.Dataset.

The main interfaces to these objects are:

  • cldfbench.Dataset.cldf_specs: a method returning specifications of all CLDF datasets that are created by the dataset,
  • cldfbench.Dataset.cldf_writer: a method returning an initialized CLDFWriter associated with a particular CLDFSpec.

cldfbench supports several scenarios of CLDF creation:

  • The typical use case is turning raw data into a single CLDF dataset. This requires instantiating just one CLDFWriter in the cmd_makecldf method, and the defaults of CLDFSpec will probably be fine. Since this is the most common and simplest case, it is supported with some extra "sugar": the initialized CLDFWriter is available as args.writer when cmd_makecldf is called (see the sketch after this list).
  • But it is also possible to create multiple CLDF datasets:
    • For a dataset containing both lexical and typological data, it may be appropriate to create a Wordlist and a StructureDataset. To do so, one would have to call cldf_writer twice, passing in an appropriate CLDFSpec each time. Note that if both CLDF datasets are created in the same directory, they can share the LanguageTable - but they would have to specify distinct file names for the ParameterTable, passing distinct values to CLDFSpec.data_fnames.
    • When creating multiple datasets of the same CLDF module, e.g. to split a large dataset into smaller chunks, care must be taken to also disambiguate the name of the metadata file, passing distinct values to CLDFSpec.metadata_fname.
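
For the simple, single-dataset case, the Python module might be fleshed out as in the following sketch (the dataset ID, table contents, and column values are hypothetical; the CLDFSpec defaults and the args.writer "sugar" are described above):

    import pathlib

    from cldfbench import Dataset as BaseDataset, CLDFSpec


    class Dataset(BaseDataset):
        dir = pathlib.Path(__file__).parent
        id = "mydataset"  # hypothetical dataset ID

        def cldf_specs(self):
            # One CLDF dataset, written to the cldf/ directory;
            # the defaults of CLDFSpec are often sufficient.
            return CLDFSpec(dir=self.cldf_dir, module="StructureDataset")

        def cmd_makecldf(self, args):
            # In the single-spec case, an initialized CLDFWriter is
            # available as args.writer; args.writer.cldf wraps the
            # underlying pycldf.Dataset.
            args.writer.cldf.add_component("LanguageTable")
            args.writer.objects["LanguageTable"].append(
                {"ID": "lang1", "Name": "Example language"})
            args.writer.objects["ValueTable"].append({
                "ID": "1",
                "Language_ID": "lang1",
                "Parameter_ID": "param1",
                "Value": "yes",
            })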

When creating CLDF, it is also often useful to have standard reference catalogs accessible, in particular Glottolog. See the section on Catalogs for a description of how this is supported by cldfbench.

Catalogs

Linking data to reference catalogs is a major goal of CLDF, thus cldfbench provides tools to make catalog access and maintenance easier. Catalog data must be accessible in local clones of the respective data repositories. cldfbench provides the following commands, illustrated below:

  • catconfig to create the clones and make them known through a configuration file,
  • catinfo to get an overview of the installed catalogs and their versions,
  • catupdate to update local clones from the upstream repositories.
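
For example, a one-time setup followed by occasional maintenance might look like:

$ cldfbench catconfig   # clone the catalogs and record them in a config file
$ cldfbench catinfo     # inspect the installed catalogs and their versions
$ cldfbench catupdate   # pull updates from the upstream repositories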

See the cldfbench API documentation for a list of reference catalogs which are currently supported.

Note: Cloning glottolog/glottolog - due to the deeply nested directories of the language classification - results in long path names. On Windows this may require disabling the maximum path length limitation.

Curating a dataset on GitHub

One of the design goals of CLDF was to specify a data format that plays well with version control. Thus, it's natural - and actually recommended - to curate a CLDF dataset in a version controlled repository. The most popular way to do this in a collaborative fashion is by using a git repository hosted on GitHub.

The directory layout supported by cldfbench caters to this use case in several ways:

  • Each directory contains a README.md file, which will be rendered as a human-readable description when browsing the repository on GitHub.
  • The .github directory contains configuration for continuous integration via GitHub Actions, providing continuous consistency checking of the data.

Archiving a dataset with Zenodo

Curating a dataset on GitHub also provides a simple way of archiving and publishing released versions of the data. You can hook up your repository with Zenodo (following this guide). Zenodo will then pick up any released package, assign a DOI to it, archive it, and make it accessible in the long term.

Some notes:

  • Hook-up with Zenodo requires the repository to be public (not private).
  • You should consider using an institutional account on GitHub and Zenodo to associate the repository with. Currently, only the user account that registered a repository on Zenodo can change the metadata of its releases later on.
  • Once released and archived with Zenodo, it's a good idea to add the DOI assigned by Zenodo to the release description on GitHub.
  • To make sure a release is picked up by Zenodo, the version number must start with a letter, e.g. "v1.0" - not "1.0".

Thus, with a setup as described here, you can make sure you create FAIR data.

Extending cldfbench

cldfbench can be extended or built upon in various ways, typically by customizing core functionality in new Python packages. To support particular types of raw data, you might want a custom Dataset class; to support a particular type of CLDF data, you would customize CLDFWriter.

In addition to extending cldfbench using the standard methods of object-oriented programming, there are two more ways of extending it: commands and dataset templates. Both are implemented using entry points. Packages which provide custom commands or dataset templates must therefore declare these in metadata that is made known to other Python packages (in particular the cldfbench package) upon installation.

Commands

A Python package (or a dataset) can provide additional subcommands to be run from cldfbench. For more information, see the commands README.
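
As a sketch, a custom subcommand is a Python module providing a run function and, optionally, a register function to add CLI options. The module, option, and entry point names below are assumptions for illustration; consult the commands README for the authoritative conventions:

    # mypackage/commands/hello.py - a hypothetical custom subcommand.
    """
    Say hello from a custom cldfbench subcommand.
    """

    def register(parser):
        # Add subcommand-specific CLI options (hypothetical option).
        parser.add_argument('--name', default='world')


    def run(args):
        # args.log is the logger cldfbench passes to subcommands.
        args.log.info('hello, {}!'.format(args.name))

The package containing such modules would then be advertised via an entry point, analogous to the cldfbench.scaffold example below.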

Custom dataset templates

A Python package can provide alternative dataset templates to be used with cldfbench new. Such templates are implemented by:

  • a subclass of cldfbench.Template,
  • which is advertised using an entry point cldfbench.scaffold:
    entry_points={
        'cldfbench.scaffold': [
            'template_name=mypackage.scaffold:DerivedTemplate',
        ],
    },
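
A minimal sketch of such a subclass might look as follows (the dirs attribute and the directory layout are assumptions about cldfbench.scaffold.Template's API; consult the API documentation before relying on them):

    # mypackage/scaffold.py - a hypothetical custom template.
    import pathlib

    from cldfbench.scaffold import Template


    class DerivedTemplate(Template):
        # Point cldfbench at an additional directory of template files
        # (assumed mechanism).
        dirs = Template.dirs + [pathlib.Path(__file__).parent / 'templates']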
