# pylexibank

Python library implementing the lexibank workbench.

`pylexibank` is a Python package providing functionality to curate and aggregate Lexibank datasets.
## Compatibility

At the core of the curation functionality provided by `pylexibank` lies integration with the metadata catalogs Glottolog, Concepticon and CLTS. Not all releases of these catalogs are compatible with all versions of `pylexibank`.

| pylexibank | Glottolog | Concepticon | CLTS  |
|------------|-----------|-------------|-------|
| 2.x        | >=4.x     | >=2.x       | 1.x   |
| 3.x        | >=4.x     | >=2.x       | >=2.x |
## Install

Since `pylexibank` has quite a few dependencies, installing it will pull in many other Python packages along with it. To avoid side effects for your default Python installation, we recommend installing it in a virtual environment.

You can then install `pylexibank` via pip, or in development mode following the instructions in CONTRIBUTING.md.

Installing `pylexibank` will also install `cldfbench`, which in turn provides the `cldfbench` command line tool. This command is used to run `pylexibank` functionality from the command line as subcommands. `cldfbench` is also used to manage reference catalogs, in particular Glottolog, Concepticon and CLTS. Thus, after installing `pylexibank` you should run

```
cldfbench catconfig
```

to make sure the catalog data is available locally and `pylexibank` knows about it.
## Usage

`pylexibank` can be used in two ways:

- The command line interface provides access mainly to the functionality for the Lexibank curation workflow.
- The `pylexibank` package can also be used like any other Python package in your own code, to access Lexibank data in a programmatic (and consistent) way.
### The `cmd_makecldf` method

The main goal of `pylexibank` is creating high-quality CLDF Wordlists. This happens in the custom `cmd_makecldf` method of a Lexibank dataset. To make this task easier, `pylexibank` provides the following (a minimal sketch follows the list):

- access to Glottolog and Concepticon data:
  - `args.glottolog.api` points to an instance of `CachingGlottologAPI` (a subclass of `pyglottolog.Glottolog`),
  - `args.concepticon.api` points to an instance of `CachingConcepticonAPI` (a subclass of `pyconcepticon.Concepticon`);
- fine-grained control over form manipulation via `Dataset.form_spec`, an instance of `pylexibank.FormSpec`, which can be customized per dataset. `FormSpec` is meant to capture the rules that were used when compiling the source data; for cases where the source data violates these rules, wholesale replacement by listing a lexeme in `etc/lexemes.csv` is recommended;
- support for additional information on lexemes, cognates, concepts and languages via subclassing the defaults in `pylexibank.models`;
- easy access to configuration data in a dataset's `etc_dir`;
- support for segmentation using the `segments` package with orthography profile(s):
  - If an orthography profile is available as `etc/orthography.tsv`, a `segments.Tokenizer` instance, initialized with this profile, will be available as `Dataset.tokenizer` and automatically used by `LexibankWriter.add_form`.
  - If a directory `etc/orthography/` exists, all `*.tsv` files in it will be considered orthography profiles, and a `dict` mapping filename stem to tokenizer will be available. Tokenizer selection can then be controlled in two ways:
    - passing a keyword `profile=FILENAME_STEM` in `Dataset.tokenizer()` calls,
    - providing orthography profiles for each language and letting `Dataset.tokenizer` choose the tokenizer by `item['Language_ID']`.
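To illustrate how these pieces fit together, here is a minimal sketch of a dataset module with a custom `cmd_makecldf` method. The dataset identifier, the `FormSpec` arguments and the keyword arguments passed to `add_form` are illustrative assumptions, not taken from a real dataset; consult existing Lexibank datasets for the exact usage.

```python
# A minimal, hypothetical dataset module. Identifier, FormSpec arguments and
# add_form keyword arguments are assumptions made for illustration.
import pathlib

from pylexibank import Dataset as BaseDataset
from pylexibank import FormSpec


class Dataset(BaseDataset):
    dir = pathlib.Path(__file__).parent  # curation directories (raw/, etc/, cldf/) live here
    id = "mydataset"                     # hypothetical dataset identifier

    # Rules assumed to have been used when compiling the source data; forms
    # violating them should instead be replaced wholesale via etc/lexemes.csv.
    form_spec = FormSpec(
        separators=";/,",            # split multiple forms listed in one value
        missing_data=("?", "-"),     # values to be treated as missing data
        strip_inside_brackets=True,  # drop bracketed comments from forms
    )

    def cmd_makecldf(self, args):
        # Access to the reference catalogs:
        glottolog = args.glottolog.api      # CachingGlottologAPI instance
        concepticon = args.concepticon.api  # CachingConcepticonAPI instance

        # Write one lexeme. If etc/orthography.tsv exists, add_form will run
        # the dataset's tokenizer to segment the form automatically; with
        # multiple profiles in etc/orthography/, a profile can be picked per
        # call, e.g. self.tokenizer(item, form, profile="profile_name").
        args.writer.add_form(
            Language_ID="lang1",   # hypothetical language identifier
            Parameter_ID="1277",   # hypothetical Concepticon concept set ID
            Value="hand ~ hándo",  # the raw value as found in the source
            Form="hándo",          # a single, cleaned-up form
        )
```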
### Programmatic access to Lexibank datasets

While some level of support for reading and writing any CLDF dataset is already provided by the `pycldf` package, `pylexibank` (building on `cldfbench`) adds another layer of abstraction which supports

- treating Lexibank datasets as Python packages (and managing them via `pip`),
- a multi-step curation workflow,
- aggregating collections of Lexibank datasets into a single SQLite database for efficient analysis.
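As a point of reference for the programmatic route, the following sketch reads the forms of a single dataset's CLDF data directly with `pycldf`; the metadata path is a placeholder for wherever the dataset's CLDF data lives.

```python
# Reading one Lexibank dataset's CLDF data with pycldf -- a sketch;
# the metadata path below is a placeholder.
from pycldf import Dataset

ds = Dataset.from_metadata("path/to/cldf/cldf-metadata.json")
for form in ds["FormTable"]:
    # Each row is a dict keyed by CLDF column names.
    print(form["Language_ID"], form["Parameter_ID"], form["Form"])
```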
#### Installable and `pylexibank`-enabled datasets

Turning a Lexibank dataset into a (`pip` installable) Python package is as simple as writing a setup script `setup.py`.

But to make the dataset available for curation via `pylexibank`, the dataset must provide a Python module

- containing a class derived from `pylexibank.Dataset`, which specifies
  - `Dataset.dir`: a directory relative to which the curation directories are located,
  - `Dataset.id`: an identifier of the dataset;
- which is advertised as a `lexibank.dataset` entry point in `setup.py`, e.g.

```python
entry_points={
    'lexibank.dataset': [
        'sohartmannchin=lexibank_sohartmannchin:Dataset',
    ]
},
```
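A module matching this entry point could look like the following sketch (the file layout and the use of `pathlib` mirror common practice, but are assumptions here):

```python
# lexibank_sohartmannchin/__init__.py -- a minimal sketch of the module
# advertised by the entry point above.
import pathlib

from pylexibank import Dataset as BaseDataset


class Dataset(BaseDataset):
    # Directory relative to which the curation directories (raw/, etc/, cldf/) are located:
    dir = pathlib.Path(__file__).parent
    # Identifier of the dataset:
    id = "sohartmannchin"
```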
Turning datasets into `pylexibank`-enabled Python packages has multiple advantages:

- Datasets can be installed from various sources, e.g. GitHub repositories.
- Requirements files can be used to "pin" particular versions of datasets for installation.
- Upon installation, datasets can be discovered programmatically (see the sketch below).
- Virtual environments can be used to manage projects which require different versions of the same dataset.
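Programmatic discovery works through the `lexibank.dataset` entry points; here is a sketch using `importlib.metadata` (the `group=` keyword requires Python 3.10+, or the `importlib_metadata` backport):

```python
# Discover all installed Lexibank datasets via their "lexibank.dataset"
# entry points and print basic information about each of them.
from importlib.metadata import entry_points

for ep in entry_points(group="lexibank.dataset"):
    dataset_cls = ep.load()  # the pylexibank.Dataset subclass
    dataset = dataset_cls()
    print(ep.name, dataset.id, dataset.dir)
```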
## Conventions

- Dataset identifiers should be lowercase and either
  - the database name, if this name is established and well-known (e.g. "abvd", "asjp"), or
  - `<author><languagegroup>` (e.g. "grollemundbantu").
- Datasets that require preprocessing with external programs (e.g. antiword, libreoffice) should store intermediate artifacts in the `./raw/` directory, and the `cmd_install` code should install from those, rather than requiring an external dependency.
- Declaring a dataset's dependence on `pylexibank` (see the `setup.py` sketch below):
  - specify minimum versions in `setup.py`, i.e. require `pylexibank>=1.x.y`;
  - specify exact versions in the dataset's `cldf-metadata.json` using the `prov:createdBy` property (`pylexibank` will take care of this when the CLDF is created via `lexibank makecldf`).
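Putting these versioning conventions together, a dataset's `setup.py` might look like the following sketch (package name and version numbers are placeholders):

```python
# setup.py -- a sketch combining the lexibank.dataset entry point with a
# minimum-version requirement on pylexibank; names and versions are placeholders.
from setuptools import setup

setup(
    name="lexibank_sohartmannchin",
    version="1.0",
    packages=["lexibank_sohartmannchin"],
    install_requires=["pylexibank>=3.0"],  # minimum version, per the convention above
    entry_points={
        "lexibank.dataset": [
            "sohartmannchin=lexibank_sohartmannchin:Dataset",
        ]
    },
)
```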
## Datasets on GitHub

GitHub provides a very good platform for collaborative curation of textual data such as Lexibank datasets.
Dataset curators are encouraged to make use of its features beyond plain version control, such as

- releases,
- README.md, LICENSE, CONTRIBUTORS.md.

Note that for datasets curated with `pylexibank`, summary statistics will be written to README.md as part of the `makecldf` command.

In addition to the support for collaboratively editing and versioning data, GitHub supports tying into additional services via webhooks. In particular, two of these services are relevant for Lexibank datasets:

- Continuous integration, e.g. via Travis-CI.
- Archiving with Zenodo. Notes:
  - When datasets are curated on GitHub and hooked up to Zenodo to trigger automatic deposits of releases, the release tag must start with a letter (otherwise the deposit will fail).
  - Additional tags can be added to provide context, e.g. when a release is triggered by a specific use case (for example the CLICS 2.0 release). This can be done using `git` as follows:

    ```
    git checkout tags/vX.Y.Z
    git tag -a "clics-2.0"
    git push origin --tags
    ```
## Attribution

There are multiple levels of contributions to a Lexibank dataset:

- Typically, Lexibank datasets are derived from published data (be it supplemental material of a paper or public databases). Attribution to this source dataset is given by specifying its full citation in the dataset's metadata and by adding the source title to the release title of the Lexibank dataset.
- Often the source dataset is itself an aggregation of data from other sources. If possible, these sources (and the corresponding references) are kept in the Lexibank dataset's CLDF; otherwise we refer to the source dataset for a description of its sources.
- Deriving a Lexibank dataset from a source dataset using the `pylexibank` curation workflow involves adding code, mapping to reference catalogs and, to some extent, also linguistic judgements. These contributions are listed in a dataset's CONTRIBUTORS.md and translate to the list of authors of released versions of the Lexibank dataset.