A Python Library for the Processing of Cross-Linguistic Data.
By Johann-Mattis List and Robert Forkel.
While pycldf provides a basic Python API to access cross-linguistic data
encoded in CLDF datasets,
cltoolkit goes one step further, turning the data into full-fledged Python objects rather than
shallow proxies for rows in a CSV file. Of course, as with
pycldf's ORM package, there is a trade-off
involved: convenient access and a more Pythonic API are gained at the expense of performance (in particular
memory footprint, but also data load time) and write access. But most of today's CLDF datasets (or aggregations
of these) can be processed with
cltoolkit on reasonable hardware in minutes rather than hours.
The main idea behind
cltoolkit is to make (aggregated) CLDF data easily amenable to the computation
of linguistic features in a general sense (e.g. typological features). This is done by
- providing the data to processing code as Python objects,
- providing a framework that makes feature computation
as simple as writing a Python function acting on a Language object.
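The second point can be illustrated with a minimal sketch: a "feature" is nothing more than a plain function that takes a language-like object and returns a value. The Language class below is a hypothetical stand-in for illustration only; cltoolkit's actual Language objects expose a richer set of attributes.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for a language object; the real cltoolkit class
# carries far more information (concepts, sounds, metadata, ...).
@dataclass
class Language:
    id: str
    forms: list = field(default_factory=list)

def number_of_forms(language):
    """A feature is just a plain Python function acting on a language object."""
    return len(language.forms)

lang = Language(id="kala1399", forms=["kan", "wasa", "ikan"])
print(number_of_forms(lang))  # → 3
```

Because features are ordinary functions, they can be collected, documented, and applied uniformly across all languages in an aggregation.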
In general, aggregated CLDF Wordlists provide limited (automated) comparability across datasets (e.g. one could compare the number of words per language in each dataset). A lot more can be done when datasets use CLDF reference properties to link to reference catalogs, i.e.
- link language varieties to Glottolog languoids,
- link senses to Concepticon concept sets,
- link sound segments to CLTS sounds.
cltoolkit objects exploit this extended comparability by distinguishing "senses" from "concepts" and "graphemes"
from "sounds", and by providing convenient access to comparable subsets of objects in an aggregation.
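The sense/concept distinction can be sketched in plain Python. In this hypothetical illustration (the dictionary keys and values are invented for the sketch, not cltoolkit's API), senses are dataset-specific glosses, and only those linked to the same Concepticon concept set become comparable across datasets:

```python
# Invented sample data: two datasets gloss "hand" differently, but both
# senses link to the same Concepticon concept set; one sense is unlinked.
senses = [
    {"id": "ds1-hand", "gloss": "the hand", "concepticon_id": "1277"},
    {"id": "ds2-hand", "gloss": "hand (of body)", "concepticon_id": "1277"},
    {"id": "ds1-mist", "gloss": "mist", "concepticon_id": None},
]

# Group senses by concept set: only linked senses join the comparable subset.
by_concept = {}
for sense in senses:
    cid = sense["concepticon_id"]
    if cid is not None:
        by_concept.setdefault(cid, []).append(sense["id"])

print(by_concept)  # → {'1277': ['ds1-hand', 'ds2-hand']}
```

The same logic applies to graphemes versus sounds: dataset-specific graphemes become comparable once they are mapped to CLTS sounds.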
See example.md for a walk-through of the typical workflow with cltoolkit.