DataLad Catalog
DataLad Catalog is a free and open source command line tool, with a Python API, that assists with the automatic generation of user-friendly, browser-based data catalogs from structured metadata. It is an extension to DataLad and forms part of the broader ecosystem of DataLad's distributed metadata handling and (meta)data publishing tools.
Acknowledgements
This software was developed with support from the German Federal Ministry of Education and Research (BMBF 01GQ1905), the US National Science Foundation (NSF 1912266), and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant SFB 1451 (431549029, INF project).
1. Online demo
Navigate to https://datalad.github.io/datalad-catalog/ to view a live demo of a catalog generated with DataLad Catalog.
This demo site is hosted via GitHub Pages and builds from the gh-pages branch of this repository.
2. How it works
DataLad Catalog can receive commands to create a new catalog, add and remove metadata entries to/from an existing catalog, serve an existing catalog locally, and more. Metadata can be provided to DataLad Catalog from any number of arbitrary metadata sources, as an aggregated set or as individual metadata items. DataLad Catalog has a dedicated schema (using the JSON Schema vocabulary) against which incoming metadata items are validated. This schema allows for standard metadata fields as one would expect for datasets of any kind (such as name, doi, url, description, license, authors, and more), as well as fields that support identification, versioning, dataset context and linkage, and file tree specification.
The process of generating a catalog, after metadata entry validation, involves:
- aggregation of the provided metadata into the catalog filetree, and
- generating the assets required to render the user interface in a browser.
The output is a set of structured metadata files, as well as a Vue.js-based browser interface that understands how to render this metadata in the browser. What is left for the user is to host this content on their platform of choice and to serve it for the world to see.
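For illustration, a schema-conforming metadata entry could look roughly like the following sketch. The field names echo those mentioned above, but the identifier, version, and all values are hypothetical placeholders; the authoritative field definitions live in the catalog schema itself, and real validation happens against that JSON Schema.

```python
import json

# A simplified, hypothetical dataset-level metadata entry. The actual
# catalog schema defines more fields (identification, versioning,
# linkage, file tree) and stricter types.
entry = {
    "type": "dataset",
    "dataset_id": "deabeb9b-7a37-4062-a1e0-8fcef7909609",  # placeholder id
    "dataset_version": "0.1.0",                            # placeholder version
    "name": "My example dataset",
    "description": "A minimal dataset entry for demonstration purposes",
    "license": {"name": "CC BY 4.0"},
    "authors": [{"name": "Jane Doe"}],
}

# A toy stand-in for validation: check that a few fields we treat as
# required here are present before handing the entry to the catalog.
required = {"type", "dataset_id", "dataset_version", "name"}
missing = required - entry.keys()
print(json.dumps(sorted(missing)))  # prints: []
```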
3. Install datalad-catalog
Step 1 - Setup and activate virtual environment
With your virtual environment manager of choice, create a virtual environment and ensure you have a recent version of Python installed. Then activate the environment. For example, with venv:
python -m venv my_catalog_env
source my_catalog_env/bin/activate
Step 2 - Install the package from PyPI
Run the following from your command line:
pip install datalad-catalog
If you are a developer and would like to contribute to the code, instead clone the code base from GitHub and install it with pip in editable mode, so that local changes take effect immediately:
git clone https://github.com/datalad/datalad-catalog.git
cd datalad-catalog
pip install -e .
Congratulations! You have now installed datalad-catalog.
Note on dependencies:
Because this is an extension to datalad and builds on its metadata handling functionality, the installation process also installs datalad and datalad-metalad as dependencies, although these do not have to be used as the only sources of metadata for a catalog.
While the catalog generation process does not expect data to be structured as DataLad datasets, it can still be very useful to do so when building a full (meta)data management pipeline from raw data to catalog publishing. For complete instructions on how to install datalad and git-annex, please refer to the DataLad Handbook.
Similarly, the metadata input to datalad-catalog can come from any source as long as it conforms to the catalog schema. While the catalog does not expect metadata originating only from datalad-metalad's extractors, this tool has advanced metadata handling capabilities that integrate seamlessly with DataLad datasets and the catalog generation process.
4. Generating a catalog
The overall catalog generation process actually starts several steps before the involvement of datalad-catalog. Steps include:
- curating data into datasets (a group of files in a hierarchical tree)
- adding metadata to datasets and files (the process for this, and the resulting metadata formats and content, vary widely depending on domain, file types, data availability, and more)
- extracting the metadata using an automated tool to output metadata items into a standardized and queryable set
- in the current context: translating the metadata into the catalog schema
- in the current context: using datalad-catalog to generate a catalog from the schema-conforming metadata
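The translation step above can be as simple as a small mapping function. In the sketch below, the source field names (title, identifier, homepage, creators) and all values are hypothetical; a real pipeline might instead use datalad-metalad translators or the JQ bindings mentioned later, and the target fields must conform to the catalog schema.

```python
# Hypothetical source metadata, e.g. as produced by some extractor.
source_record = {
    "title": "EEG study on auditory attention",
    "identifier": "10.0000/example-doi",          # placeholder DOI
    "homepage": "https://example.org/eeg-study",  # placeholder URL
    "creators": ["Jane Doe", "John Smith"],
}

def translate(record: dict) -> dict:
    """Map source fields onto (a subset of) catalog-schema-style fields."""
    return {
        "type": "dataset",
        "name": record["title"],
        "doi": record["identifier"],
        "url": record["homepage"],
        "authors": [{"name": n} for n in record["creators"]],
    }

catalog_entry = translate(source_record)
print(catalog_entry["name"])  # prints: EEG study on auditory attention
```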
The first four steps in this list can follow any procedures and use any tools to get the job done. Once they are completed, correctly formatted data can be passed, together with some configuration details, to datalad-catalog. This tool then provides several basic commands for catalog generation and customization. For example:
datalad catalog validate -m <path/to/input/data>
# Validate input data located at <path/to/input/data> according to the catalog's schema.
datalad catalog create -c <path/to/catalog/directory> -m <path/to/input/data>
# Create a catalog at location <path/to/catalog/directory>, using input data located at <path/to/input/data>.
datalad catalog add -c <path/to/catalog/directory> -m <path/to/input/data>
# Add metadata to an existing catalog at location <path/to/catalog/directory>, using input data located at <path/to/input/data>.
datalad catalog set-super -c <path/to/catalog/directory> -i <dataset_id> -v <dataset_version>
# Set the superdataset of an existing catalog at location <path/to/catalog/directory>, where the superdataset id and version are provided as arguments. The superdataset will be the first dataset displayed when navigating to the root URL of a catalog.
datalad catalog serve -c <path/to/catalog/directory>
# Serve the content of the catalog at location <path/to/catalog/directory> via a local HTTP server.
datalad catalog workflow-new -c <path/to/catalog/directory> -d <path/to/superdataset>
# Run a workflow for recursive metadata extraction (using datalad-metalad), translating metadata to the catalog schema (using JQ bindings), and adding the translated metadata to a new catalog.
datalad catalog workflow-update -c <path/to/catalog/directory> -d <path/to/superdataset> -s <path/to/subdataset>
# Run a workflow for updating a catalog after registering a subdataset to the superdataset which the catalog represents. This workflow includes extraction (using datalad-metalad), translating metadata to the catalog schema (using JQ bindings), and adding the translated metadata to the existing catalog.
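The metadata passed via -m in the commands above is simply structured, schema-conforming text. As an illustrative sketch, such an input file could be prepared programmatically like this; the items, identifiers, and the one-JSON-object-per-line layout are assumptions for demonstration, and real items should first pass datalad catalog validate:

```python
import json
import tempfile
from pathlib import Path

# Two placeholder metadata items; real items must validate against the
# catalog schema (see `datalad catalog validate`).
items = [
    {"type": "dataset", "dataset_id": "id-1", "dataset_version": "v1",
     "name": "Dataset one"},
    {"type": "dataset", "dataset_id": "id-2", "dataset_version": "v1",
     "name": "Dataset two"},
]

# Write one JSON object per line to a metadata file.
metadata_file = Path(tempfile.mkdtemp()) / "metadata.jsonl"
with metadata_file.open("w") as f:
    for item in items:
        f.write(json.dumps(item) + "\n")

# The resulting file could then be passed to, e.g.:
#   datalad catalog add -c <path/to/catalog/directory> -m metadata.jsonl
print(metadata_file.read_text().count("\n"))  # prints: 2
```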
5. Tutorial
To explore the basic functionality of datalad-catalog, please refer to these tutorials.
6. An example workflow
The DataLad ecosystem provides a complete set of free and open source tools that, together, provide full control over dataset/file access and distribution, version control, provenance tracking, metadata addition/extraction/aggregation, and catalog generation.
DataLad itself can be used for decentralised management of data as lightweight, portable and extensible representations. DataLad MetaLad can extract structured high- and low-level metadata and associate it with these datasets or with individual files. And at the end of the workflow, DataLad Catalog can turn the structured metadata into a user-friendly data browser.
Importantly, DataLad Catalog can operate independently as well. Since it provides its own schema in a standard vocabulary, any metadata that conforms to this schema can be submitted to the tool in order to generate a catalog. Metadata items do not necessarily have to be derived from DataLad datasets, and the metadata extraction does not have to be conducted via DataLad MetaLad.
Even so, the provided set of tools can be particularly powerful when used together in a distributed (meta)data management pipeline.
7. Contributing
Feedback / comments
Please create a new issue if you have any feedback, comments, or requests.
Developer requirements
If you'd like to contribute as a developer, you need to install a number of extra dependencies:
cd datalad-catalog
pip install -r requirements-devel.txt
This installs sphinx and related packages for documentation building, coverage for code coverage, and pytest for testing.
Contribution process
To make a contribution to the code or documentation, please:
- create an issue describing the bug/feature,
- fork the project repository,
- create a branch from main,
- check that tests succeed: from the project root directory, run pytest,
- commit your changes,
- push to your fork,
- create a pull request with a clear description of the changes.
Contributors ✨
Thanks goes to the wonderful people who have contributed to this project. This project follows the all-contributors specification. Contributions of any kind are welcome!