DataLad Catalog


DataLad Catalog is a free and open source command line tool, with a Python API, that assists with the automatic generation of user-friendly, browser-based data catalogs from structured metadata. It is an extension to DataLad and forms part of the broader ecosystem of DataLad's distributed metadata handling and (meta)data publishing tools.

Acknowledgements

This software was developed with support from the German Federal Ministry of Education and Research (BMBF 01GQ1905), the US National Science Foundation (NSF 1912266), and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant SFB 1451 (431549029, INF project).

1. Online demo

Navigate to https://datalad.github.io/datalad-catalog/ to view a live demo of a catalog generated with DataLad Catalog.

This demo site is hosted via GitHub Pages and builds from the gh-pages branch of this repository.

2. How it works

DataLad Catalog can receive commands to create a new catalog, add and remove metadata entries to/from an existing catalog, serve an existing catalog locally, and more. Metadata can be provided to DataLad Catalog from any number of arbitrary metadata sources, as an aggregated set or as individual metadata items. DataLad Catalog has a dedicated schema (using the JSON Schema vocabulary) against which incoming metadata items are validated. This schema allows for standard metadata fields as one would expect for datasets of any kind (such as name, doi, url, description, license, authors, and more), as well as fields that support identification, versioning, dataset context and linkage, and file tree specification.
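To make the schema description concrete, a minimal dataset-level metadata record might look as follows. This is an illustrative sketch only: the field names used here (`type`, `dataset_id`, `dataset_version`, `name`, `description`) are assumptions based on the description above, and the authoritative list of required fields is defined by the catalog schema itself.

```shell
# Write a hypothetical, minimal dataset-level metadata record.
# Field names are assumptions; consult the catalog schema for the real
# required fields.
cat > dataset_entry.json <<'EOF'
{
  "type": "dataset",
  "dataset_id": "deabeb9b-7abc-4123-a1e0-8fcef7909609",
  "dataset_version": "0.0.1",
  "name": "An example dataset",
  "description": "A dataset described by a hand-written metadata record"
}
EOF

# Check that the record is at least well-formed JSON before handing it to
# `datalad catalog validate`
python3 -c "import json; print(sorted(json.load(open('dataset_entry.json'))))"
```

A record like this would then be validated against the catalog schema before being added to a catalog.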

The process of generating a catalog, after metadata entry validation, involves:

  1. aggregation of the provided metadata into the catalog filetree, and
  2. generating the assets required to render the user interface in a browser.

The output is a set of structured metadata files, as well as a Vue.js-based browser interface that understands how to render this metadata in the browser. What is left for the user is to host this content on their platform of choice and to serve it for the world to see.
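Since the output is plain static content, it can be published the same way the demo site above is: from a gh-pages branch. A minimal sketch, in which `my_catalog` is a hypothetical output directory of `datalad catalog create` (a placeholder page stands in for the generated assets) and the remote URL is a placeholder:

```shell
# Stand-in for a generated catalog directory
mkdir -p my_catalog
printf '<html><body>catalog placeholder</body></html>\n' > my_catalog/index.html

# Put the catalog content on a gh-pages branch
git -C my_catalog init -q
git -C my_catalog checkout -q -b gh-pages
git -C my_catalog add -A
git -C my_catalog -c user.email=you@example.com -c user.name="Your Name" \
    commit -q -m "Publish catalog"

# Then add your GitHub remote, push, and enable Pages for the gh-pages branch:
#   git -C my_catalog remote add origin git@github.com:<user>/<repo>.git
#   git -C my_catalog push -u origin gh-pages
```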


3. Install datalad-catalog

Step 1 - Setup and activate virtual environment

Ensure you have a recent version of Python installed, then create and activate a virtual environment with your virtual environment manager of choice. E.g. with venv:

python -m venv my_catalog_env
source my_catalog_env/bin/activate

Step 2 - Install the package from PyPI

Run the following from your command line:

pip install datalad-catalog

If you are a developer and would like to contribute to the code, instead clone the code base from GitHub and install it with pip in editable mode, so that your local changes take effect:

git clone https://github.com/datalad/datalad-catalog.git
cd datalad-catalog
pip install -e .

Congratulations! You have now installed datalad-catalog.

Note on dependencies:

Because this is an extension to datalad and builds on metadata handling functionality, the installation process also installs datalad and datalad-metalad as dependencies, although these do not have to be used as the only sources of metadata for a catalog.

While the catalog generation process does not expect data to be structured as DataLad datasets, it can still be very useful to do so when building a full (meta)data management pipeline from raw data to catalog publishing. For complete instructions on how to install datalad and git-annex, please refer to the DataLad Handbook.

Similarly, the metadata input to datalad-catalog can come from any source as long as it conforms to the catalog schema. While the catalog does not expect metadata originating only from datalad-metalad's extractors, this tool has advanced metadata handling capabilities that will integrate seamlessly with DataLad datasets and the catalog generation process.

4. Generating a catalog

The overall catalog generation process starts several steps before datalad-catalog becomes involved. The steps include:

  1. curating data into datasets (a group of files in a hierarchical tree)
  2. adding metadata to datasets and files (the process for this and the resulting metadata formats and content vary widely depending on domain, file types, data availability, and more)
  3. extracting the metadata using an automated tool to output metadata items into a standardized and queryable set
  4. in the current context: translating the metadata into the catalog schema
  5. in the current context: using datalad-catalog to generate a catalog from the schema-conforming metadata

The first four steps in this list can follow any procedures and use any tools that get the job done. Once they are complete, correctly formatted data can be passed, together with some configuration details, to datalad-catalog. This tool then provides several basic commands for catalog generation and customization. For example:

datalad catalog validate -m <path/to/input/data>
# Validate input data located at <path/to/input/data> according to the catalog's schema.

datalad catalog create -c <path/to/catalog/directory> -m <path/to/input/data>
# Create a catalog at location <path/to/catalog/directory>, using input data located at <path/to/input/data>.

datalad catalog add -c <path/to/catalog/directory> -m <path/to/input/data>
# Add metadata to an existing catalog at location <path/to/catalog/directory>, using input data located at <path/to/input/data>.

datalad catalog set-super -c <path/to/catalog/directory> -i <dataset_id> -v <dataset_version>
# Set the superdataset of an existing catalog at location <path/to/catalog/directory>, where the superdataset id and version are provided as arguments. The superdataset will be the first dataset displayed when navigating to the root URL of a catalog.

datalad catalog serve -c <path/to/catalog/directory>
# Serve the content of the catalog at location <path/to/catalog/directory> via a local HTTP server.

datalad catalog workflow-new -c <path/to/catalog/directory> -d <path/to/superdataset>
# Run a workflow for recursive metadata extraction (using datalad-metalad), translating metadata to the catalog schema (using JQ bindings), and adding the translated metadata to a new catalog.

datalad catalog workflow-update -c <path/to/catalog/directory> -d <path/to/superdataset> -s <path/to/subdataset>
# Run a workflow for updating a catalog after registering a subdataset to the superdataset which the catalog represents. This workflow includes extraction (using datalad-metalad), translating metadata to the catalog schema (using JQ bindings), and adding the translated metadata to the existing catalog.
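The basic commands above can be chained into a small script that builds a catalog from a single schema-conforming metadata file. Everything below is a hedged sketch: the file name, catalog path, and record contents are placeholders, and the datalad calls are guarded so the script is a no-op where datalad-catalog is not installed.

```shell
# Hypothetical inputs; real metadata comes from your extraction/translation
# pipeline and must conform to the catalog schema.
METADATA=metadata.jsonl
CATALOG=my_new_catalog
printf '%s\n' \
  '{"type": "dataset", "dataset_id": "abc-123", "dataset_version": "0.0.1", "name": "Example"}' \
  > "$METADATA"

# Only attempt the catalog commands if datalad-catalog is available
if command -v datalad >/dev/null 2>&1; then
    datalad catalog validate -m "$METADATA" &&
    datalad catalog create -c "$CATALOG" -m "$METADATA" ||
    echo "catalog sketch did not complete (placeholder metadata)"
    # finally, preview the result locally:
    #   datalad catalog serve -c "$CATALOG"
else
    echo "datalad not found; skipping catalog commands"
fi
```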

5. Tutorial

To explore the basic functionality of datalad-catalog, please refer to these tutorials.

6. An example workflow

The DataLad ecosystem provides a complete set of free and open source tools that, together, provide full control over dataset/file access and distribution, version control, provenance tracking, metadata addition/extraction/aggregation, and catalog generation.

DataLad itself can be used for decentralised management of data as lightweight, portable and extensible representations. DataLad MetaLad can extract structured high- and low-level metadata and associate it with these datasets or with individual files. And at the end of the workflow, DataLad Catalog can turn the structured metadata into a user-friendly data browser.

Importantly, DataLad Catalog can operate independently as well. Since it provides its own schema in a standard vocabulary, any metadata that conforms to this schema can be submitted to the tool in order to generate a catalog. Metadata items do not necessarily have to be derived from DataLad datasets, and the metadata extraction does not have to be conducted via DataLad MetaLad.

Even so, the provided set of tools can be particularly powerful when used together in a distributed (meta)data management pipeline.


7. Contributing

Feedback / comments

Please create a new issue if you have any feedback, comments, or requests.

Developer requirements

If you'd like to contribute as a developer, you need to install a number of extra dependencies:

cd datalad-catalog
pip install -r requirements-devel.txt

This installs sphinx and related packages for documentation building, coverage for code coverage, and pytest for testing.

Contribution process

To make a contribution to the code or documentation, please:

  • create an issue describing the bug or feature,
  • fork the project repository,
  • create a branch from main,
  • check that tests succeed (run pytest from the project root directory),
  • commit your changes,
  • push to your fork, and
  • create a pull request with a clear description of the changes.

Contributors ✨

Thanks goes to these wonderful people:

Stephan Heunis
Alex Waite
Julian Kosciessa
Adina Wagner
Yaroslav Halchenko
Michael Hanke
Benjamin Poldrack
Christian Mönch
Michał Szczepanik
Laura Waite
Leonardo Muller-Rodriguez

This project follows the all-contributors specification. Contributions of any kind welcome!
