

Project description


Metldata

metldata - A framework for handling metadata based on ETL, CQRS, and event sourcing.

Description

Metldata is a framework for handling the entire lifetime of metadata. It addresses a complex combination of challenges that makes it especially suitable for public archives of sensitive data:

Figure 1 | Overview of the combination of challenges during metadata handling.

Immutability

It is guaranteed that data entries do not change over time, making reproducibility possible without having to rely on local snapshots.

Accessibility

A stable accession is assigned to each resource. Together with the immutability property, this guarantees that you will always get the same data when querying with the same accession.

Corrections, Improvements, Extensions

Even though data is stored in an immutable way, metldata still allows for corrections, improvements, and extensions of submitted data. This is achieved by not just storing the current state of a submission but by persisting a version history. Modifications are thus realized by issuing a new version of the submission without affecting the content of existing versions.

Transparency

The version history not only resolves the conflict between immutability and the need to evolve and adapt data, it also makes the changes transparent to users relying on the data.

Multiple Representations

Often, the requirements regarding the structure and content of data differ depending on the use case and the audience. Metldata accounts for this by providing a configurable workflow engine for transforming submitted metadata into multiple representations of that data.

GDPR Compliance

The GDPR gives data subjects the right to request the deletion of their data. Metldata complies with this demand; however, only entire versions of a submission can be deleted. The associated accessions stay available so that users are informed that the associated data is no longer available. The guarantees for immutability and stability of accessions are thus not violated, even though data might become unavailable.

Installation

We recommend using the provided Docker container.

A pre-built version is available on Docker Hub:

docker pull ghga/metldata:2.1.2

Or you can build the container yourself from the ./Dockerfile:

# Execute in the repo's root dir:
docker build -t ghga/metldata:2.1.2 .

For production-ready deployment, we recommend using Kubernetes; however, for simple use cases, you could run the service with Docker on a single server:

# The entrypoint is preconfigured:
docker run -p 8080:8080 ghga/metldata:2.1.2 --help

If you prefer not to use containers, you may install the service from source:

# Execute in the repo's root dir:
pip install .

# To run the service:
metldata --help

Configuration

Parameters

The service requires the following configuration parameters:

  • log_level (string): The minimum log level to capture. Must be one of: ["CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG", "TRACE"]. Default: "INFO".

  • service_name (string): Default: "metldata".

  • service_instance_id (string, required): A string that uniquely identifies this instance across all instances of this service. A globally unique Kafka client ID will be created by concatenating the service_name and the service_instance_id.

    Examples:

    "germany-bw-instance-001"
    
  • log_format: If set, will replace JSON formatting with the specified string format. If not set, has no effect. In addition to the standard attributes, the following can also be specified: timestamp, service, instance, level, correlation_id, and details. Default: null.

    • Any of

      • string

      • null

    Examples:

    "%(timestamp)s - %(service)s - %(level)s - %(message)s"
    
    "%(asctime)s - Severity: %(levelno)s - %(msg)s"
    
  • log_traceback (boolean): Whether to include exception tracebacks in log messages. Default: true.

  • db_connection_str (string, format: password, required): MongoDB connection string. Might include credentials. For more information see: https://naiveskill.com/mongodb-connection-string/.

    Examples:

    "mongodb://localhost:27017"
    
  • db_name (string, required): Name of the database located on the MongoDB server.

    Examples:

    "my-database"
    
  • kafka_servers (array, required): A list of connection strings to connect to Kafka bootstrap servers.

    • Items (string)

    Examples:

    [
        "localhost:9092"
    ]
    
  • kafka_security_protocol (string): Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL. Must be one of: ["PLAINTEXT", "SSL"]. Default: "PLAINTEXT".

  • kafka_ssl_cafile (string): Certificate Authority file path containing certificates used to sign broker certificates. If a CA is not specified, the default system CA will be used if found by OpenSSL. Default: "".

  • kafka_ssl_certfile (string): Optional filename of client certificate, as well as any CA certificates needed to establish the certificate's authenticity. Default: "".

  • kafka_ssl_keyfile (string): Optional filename containing the client private key. Default: "".

  • kafka_ssl_password (string, format: password): Optional password to be used for the client private key. Default: "".

  • generate_correlation_id (boolean): A flag, which, if False, will result in an error when inbound requests don't possess a correlation ID. If True, requests without a correlation ID will be assigned a newly generated ID in the correlation ID middleware function. Default: true.

    Examples:

    true
    
    false
    
  • kafka_max_message_size (integer): The largest message size that can be transmitted, in bytes. Only services that have a need to send/receive larger messages should set this. Exclusive minimum: 0. Default: 1048576.

    Examples:

    1048576
    
    16777216
    
  • primary_artifact_name (string, required): Name of the artifact from which the information for outgoing change events is derived.

    Examples:

    "embedded_public"
    
  • primary_dataset_name (string, required): Name of the resource class corresponding to the embedded_dataset slot.

    Examples:

    "EmbeddedDataset"
    
  • resource_change_event_topic (string, required): Name of the topic used for events informing other services about resource changes, i.e. deletion or insertion.

    Examples:

    "searchable_resources"
    
  • resource_deletion_event_type (string, required): Type used for events indicating the deletion of a previously existing resource.

    Examples:

    "searchable_resource_deleted"
    
  • resource_upsertion_type (string, required): Type used for events indicating the upsert of a resource.

    Examples:

    "searchable_resource_upserted"
    
  • dataset_change_event_topic (string, required): Name of the topic announcing, among other things, the list of files included in a new dataset.

    Examples:

    "metadata_datasets"
    
  • dataset_deletion_type (string, required): Type used for events announcing the deletion of a dataset.

    Examples:

    "dataset_deleted"
    
  • dataset_upsertion_type (string, required): Type used for events announcing a new dataset overview.

    Examples:

    "dataset_created"
    
  • host (string): IP of the host. Default: "127.0.0.1".

  • port (integer): Port to expose the server on the specified host. Default: 8080.

  • auto_reload (boolean): A development feature. Set to True to automatically reload the server upon code changes. Default: false.

  • workers (integer): Number of worker processes to run. Default: 1.

  • api_root_path (string): Root path at which the API is reachable. This is relative to the specified host and port. Default: "".

  • openapi_url (string): Path to get the openapi specification in JSON format. This is relative to the specified host and port. Default: "/openapi.json".

  • docs_url (string): Path to host the swagger documentation. This is relative to the specified host and port. Default: "/docs".

  • cors_allowed_origins: A list of origins that should be permitted to make cross-origin requests. By default, cross-origin requests are not allowed. You can use ['*'] to allow any origin. Default: null.

    • Any of

      • array

        • Items (string)
      • null

    Examples:

    [
        "https://example.org",
        "https://www.example.org"
    ]
    
  • cors_allow_credentials: Indicate that cookies should be supported for cross-origin requests. Defaults to False. Also, cors_allowed_origins cannot be set to ['*'] for credentials to be allowed. The origins must be explicitly specified. Default: null.

    • Any of

      • boolean

      • null

    Examples:

    true

    false
    
  • cors_allowed_methods: A list of HTTP methods that should be allowed for cross-origin requests. Defaults to ['GET']. You can use ['*'] to allow all standard methods. Default: null.

    • Any of

      • array

        • Items (string)
      • null

    Examples:

    [
        "*"
    ]
    
  • cors_allowed_headers: A list of HTTP request headers that should be supported for cross-origin requests. Defaults to []. You can use ['*'] to allow all headers. The Accept, Accept-Language, Content-Language and Content-Type headers are always allowed for CORS requests. Default: null.

    • Any of

      • array

        • Items (string)
      • null

    Examples:

    []
    
  • artifact_infos (array, required): Information for artifacts to be queryable via the Artifacts REST API.

  • loader_token_hashes (array, required): Hashes of tokens used to authenticate for loading artifacts.

    • Items (string)

Definitions

  • AnchorPoint (object): A model for describing an anchor point for the specified target class.

    • target_class (string, required): The name of the class to be targeted.

    • identifier_slot (string, required): The name of the slot in the target class that is used as identifier.

    • root_slot (string, required): The name of the slot in the root class used to link to the target class.

  • ArtifactInfo (object): Model to describe general information on an artifact. Please note, it does not contain actual artifact instances derived from specific metadata.

    • name (string, required): The name of the artifact.

    • description (string, required): A description of the artifact.

    • resource_classes (object, required): A dictionary of resource classes for this artifact. The keys are the names of the classes. The values are the corresponding class models. Can contain additional properties.

  • ArtifactResourceClass (object): Model to describe a resource class of an artifact.

    • name (string, required): The name of the metadata class.

    • description: A description of the metadata class. Default: null.

      • Any of

        • string

        • null

    • anchor_point: The anchor point for this metadata class. Refer to #/$defs/AnchorPoint.

    • json_schema (object, required): The JSON schema for this metadata class.
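
For orientation, an artifact_infos entry combining these models might look as follows. This is a purely illustrative sketch in Python; all names and values are placeholders rather than defaults shipped with metldata:

# Hypothetical artifact_infos entry illustrating how ArtifactInfo,
# ArtifactResourceClass, and AnchorPoint nest. All values are placeholders.
example_artifact_info = {
    "name": "embedded_public",
    "description": "Publicly visible metadata with embedded references.",
    "resource_classes": {
        "EmbeddedDataset": {
            "name": "EmbeddedDataset",
            "description": "A dataset with all referenced resources embedded.",
            "anchor_point": {
                "target_class": "EmbeddedDataset",
                "identifier_slot": "accession",
                "root_slot": "embedded_datasets",
            },
            "json_schema": {
                "type": "object",
                "properties": {"accession": {"type": "string"}},
                "required": ["accession"],
            },
        }
    },
}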

Usage:

A template YAML for configuring the service can be found at ./example-config.yaml. Please adapt it, rename it to .metldata.yaml, and place it into one of the following locations:

  • in the current working directory where you execute the service (on Unix: ./.metldata.yaml)
  • in your home directory (on Unix: ~/.metldata.yaml)

The config yaml will be automatically parsed by the service.

Important: If you are using containers, the locations refer to paths within the container.
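
For illustration, the following Python snippet writes a minimal .metldata.yaml containing only the required parameters. The values are placeholders taken from the examples above (not recommendations), and it assumes PyYAML is installed; the complete template remains ./example-config.yaml:

# Illustrative only: write a minimal .metldata.yaml covering the required
# parameters listed above. Values are placeholders, not recommendations.
import yaml  # assumes PyYAML is available

minimal_config = {
    "service_instance_id": "instance-001",
    "db_connection_str": "mongodb://localhost:27017",
    "db_name": "my-database",
    "kafka_servers": ["localhost:9092"],
    "primary_artifact_name": "embedded_public",
    "primary_dataset_name": "EmbeddedDataset",
    "resource_change_event_topic": "searchable_resources",
    "resource_deletion_event_type": "searchable_resource_deleted",
    "resource_upsertion_type": "searchable_resource_upserted",
    "dataset_change_event_topic": "metadata_datasets",
    "dataset_deletion_type": "dataset_deleted",
    "dataset_upsertion_type": "dataset_created",
    "artifact_infos": [],       # placeholder: fill with ArtifactInfo objects
    "loader_token_hashes": [],  # placeholder: fill with real token hashes
}

with open(".metldata.yaml", "w", encoding="utf-8") as config_file:
    yaml.safe_dump(minimal_config, config_file)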

All parameters mentioned in the ./example-config.yaml can also be set using environment variables or file secrets.

For naming the environment variables, just prefix the parameter name with metldata_, e.g. for the host set an environment variable named metldata_host (you may use either upper or lower case; by convention, environment variables are defined in upper case).

To use file secrets, please refer to the corresponding section of the pydantic documentation.

Architecture and Design:

The framework uses a combination of ETL, CQRS, and event sourcing. Currently, it is designed to run mostly as a CLI application for managing metadata on the local file system; later, however, it will be translated into a microservice-based architecture.

One Write and Multiple Read Representations

Instead of having just a single copy of metadata in a database that supports all CRUD actions needed by all the different user groups, we propose to follow the CQRS pattern by having one representation that is optimized for write operations and multiple use case-specific representations for querying metadata. Thereby, the write-specific representation is the source of truth and fuels all read-specific representations through an ETL process. In the following, the read-specific representations are also referred to as artifacts.

This setup with one write and multiple read representations has the following advantages:

  • Different subsets of the entire metadata catalog can be prepared with the needs and the permissions of different user audiences in mind.
  • It allows for independent scalability of read and write operations.
  • The metadata can be packaged in multiple different formats required and optimized for different technologies and use cases, such as indexed searching with ElasticSearch vs. REST or GraphQL queries supported by MongoDB.
  • Complex write-optimized representations that are inconvenient for querying, such as event histories, can be used as the source of truth.
  • Often used metadata aggregations and summary statistics can be precomputed.
  • Read-specific representations may contain rich annotations that are not immediately available in the write-specific representation. For instance, the write-specific representation may only contain one-way relationships between metadata elements (e.g. a sample might define a has_experiment attribute, while an experiment defines no has_sample attribute), however, a read-specific representation may contain two way relationships (e.g. a sample defines a has_experiment attribute and an experiment defines a has_sample attribute).
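
As a toy illustration of the last point, a transformation that derives two-way relationships from one-way ones could look roughly like the following sketch. It is not metldata's actual transformation code; the slot names simply mirror the example above:

# Toy transformation adding inverse relationships to a read-specific
# representation. Slot names (has_experiment/has_sample) follow the example
# in the text; this is not metldata's actual transformation code.
def add_inverse_links(samples: dict[str, dict], experiments: dict[str, dict]) -> None:
    """Derive has_sample lists on experiments from has_experiment on samples."""
    for sample_id, sample in samples.items():
        for experiment_id in sample.get("has_experiment", []):
            experiment = experiments[experiment_id]
            experiment.setdefault("has_sample", []).append(sample_id)


samples = {"SAMPLE001": {"has_experiment": ["EXP001"]}}
experiments = {"EXP001": {}}
add_inverse_links(samples, experiments)
assert experiments["EXP001"]["has_sample"] == ["SAMPLE001"]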

However, there are also disadvantages that are linked to this setup that should be considered:

  • The write and read representations are only eventually consistent.
  • Adds more complexity than a CRUD setup.

Submission-centric Store as The Source of Truth

In the write-specific representation, metadata is packaged into submissions. Each submission is fully self-contained and linking between metadata of different submissions is not possible. A submission can have one of the following statuses:

  • pending - the construction of the submission is in progress, the submitter may still change its content.
  • in-review - the submitter declared the submission as complete and is waiting for it to be reviewed, however, both the submitter and the reviewer can set this submission back to pending to enable further changes.
  • canceled - the submission was canceled before its completion, its content was deleted.
  • completed - the submission has been reviewed and approved, the content of the submission is frozen, and accessions are generated for all relevant metadata elements.
  • deprecated-prepublication - the submission was deprecated and it cannot be published anymore, however, its content is not deleted from the system.
  • emptied-prepublication - the submission was deprecated and its content was deleted from the system, however, the accessions are not deleted.
  • published - the submission was made available to other users.
  • deprecated-postpublication - the submission was deprecated and it should not be used anymore, however, its content stays available to other users.
  • hidden-postpublication - the submission was deprecated and its content is hidden from other users but it is not deleted from the system, the accessions stay available, the submission can be set to deprecated to make its content available again.
  • emptied-postpublication - the submission was deprecated and its content was deleted from the system, however, the accessions stay available.

The following status transitions are allowed:

  • pending -> in-review
  • pending -> canceled
  • in-review -> completed
  • in-review -> canceled
  • in-review -> pending
  • completed -> published
  • completed -> deprecated-prepublication
  • completed -> emptied-prepublication
  • deprecated-prepublication -> emptied-prepublication
  • published -> deprecated-postpublication
  • published -> hidden-postpublication
  • published -> emptied-postpublication
  • deprecated-postpublication -> hidden-postpublication
  • deprecated-postpublication -> emptied-postpublication
  • hidden-postpublication -> deprecated-postpublication
  • hidden-postpublication -> emptied-postpublication
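
For illustration, the allowed transitions could be encoded as a simple lookup table with a guard function. This is only a sketch of the rules listed above, not metldata's actual state machine:

# Sketch of the allowed status transitions listed above, with a helper that
# rejects anything else. Illustrative only, not metldata's implementation.
ALLOWED_TRANSITIONS: dict[str, set[str]] = {
    "pending": {"in-review", "canceled"},
    "in-review": {"completed", "canceled", "pending"},
    "completed": {"published", "deprecated-prepublication", "emptied-prepublication"},
    "deprecated-prepublication": {"emptied-prepublication"},
    "published": {
        "deprecated-postpublication",
        "hidden-postpublication",
        "emptied-postpublication",
    },
    "deprecated-postpublication": {"hidden-postpublication", "emptied-postpublication"},
    "hidden-postpublication": {"deprecated-postpublication", "emptied-postpublication"},
}


def change_status(current: str, new: str) -> str:
    """Return the new status if the transition is allowed, else raise."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Transition {current} -> {new} is not allowed.")
    return new


change_status("pending", "in-review")  # ok
# change_status("published", "pending")  # would raise ValueError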

A deprecated submission may or may not be succeeded by a new submission, and the new submission may reuse (part of) the metadata from the deprecated one. The reused metadata, including the already existing accessions, is copied over to the new submission so that the contents of the deprecated and the new submission can be handled independently, for instance, when the deprecated submission is emptied.

Event Sourcing to Generate Artifacts

To implement the ETL processes that generate read-specific artifacts from the write-specific representation explained above, we propose an event-sourcing mechanism.

The creation and each status change of a given submission (and the accompanying changes to the submission's content) are translated into events. The events are cumulative and idempotent, so consuming only the latest event for a given submission yields its latest state, and a replay of the events leads to the same result. Thus, the event history only needs to keep the latest event for each submission, as implemented by the compacted topics offered by Apache Kafka.
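
To make the compaction argument concrete: because each event carries the full, current state of its submission, keeping only the latest event per submission is sufficient. The following minimal sketch assumes a hypothetical event shape and is not metldata's actual payload format:

# Minimal sketch of why compaction suffices: each event is cumulative, so the
# latest event per submission fully determines its state.
def compact(events: list[dict]) -> dict[str, dict]:
    """Reduce an event stream to the latest state per submission ID."""
    latest: dict[str, dict] = {}
    for event in events:  # events are ordered oldest to newest
        latest[event["submission_id"]] = event["content"]
    return latest


events = [
    {"submission_id": "SUB1", "content": {"status": "pending"}},
    {"submission_id": "SUB1", "content": {"status": "in-review"}},
]
assert compact(events)["SUB1"]["status"] == "in-review"
assert compact(events) == compact(events + events[-1:])  # replay is idempotent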

Moreover, since submissions are self-contained and do not depend on the content of other submissions, events of different submissions can be processed independently.

Multiple transformations (as in the ETL pattern) are applied to these so-called source events to generate altered metadata representations that are in turn published as events. These derived events can be again subjected to further transformations.

Finally, the derived events are subject to load operations (as in the ETL pattern) that aggregate the events and bring them into a queryable format (an artifact) that is accessible to users through an API.
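
As a rough sketch of such a load operation, derived events could be aggregated into a structure queryable by accession. The event and resource shapes below are assumptions; the real loaders make artifacts accessible through the Artifacts REST API mentioned in the configuration section:

# Illustrative load step: index resources from derived events by accession so
# they can be queried. Event and resource shapes are assumptions.
def load_artifact(derived_events: list[dict]) -> dict[str, dict]:
    """Build a queryable accession -> resource mapping from derived events."""
    artifact: dict[str, dict] = {}
    for event in derived_events:
        for resource in event["resources"]:
            artifact[resource["accession"]] = resource
    return artifact


artifact = load_artifact(
    [{"resources": [{"accession": "ACC001", "title": "Example dataset"}]}]
)
assert "ACC001" in artifact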

Metadata Modeling and Model Updates

Metadata requirements are modeled using LinkML. The metadata model should take the whole metadata lifecycle into account so that it can be used to validate metadata before and after submission as well as for all derived artifacts.

Updates to the metadata model are classified into minor and major ones. For minor updates, existing submissions are automatically migrated. A submission always stores its metadata together with the metadata model used. The migration is realized through scripts that migrate metadata from an old model version to a newer one; multiple migration scripts may be combined to obtain a metadata representation that complies with the newest version. Such a migration can be implemented as a transformation applied to the source events, as explained above.
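
A sketch of how such chained migration scripts could look (the version numbers and field changes are invented purely for illustration):

# Sketch of chaining migration scripts: each function migrates metadata from
# one model version to the next; composing them brings old submissions up to
# the newest version. Versions and field renames here are made up.
def migrate_1_0_to_1_1(metadata: dict) -> dict:
    metadata = dict(metadata)
    metadata["sample_name"] = metadata.pop("name", None)  # hypothetical rename
    return metadata


def migrate_1_1_to_1_2(metadata: dict) -> dict:
    metadata = dict(metadata)
    metadata.setdefault("description", "")  # hypothetical new required slot
    return metadata


MIGRATIONS = [("1.0", migrate_1_0_to_1_1), ("1.1", migrate_1_1_to_1_2)]


def migrate(metadata: dict, from_version: str) -> dict:
    """Apply all migrations starting at the submission's stored model version."""
    apply = False
    for version, step in MIGRATIONS:
        if version == from_version:
            apply = True
        if apply:
            metadata = step(metadata)
    return metadata


migrate({"name": "sample A"}, from_version="1.0")
# -> {"sample_name": "sample A", "description": ""}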

Development

For setting up the development environment, we rely on the devcontainer feature of VS Code in combination with Docker Compose.

To use it, you have to have Docker Compose as well as VS Code with its "Remote - Containers" extension (ms-vscode-remote.remote-containers) installed. Then open this repository in VS Code and run the command Remote-Containers: Reopen in Container from the VS Code "Command Palette".

This will give you a full-fledged, pre-configured development environment including:

  • infrastructural dependencies of the service (databases, etc.)
  • all relevant VS Code extensions pre-installed
  • pre-configured linting and auto-formatting
  • a pre-configured debugger
  • automatic license-header insertion

Moreover, inside the devcontainer, a convenience command dev_install is available. It installs the service with all development dependencies and sets up pre-commit.

The installation is performed automatically when you build the devcontainer. However, if you update dependencies in the ./pyproject.toml or the ./requirements-dev.txt, please run it again.

License

This repository is free to use and modify according to the Apache 2.0 License.

README Generation

This README file is auto-generated, please see readme_generation.md for details.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

metldata-2.1.2.tar.gz (92.7 kB view details)

Uploaded Source

Built Distribution

metldata-2.1.2-py3-none-any.whl (166.2 kB view details)

Uploaded Python 3

File details

Details for the file metldata-2.1.2.tar.gz.

File metadata

  • Download URL: metldata-2.1.2.tar.gz
  • Upload date:
  • Size: 92.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for metldata-2.1.2.tar.gz
Algorithm Hash digest
SHA256 d6dafb38415b86db0c2733ec4750b11888d5edb1c58781f598d6e28e05394343
MD5 d0c3ab63ce7d3d4d17e4b610fe40b225
BLAKE2b-256 d906c8e9c98fb95548e2bf787e63a3f1c2ca042db73b17e51ba87a3b30099647


File details

Details for the file metldata-2.1.2-py3-none-any.whl.

File metadata

  • Download URL: metldata-2.1.2-py3-none-any.whl
  • Upload date:
  • Size: 166.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for metldata-2.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 cf531149cd24c78408442f8cd0df9c5c876211cf27ef0b7ed6a7457d80f73d77
MD5 a2bd5469f30e0f29743d896bb8961433
BLAKE2b-256 84c75742adb022f5d93762ed425696b904fdff9da9e8500a94637732932b4145

