
A CLI to work with DataHub metadata

Project description

Introduction to Metadata Ingestion

Find Integration Source

Integration Options

DataHub supports both push-based and pull-based metadata integration.

Push-based integrations allow you to emit metadata directly from your data systems when metadata changes, while pull-based integrations allow you to "crawl" or "ingest" metadata from the data systems by connecting to them and extracting metadata in a batch or incremental-batch manner. Supporting both mechanisms means that you can integrate with all your systems in the most flexible way possible.

Examples of push-based integrations include Airflow, Spark, Great Expectations and Protobuf Schemas. This allows you to get low-latency metadata integration from the "active" agents in your data ecosystem. Examples of pull-based integrations include BigQuery, Snowflake, Looker, Tableau and many others.

This document describes the pull-based metadata ingestion system that is built into DataHub for easy integration with a wide variety of sources in your data stack.

Getting Started

Prerequisites

Before running any metadata ingestion job, you should make sure that the DataHub backend services are all running. You can run ingestion either via the UI or via the CLI. You can reference the CLI usage guide as you go through this page.

Core Concepts

Sources

Please see our Integrations page to browse our ingestion sources and filter on their features.

Data systems that we are extracting metadata from are referred to as Sources. The Sources tab on the left in the sidebar shows you all the sources that are available for you to ingest metadata from. For example, we have sources for BigQuery, Looker, Tableau and many others.

Metadata Ingestion Source Status

We apply a Support Status to each Metadata Source to help you understand the integration reliability at a glance.

Certified: Certified Sources are well-tested & widely-adopted by the DataHub Community. We expect the integration to be stable with few user-facing issues.

Incubating: Incubating Sources are ready for DataHub Community adoption but have not been tested for a wide variety of edge cases. We eagerly solicit feedback from the Community to strengthen the connector; minor version changes may arise in future releases.

Testing: Testing Sources are available for experimentation by DataHub Community members, but may change without notice.

Sinks

Sinks are destinations for metadata. When configuring ingestion for DataHub, you're likely to be sending the metadata to DataHub over either the REST (datahub-rest) or the Kafka (datahub-kafka) sink. In some cases, the file sink is also helpful for storing a persistent offline copy of the metadata during debugging.

The default sink that most of the ingestion systems and guides assume is the datahub-rest sink, but you should be able to adapt all of them for the other sinks as well!
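For illustration, here is a sketch of what explicit sink sections can look like in a recipe; the server address and token below are placeholders for your own deployment, and omitting the sink entirely falls back to the default datahub-rest sink described below.

# Sketch: an explicit REST sink (server and token are placeholders)
sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"
    token: ${DATAHUB_GMS_TOKEN}

# Alternatively, a Kafka sink:
# sink:
#   type: datahub-kafka
#   config:
#     connection:
#       bootstrap: "localhost:9092"
#       schema_registry_url: "http://localhost:8081"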

Recipes

A recipe is the main configuration file that puts it all together. It tells our ingestion scripts where to pull data from (source) and where to put it (sink).

Tip: Name your recipe with a .dhub.yaml extension, like myrecipe.dhub.yaml, to use VS Code or IntelliJ as a recipe editor with autocomplete and syntax validation. Make sure the YAML plugin is installed for your editor.

Since acryl-datahub version >=0.8.33.2, the default sink is assumed to be a DataHub REST endpoint:

  • Hosted at "http://localhost:8080" or the environment variable ${DATAHUB_GMS_URL} if present
  • With an empty auth token or the environment variable ${DATAHUB_GMS_TOKEN} if present.

Here's a simple recipe that pulls metadata from MSSQL (source) and puts it into the default sink (datahub-rest).

# The simplest recipe that pulls metadata from MSSQL and puts it into DataHub
# using the Rest API.
source:
  type: mssql
  config:
    username: sa
    password: ${MSSQL_PASSWORD}
    database: DemoData
# sink section omitted as we want to use the default datahub-rest sink

Running this recipe is as simple as:

datahub ingest -c recipe.dhub.yaml

Or, if you want to override the default endpoints, you can provide the environment variables as part of the command, as shown below:

DATAHUB_GMS_URL="https://my-datahub-server:8080" DATAHUB_GMS_TOKEN="my-datahub-token" datahub ingest -c recipe.dhub.yaml

A number of recipes are included in the examples/recipes directory. For full info and context on each source and sink, see the pages described in the table of plugins.

Note that one recipe file can only have one source and one sink. If you want multiple sources, you will need multiple recipe files.

Handling sensitive information in recipes

We automatically expand environment variables in the config (e.g. ${MSSQL_PASSWORD}), similar to variable substitution in GNU bash or in docker-compose files. For details, see https://docs.docker.com/compose/compose-file/compose-file-v2/#variable-substitution. This environment variable substitution should be used to mask sensitive information in recipe files. As long as you can get environment variables securely to the ingestion process, there is no need to store sensitive information in recipes.
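For example, assuming your shell can obtain the secret securely (from a secrets manager, a CI variable, or similar), you might provide it just before running ingestion; the variable name here matches the recipe above:

# Supply the password via the environment instead of hard-coding it in the recipe
export MSSQL_PASSWORD='my-secret-password'
datahub ingest -c recipe.dhub.yaml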

Basic Usage of CLI for ingestion

pip install 'acryl-datahub[datahub-rest]'  # install the required plugin
datahub ingest -c ./examples/recipes/mssql_to_datahub.dhub.yml

The --dry-run option of the ingest command performs all of the ingestion steps, except writing to the sink. This is useful to validate that the ingestion recipe is producing the desired metadata events before ingesting them into DataHub.

# Dry run
datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml --dry-run
# Short-form
datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml -n

The --preview option of the ingest command performs all of the ingestion steps, but limits the processing to only the first 10 workunits produced by the source. This option helps with quick end-to-end smoke testing of the ingestion recipe.

# Preview
datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml --preview
# Preview with dry-run
datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml -n --preview

By default, --preview creates 10 workunits. If you wish to produce more workunits, you can use the --preview-workunits option:

# Preview 20 workunits without sending anything to sink
datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yml -n --preview --preview-workunits=20

Reporting

By default, the CLI sends an ingestion report to DataHub, which allows you to see the result of all CLI-based ingestion in the UI. This can be turned off with the --no-default-report flag.

# Running ingestion with reporting to DataHub turned off
datahub ingest -c ./examples/recipes/example_to_datahub_rest.dhub.yaml --no-default-report

The reports include the recipe that was used for ingestion. This can be turned off by adding an additional section to the ingestion recipe.

source:
  # source configs

sink:
  # sink configs

# Add configuration for the datahub reporter
reporting:
  - type: datahub
    config:
      report_recipe: false

Deploying and scheduling ingestion to the UI

The deploy subcommand of the ingest command tree allows users to upload their recipes and schedule them in the server.

datahub ingest deploy -n <user friendly name for ingestion> -c recipe.yaml

By default, no schedule is set unless explicitly configured with the --schedule parameter. The timezone is inferred from the system time and can be overridden with the --time-zone flag.

datahub ingest deploy -n test --schedule "0 * * * *" --time-zone "Europe/London" -c recipe.yaml

Transformations

If you'd like to modify data before it reaches the ingestion sinks – for instance, adding additional owners or tags – you can use a transformer to write your own module and integrate it with DataHub. Transformers require extending the recipe with a new section to describe the transformers that you want to run.

For example, a pipeline that ingests metadata from MSSQL and applies a default "important" tag to all datasets is described below:

# A recipe to ingest metadata from MSSQL and apply default tags to all tables
source:
  type: mssql
  config:
    username: sa
    password: ${MSSQL_PASSWORD}
    database: DemoData

transformers: # an array of transformers applied sequentially
  - type: simple_add_dataset_tags
    config:
      tag_urns:
        - "urn:li:tag:Important"
# default sink, no config needed

Check out the transformers guide to learn more about how you can create really flexible pipelines for processing metadata using Transformers!

Using as a library (SDK)

In some cases, you might want to construct Metadata events directly and use programmatic ways to emit that metadata to DataHub. In this case, take a look at the Python emitter and the Java emitter libraries which can be called from your own code.
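As a minimal sketch (assuming the acryl-datahub package with the datahub-rest plugin is installed and a GMS endpoint is reachable at http://localhost:8080), emitting a single metadata aspect from Python with the REST emitter looks roughly like this; the dataset name is purely illustrative:

# Minimal sketch: emit one aspect for a (hypothetical) MSSQL dataset via REST
from datahub.emitter.mce_builder import make_dataset_urn
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.metadata.schema_classes import DatasetPropertiesClass

# Point the emitter at your DataHub GMS endpoint (token is optional)
emitter = DatahubRestEmitter(gms_server="http://localhost:8080")

# Build a change proposal that attaches a properties aspect to the dataset
mcp = MetadataChangeProposalWrapper(
    entityUrn=make_dataset_urn(platform="mssql", name="DemoData.dbo.Orders", env="PROD"),
    aspect=DatasetPropertiesClass(description="Orders table, emitted programmatically"),
)
emitter.emit(mcp)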

Programmatic Pipeline

In some cases, you might want to configure and run a pipeline entirely from within your custom Python script. Here is an example of how to do it.
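As a rough sketch of that approach (the credentials and server address below are placeholders), a recipe-equivalent dictionary can be handed to the ingestion Pipeline class and run in-process:

# Rough sketch: configure and run an ingestion pipeline from Python
import os

from datahub.ingestion.run.pipeline import Pipeline

# The dictionary mirrors a recipe file: a source, a sink, and optional sections
pipeline = Pipeline.create(
    {
        "source": {
            "type": "mssql",
            "config": {
                "username": "sa",
                "password": os.environ.get("MSSQL_PASSWORD", ""),
                "database": "DemoData",
            },
        },
        "sink": {
            "type": "datahub-rest",
            "config": {"server": "http://localhost:8080"},
        },
    }
)

pipeline.run()
pipeline.raise_from_status()  # surface any failures from the run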

Developing

See the guides on developing, adding a source and using transformers.

Compatibility

The DataHub server uses a three-digit versioning scheme, while the CLI uses a four-digit scheme. For example, if you're using DataHub server version 0.10.0, you should use CLI version 0.10.0.x, where x is a patch version. We do this because CLI releases happen at a much higher frequency than server releases, usually every few days versus twice a month.
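To see which CLI version you have installed, the datahub version command prints the client version (output details may vary across releases):

# Print the installed CLI version so you can match it against your server release
datahub version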

For ingestion sources, any breaking changes will be highlighted in the release notes. When fields are deprecated or otherwise changed, we will try to maintain backwards compatibility for two server releases, which is about 4-6 weeks. The CLI will also print warnings whenever deprecated options are used.



Download files

Download the file for your platform.

Source Distribution

  • cdpdev-datahub-0.10.5a0.tar.gz (1.0 MB, source)

Built Distribution

  • cdpdev_datahub-0.10.5a0-py3-none-any.whl (1.2 MB, Python 3 wheel)

File details

Details for the file cdpdev-datahub-0.10.5a0.tar.gz.

File metadata

  • Download URL: cdpdev-datahub-0.10.5a0.tar.gz
  • Upload date:
  • Size: 1.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.17

File hashes

Hashes for cdpdev-datahub-0.10.5a0.tar.gz

  • SHA256: 706394cdbb4ce8bf0fdbe0407b70a5de928da3b6ad28d1da01fe183e9343c782
  • MD5: ac09e8559cd366a053111c7418059c55
  • BLAKE2b-256: d3492c52f8061960aa4c5eb800c1f569d489e6999f0ae82a491ec4da9e6dbedf


File details

Details for the file cdpdev_datahub-0.10.5a0-py3-none-any.whl.

File metadata

File hashes

Hashes for cdpdev_datahub-0.10.5a0-py3-none-any.whl

  • SHA256: 66ea09111936b88c99ce1310b8def1ceb89a9fc364cf28d9718b482084bf5dae
  • MD5: 7612fbe885b3322cced84231588e89eb
  • BLAKE2b-256: 37c3ff3b4d5e7b234f12ccbee3e3673cf8245465ac8ad50a23f09c77d8c26489

