
Project description

Databricks Crawler

Tooling badges: Copier, python, uv, Ruff, mypy (type-checked), black (code style), pre-commit

Owner: Lena Cabrera

Crawls a Databricks workspace, extracts structured metadata from catalogs, schemas, and tables, and serializes it into the CLOE metadata format for downstream processing and integration.
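The crawl described above follows the Unity Catalog hierarchy: catalogs contain schemas, which contain tables. As a minimal sketch of that walk, the code below uses a fake in-memory workspace client (the real crawler presumably talks to the Databricks API); the client methods and the output shape are illustrative assumptions, since the CLOE metadata format itself is not documented on this page.

```python
import json
from dataclasses import dataclass


@dataclass
class FakeWorkspace:
    """Hypothetical stand-in for a Databricks workspace client.

    Maps catalog name -> schema name -> list of table names.
    All method names here are illustrative, not the real SDK API.
    """

    data: dict

    def catalogs(self):
        return list(self.data)

    def schemas(self, catalog):
        return list(self.data[catalog])

    def tables(self, catalog, schema):
        return list(self.data[catalog][schema])


def crawl(ws):
    """Walk catalogs -> schemas -> tables and emit a nested metadata dict.

    The output shape is an assumption for illustration; the actual
    CLOE serialization is defined by the cloe_dbx_crawler package.
    """
    return {
        "catalogs": [
            {
                "name": cat,
                "schemas": [
                    {
                        "name": sch,
                        "tables": [{"name": t} for t in ws.tables(cat, sch)],
                    }
                    for sch in ws.schemas(cat)
                ],
            }
            for cat in ws.catalogs()
        ]
    }


ws = FakeWorkspace({"main": {"sales": ["orders", "customers"]}})
metadata = crawl(ws)
print(json.dumps(metadata, indent=2))
```

The nested-dict output can then be serialized to JSON (as shown) or mapped onto whatever downstream format the integration expects.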

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

cloe_dbx_crawler-0.2.0-py3-none-any.whl (8.4 kB)

Uploaded Python 3
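As a sketch of the naming scheme referenced above, a wheel filename encodes five fields per PEP 427: `{distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl`. Splitting this wheel's filename makes the fields explicit:

```python
# PEP 427 wheel filename fields, split out from this release's wheel.
filename = "cloe_dbx_crawler-0.2.0-py3-none-any.whl"
dist, version, py_tag, abi_tag, plat_tag = filename.removesuffix(".whl").split("-")
print(dist, version, py_tag, abi_tag, plat_tag)
```

Here `py3-none-any` means the wheel is pure Python 3, has no ABI dependency, and runs on any platform.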

File details

Details for the file cloe_dbx_crawler-0.2.0-py3-none-any.whl.

File metadata

File hashes

Hashes for cloe_dbx_crawler-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 0c2b707e3f2a598e07f0211a26c8343b001ff679139dd5b36ee3af55748ce943
MD5 513fe9639172be9f4162e5f6c14b5345
BLAKE2b-256 26e0b30f112ceeaaa73404b4f64efde39b8f1928f6598cb8a91a997b28f702e5

See the PyPI documentation for more details on using hashes.
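The published digests above let you verify a downloaded wheel before installing it. A minimal check using only the standard library (the helper name is illustrative; the expected digest is the SHA256 value from the table above):

```python
import hashlib
from pathlib import Path

# Published SHA256 for cloe_dbx_crawler-0.2.0-py3-none-any.whl (from the table above).
EXPECTED_SHA256 = "0c2b707e3f2a598e07f0211a26c8343b001ff679139dd5b36ee3af55748ce943"


def verify_sha256(path: Path, expected: str) -> bool:
    """Return True if the file's SHA256 digest matches the expected hex string."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected.lower()


# Usage, after downloading the wheel into the current directory:
# verify_sha256(Path("cloe_dbx_crawler-0.2.0-py3-none-any.whl"), EXPECTED_SHA256)
```

Note that pip can also enforce hashes directly via a requirements file with `--require-hashes`.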
