
dvc-databricks

A DVC remote storage plugin that enables data versioning on Databricks Unity Catalog Volumes.

Store large data files on Databricks Volumes (backed by S3 or ADLS), keep only lightweight .dvc pointer files in your git repository, and use standard DVC commands — no custom code required.

dvc push   # uploads data to Databricks Volume via Databricks SDK
dvc pull   # downloads data from Databricks Volume

Why this plugin?

Databricks Unity Catalog Volumes cannot be accessed like a plain S3 bucket — all I/O must go through the Databricks Files API. This plugin bridges DVC and the Databricks SDK so you can version and share datasets stored on Volumes without ever leaving the standard DVC workflow.
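
Under the hood that means calling the SDK rather than an S3 client. As a point of reference, a raw Files API upload looks roughly like this (the volume path below is a placeholder):

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # resolves credentials from ~/.databrickscfg or env vars
# Upload a local file to a Unity Catalog Volume path.
with open("data/dataset.csv", "rb") as f:
    w.files.upload("/Volumes/ml_catalog/datasets/storage/dataset.csv", f, overwrite=True)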


Requirements

  • Python >= 3.11
  • DVC >= 3.0
  • Databricks CLI configured with a profile in ~/.databrickscfg
  • Access to a Databricks Unity Catalog Volume

Installation

pip install dvc-databricks

Once installed, the dbvol:// remote protocol is automatically available to DVC in every process — no imports or additional configuration needed.


Setup

1. Initialize DVC in your repository (if not already done)

dvc init
git add .dvc
git commit -m "initialize DVC"

2. Add the Databricks Volume as a DVC remote

dvc remote add -d myremote \
    dbvol:///Volumes/<catalog>/<schema>/<volume>/<path>

Example:

dvc remote add -d myremote \
    dbvol:///Volumes/ml_catalog/datasets/storage/dvc_cache
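
After this command, .dvc/config should contain something like the following (using the example path above):

[core]
    remote = myremote
['remote "myremote"']
    url = dbvol:///Volumes/ml_catalog/datasets/storage/dvc_cache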

3. Set your Databricks profile

export DATABRICKS_CONFIG_PROFILE=<your-profile-name>

Note: DVC remotes do not support arbitrary config keys, so the Databricks profile must be provided via this environment variable — it cannot be stored in .dvc/config. Add the export to your ~/.zshrc or ~/.bashrc to make it permanent.
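
For reference, a profile in ~/.databrickscfg looks like this (host and token values are placeholders):

[my-profile]
host  = https://dbc-a1b2c3d4-e5f6.cloud.databricks.com
token = dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXX

With this profile you would set DATABRICKS_CONFIG_PROFILE=my-profile.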


Usage

Track a data file

dvc add data/dataset.csv

This creates data/dataset.csv.dvc — a small pointer file that goes into git. DVC also adds the actual data file to .gitignore automatically, so only the pointer is ever committed.
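
The pointer file is plain YAML; for this file it would look roughly like the following (hash and size are illustrative):

outs:
- md5: d8e8fca2dc0f896fd7cb4cb0031ba249
  size: 1048576
  hash: md5
  path: dataset.csv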

Push data to the Volume

dvc push

Uploads the file to your Databricks Volume via the Databricks SDK.

Commit the pointer to git

git add data/dataset.csv.dvc .gitignore
git commit -m "track dataset v1 with DVC"
git push

Pull data in another environment

git clone <your-repo>
pip install dvc-databricks
export DATABRICKS_CONFIG_PROFILE=<your-profile-name>
dvc pull

CLI — dvc-databricks add

The dvc-databricks add command recursively finds files under a directory and tracks each one with DVC, creating one .dvc pointer file per file. The full folder structure is preserved in git, which allows granular pulls by file or subfolder — unlike DVC's built-in dvc add <dir>, which creates a single .dvc file for the whole directory.

Syntax

dvc-databricks add <path> [--include EXT ...] [--exclude EXT ...]

Arguments

  • path: root directory to scan recursively (required).
  • --include EXT ...: whitelist; only track files with these extensions. Accepts multiple values.
  • --exclude EXT ...: blacklist; always skip files with these extensions. Accepts multiple values and takes precedence over --include.

Extensions can be written with or without a leading dot (.csv and csv are equivalent) and are matched case-insensitively.

Filter logic

  • --include is a whitelist: only files whose extension is in the list are tracked.
  • --exclude is a blacklist: files whose extension is in the list are always skipped.
  • When both are provided, --exclude takes precedence over --include.
  • When neither is provided, all files are tracked.
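
In Python terms, the per-file decision could be sketched like this (a hedged sketch; the real implementation may differ, for example in how dotfiles such as .DS_Store are matched):

def _norm(ext: str) -> str:
    """Normalize an extension: lowercase, with a leading dot."""
    ext = ext.lower()
    return ext if ext.startswith(".") else "." + ext

def should_track(filename: str, include: list[str], exclude: list[str]) -> bool:
    # Treat everything after the last dot as the extension, so that names
    # like ".DS_Store" match --exclude .DS_Store as in the examples below.
    if "." in filename:
        ext = "." + filename.rsplit(".", 1)[-1].lower()
    else:
        ext = ""  # no extension at all
    if ext in {_norm(e) for e in exclude}:
        return False  # the blacklist always wins over the whitelist
    if include:
        return ext in {_norm(e) for e in include}
    return True  # no filters: track every file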

Examples

# Track only CSV and JSON files
dvc-databricks add /path/to/dataset --include .csv .json

# Track all files except macOS artifacts and temp files
dvc-databricks add /path/to/dataset --exclude .DS_Store .tmp .log

# Only CSVs, but skip .DS_Store even if --include .csv is set
dvc-databricks add /path/to/dataset --include .csv --exclude .DS_Store

# Track all files with no filters
dvc-databricks add /path/to/dataset

After running

git add .
git commit -m "track dataset file by file"
dvc push

  • One .dvc pointer file is created next to each tracked data file.
  • Each directory containing tracked files gets a .gitignore that excludes the raw data files from git.
  • dvc push uploads all tracked files to the configured Databricks Volume.
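
For example, a directory where dataset.csv and labels.json were tracked would end up with a .gitignore like this (illustrative):

/dataset.csv
/labels.json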

How it works

Your git repo                   Databricks Volume (S3 / ADLS)
──────────────────              ───────────────────────────────────
data/dataset.csv.dvc  ──────►  /Volumes/catalog/schema/vol/
.dvc/config                     └── files/md5/
                                    ├── ab/cdef1234...   ← actual data
                                    └── 9f/123abc...     ← actual data

dvc add (or dvc-databricks add) hashes the file and stores it in the local DVC cache (.dvc/cache). A .dvc pointer file containing the MD5 hash is created next to your data file.

dvc push uploads from the local cache to the Volume using the Databricks Files API (WorkspaceClient.files.upload). Files are stored content-addressed: <volume_path>/files/md5/<hash[:2]>/<hash[2:]>.
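
As a sketch, the remote object path for a given file can be derived like this (volume_root stands for the path portion of your configured remote URL):

import hashlib

def remote_object_path(volume_root: str, local_file: str) -> str:
    """Content-addressed layout: <volume_root>/files/md5/<hash[:2]>/<hash[2:]>."""
    with open(local_file, "rb") as f:
        md5 = hashlib.md5(f.read()).hexdigest()
    return f"{volume_root}/files/md5/{md5[:2]}/{md5[2:]}"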

dvc pull downloads from the Volume into the local cache, then restores the file to its original path.
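
The download side is the mirror image; a raw Files API call looks roughly like this (the object path and hash are illustrative):

from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
resp = w.files.download(
    "/Volumes/ml_catalog/datasets/storage/dvc_cache/files/md5/ab/cdef1234"
)
with open(".dvc/cache/files/md5/ab/cdef1234", "wb") as f:
    f.write(resp.contents.read())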

Only .dvc pointer files are ever committed to git — the data stays on the Volume.


Architecture

The plugin follows the same pattern as official DVC plugins:

  • DatabricksVolumesFileSystem (base: dvc_objects.FileSystem): the DVC-facing layer; handles config, the checksum strategy, and the dependency check.
  • _DatabricksVolumesFS (base: fsspec.AbstractFileSystem): the I/O layer; makes all Databricks SDK calls.
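
A hedged skeleton of that two-layer pattern (class bodies elided; the actual attribute and method sets differ):

from fsspec import AbstractFileSystem
from dvc_objects.fs.base import FileSystem

class _DatabricksVolumesFS(AbstractFileSystem):
    """I/O layer: maps fsspec operations onto Databricks SDK Files API calls."""
    protocol = "dbvol"

class DatabricksVolumesFileSystem(FileSystem):
    """DVC-facing layer: protocol name, config parsing, checksum strategy."""
    protocol = "dbvol"
    PARAM_CHECKSUM = "md5"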

A .pth file installed into site-packages ensures the plugin is loaded at Python startup in every process (including DVC CLI subprocesses), without requiring any manual imports.
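
Concretely, site.py executes any .pth line that starts with "import" at interpreter startup; a sketch of how registration might look (file and module names here are assumptions, not the package's actual ones):

# dvc_databricks_init.pth: a single line such as
#   import dvc_databricks_register

# dvc_databricks_register.py (hypothetical registration module):
import fsspec
from dvc_databricks.fs import _DatabricksVolumesFS  # hypothetical import path

# Make fsspec (and thus DVC) resolve the dbvol:// scheme to this class.
fsspec.register_implementation("dbvol", _DatabricksVolumesFS, clobber=True)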


Environment variables

  • DATABRICKS_CONFIG_PROFILE: Databricks CLI profile name from ~/.databrickscfg. Falls back to the default profile if not set.
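
The Databricks SDK reads this variable itself, so the plugin does not need to forward it; the two forms below are equivalent (the profile name is a placeholder):

import os
from databricks.sdk import WorkspaceClient

w1 = WorkspaceClient()  # picks up DATABRICKS_CONFIG_PROFILE automatically
w2 = WorkspaceClient(profile=os.environ["DATABRICKS_CONFIG_PROFILE"])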

License

MIT © Óscar Reyes
