
DVC remote plugin for Databricks Unity Catalog Volumes

Project description

dvc-databricks

A DVC remote storage plugin that enables data versioning on Databricks Unity Catalog Volumes.

Store large data files on Databricks Volumes (backed by S3 or ADLS), keep only lightweight .dvc pointer files in your git repository, and use standard DVC commands — no custom code required.

dvc push   # uploads data to Databricks Volume via Databricks SDK
dvc pull   # downloads data from Databricks Volume

Why this plugin?

Databricks Unity Catalog Volumes cannot be accessed like a plain S3 bucket — all I/O must go through the Databricks Files API. This plugin bridges DVC and the Databricks SDK so you can version and share datasets stored on Volumes without ever leaving the standard DVC workflow.


Requirements

  • Python >= 3.10
  • DVC >= 3.0
  • Databricks CLI configured with a profile in ~/.databrickscfg (example below)
  • Access to a Databricks Unity Catalog Volume
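
A minimal ~/.databrickscfg profile looks like this (profile name, host, and token are placeholders; any authentication method the Databricks SDK supports will work):

[my-profile]
host  = https://<your-workspace>.cloud.databricks.com
token = <personal-access-token>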

Installation

pip install dvc-databricks

Once installed, the dbvol:// remote protocol is automatically available to DVC in every process — no imports or additional configuration needed.


Setup

1. Initialize DVC in your repository (if not already done)

dvc init
git add .dvc
git commit -m "initialize DVC"

2. Add the Databricks Volume as a DVC remote

dvc remote add -d myremote \
    dbvol:///Volumes/<catalog>/<schema>/<volume>/<path>

Example:

dvc remote add -d myremote \
    dbvol:///Volumes/ml_catalog/datasets/storage/dvc_cache
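
After this command, .dvc/config contains an entry along these lines (written by DVC itself; exact layout can vary slightly between versions):

[core]
    remote = myremote
['remote "myremote"']
    url = dbvol:///Volumes/ml_catalog/datasets/storage/dvc_cache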

3. Set your Databricks profile

export DATABRICKS_CONFIG_PROFILE=<your-profile-name>

Note: DVC remotes do not support arbitrary config keys, so the Databricks profile must be provided via this environment variable — it cannot be stored in .dvc/config. Add the export to your ~/.zshrc or ~/.bashrc to make it permanent.
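
To confirm the profile resolves before involving DVC, you can instantiate a WorkspaceClient directly; the SDK reads DATABRICKS_CONFIG_PROFILE on its own. A quick sanity check (optional, not required by the plugin):

from databricks.sdk import WorkspaceClient

# Resolves DATABRICKS_CONFIG_PROFILE (or the default profile) from ~/.databrickscfg
w = WorkspaceClient()
print(w.current_user.me().user_name)  # fails fast if authentication is misconfigured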


Usage

Track a data file

dvc add data/dataset.csv

This creates data/dataset.csv.dvc — a small pointer file that goes into git. The actual data file is added to .gitignore automatically, so the raw data never enters git.
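
The pointer file itself is a few lines of YAML, roughly like this (hash and size here are made up for illustration):

outs:
- md5: 22a1a2931c8370d3aeedd7183606fd7f
  size: 14445097
  hash: md5
  path: dataset.csv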

Push data to the Volume

dvc push

Uploads the file to your Databricks Volume via the Databricks SDK.

Commit the pointer to git

git add data/dataset.csv.dvc .gitignore
git commit -m "track dataset v1 with DVC"
git push

Pull data in another environment

git clone <your-repo>
pip install dvc-databricks
export DATABRICKS_CONFIG_PROFILE=<your-profile-name>
dvc pull

CLI — dvc-databricks add

The dvc-databricks add command recursively finds files under a directory and tracks each one with DVC, creating one .dvc pointer file per file. The full folder structure is preserved in git, which allows granular pulls by file or subfolder — unlike DVC's built-in dvc add <dir>, which creates a single .dvc file for the whole directory.

Syntax

dvc-databricks add <path> [--include EXT ...] [--exclude EXT ...]

Arguments

Argument             Description
path                 Root directory to scan recursively (required).
--include EXT ...    Whitelist — only track files with these extensions. Accepts multiple values.
--exclude EXT ...    Blacklist — always skip files with these extensions. Accepts multiple values. Takes precedence over --include.

Extensions can be written with or without a leading dot (.csv and csv are equivalent) and are matched case-insensitively.

Filter logic

  • --include is a whitelist: only files whose extension is in the list are tracked.
  • --exclude is a blacklist: files whose extension is in the list are always skipped.
  • When both are provided, --exclude takes precedence over --include.
  • When neither is provided, all files are tracked (see the sketch below).
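
The rules above map onto a short walk-and-filter loop. A hypothetical sketch of the internals (the real implementation may differ; the helper name is ours):

import os
import subprocess

def add_per_file(root, include=(), exclude=()):
    include = {e.lstrip(".").lower() for e in include}
    exclude = {e.lstrip(".").lower() for e in exclude}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Treat dotfiles such as .DS_Store as their own "extension"
            ext = (os.path.splitext(name)[1] or name).lstrip(".").lower()
            if ext in exclude:                  # blacklist always wins
                continue
            if include and ext not in include:  # whitelist, when given
                continue
            # One `dvc add` per file -> one .dvc pointer file per file
            subprocess.run(["dvc", "add", os.path.join(dirpath, name)], check=True)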

Examples

# Track only CSV and JSON files
dvc-databricks add /path/to/dataset --include .csv .json

# Track all files except macOS artifacts and temp files
dvc-databricks add /path/to/dataset --exclude .DS_Store .tmp .log

# Only CSVs, but skip .DS_Store even if --include .csv is set
dvc-databricks add /path/to/dataset --include .csv --exclude .DS_Store

# Track all files with no filters
dvc-databricks add /path/to/dataset

After running

git add .
git commit -m "track dataset file by file"
dvc push

  • One .dvc pointer file is created next to each tracked data file.
  • Each directory containing tracked files gets a .gitignore that excludes the raw data files from git.
  • dvc push uploads all tracked files to the configured Databricks Volume.

How it works

Your git repo                   Databricks Volume (S3 / ADLS)
──────────────────              ───────────────────────────────────
data/dataset.csv.dvc  ──────►  /Volumes/catalog/schema/vol/
.dvc/config                     └── files/md5/
                                    ├── ab/cdef1234...   ← actual data
                                    └── 9f/123abc...     ← actual data

Both dvc add and dvc-databricks add hash the file and store it in the local DVC cache (.dvc/cache). A .dvc pointer file containing the MD5 hash is created next to your data file.

dvc push uploads from the local cache to the Volume using the Databricks Files API (WorkspaceClient.files.upload). Files are stored content-addressed: <volume_path>/files/md5/<hash[:2]>/<hash[2:]>.
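
The content-addressed layout is easy to reproduce by hand. A small sketch (the helper name is ours, not part of the plugin):

import hashlib

def volume_object_path(volume_path: str, local_file: str) -> str:
    # MD5 of the file contents, fanned out on the first two hex characters
    with open(local_file, "rb") as f:
        md5 = hashlib.md5(f.read()).hexdigest()
    return f"{volume_path}/files/md5/{md5[:2]}/{md5[2:]}"

# volume_object_path("/Volumes/ml_catalog/datasets/storage/dvc_cache", "data/dataset.csv")
# -> ".../files/md5/ab/cdef1234..." (hash shown is illustrative)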

dvc pull downloads from the Volume into the local cache, then restores the file to its original path.

Only .dvc pointer files are ever committed to git — the data stays on the Volume.


Architecture

The plugin follows the same pattern as official DVC plugins:

Class                          Base                         Role
DatabricksVolumesFileSystem    dvc_objects.FileSystem       DVC-facing layer: config, checksum strategy, dependency check
_DatabricksVolumesFS           fsspec.AbstractFileSystem    I/O layer: all Databricks SDK calls
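
In code, the split looks roughly like the skeleton below. The class names come from the table; the bodies and the exact base-class import are simplified guesses, not the plugin's actual source:

from fsspec import AbstractFileSystem

class _DatabricksVolumesFS(AbstractFileSystem):
    # I/O layer: every operation is a Databricks SDK call
    protocol = "dbvol"

    def _open(self, path, mode="rb", **kwargs):
        ...  # e.g. WorkspaceClient().files.download(path) / .upload(path, data)

class DatabricksVolumesFileSystem:  # in the plugin, subclasses dvc_objects' FileSystem
    # DVC-facing layer: parses remote config, declares the MD5 checksum
    # strategy, and checks that databricks-sdk is installed
    protocol = "dbvol"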

A .pth file installed into site-packages ensures the plugin is loaded at Python startup in every process (including DVC CLI subprocesses), without requiring any manual imports.
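
This relies on a standard CPython mechanism: at interpreter startup, the site module executes any line in a site-packages .pth file that begins with `import`. Illustratively (the file and module names here are hypothetical; fsspec.register_implementation is a real fsspec API):

# site-packages/dvc_databricks.pth -- a single line, run at interpreter startup
import dvc_databricks_register

# dvc_databricks_register.py -- roughly what the imported module would do
import fsspec
from dvc_databricks import _DatabricksVolumesFS  # hypothetical import path

fsspec.register_implementation("dbvol", _DatabricksVolumesFS, clobber=True)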


Environment variables

Variable                     Description
DATABRICKS_CONFIG_PROFILE    Databricks CLI profile name from ~/.databrickscfg. Falls back to the default profile if not set.

License

MIT © Óscar Reyes

Download files

Download the file for your platform.

Source Distribution

dvc_databricks-1.2.3.tar.gz (176.9 kB)

Built Distribution

dvc_databricks-1.2.3-py3-none-any.whl (14.3 kB)

File details

Details for the file dvc_databricks-1.2.3.tar.gz.

File metadata

  • Download URL: dvc_databricks-1.2.3.tar.gz
  • Size: 176.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for dvc_databricks-1.2.3.tar.gz

Algorithm      Hash digest
SHA256         f462e351c607bd13ba37e3514ccdc2a28fccf6e33af84c13b891a611bd2ca3a2
MD5            6469f1c323e368ec3c469091f2df4c9a
BLAKE2b-256    1fe09ef3f83fc263d8fbf5924dd72b6730de0a755aed57969b6fcd9d6654dfb9


File details

Details for the file dvc_databricks-1.2.3-py3-none-any.whl.

File metadata

  • Download URL: dvc_databricks-1.2.3-py3-none-any.whl
  • Size: 14.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.3

File hashes

Hashes for dvc_databricks-1.2.3-py3-none-any.whl

Algorithm      Hash digest
SHA256         37dca4488cc7d768499fae8b6a4ddef0c3e4eb9f5713ab93585ce77fb639bcb5
MD5            7cc19a8a039e1ccb2f6a5fec5686705b
BLAKE2b-256    a29c8e9c0e55c3fc99b9cba041ebaa386877c8a42ab55c7f4f66dde8d4d8b8be

