MangledDLT - Local Databricks Development Bridge

MangledDLT enables developers to write and test Databricks code locally by intercepting Spark operations and fetching data from remote Unity Catalog environments. Write your PySpark code once and run it unchanged, whether locally or on Databricks.

Features

  • Transparent Spark Interception: Automatically intercepts spark.read.table() and spark.readStream.table() calls
  • Unity Catalog Integration: Fetches data directly from remote Unity Catalog tables
  • Smart Caching: LRU cache with TTL for improved development performance
  • Multiple Auth Methods: Supports PAT, OAuth, and Service Principal authentication
  • Zero Code Changes: Same code works locally and on Databricks
  • Connection Pooling: Efficient connection management for better performance
  • Error Recovery: Automatic retry with exponential backoff
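The "retry with exponential backoff" behaviour listed above can be sketched in plain Python. This is an illustrative model, not MangledDLT's internals; `max_attempts` and `base_delay` are hypothetical names:

```python
import time
import random

def retry_with_backoff(fn, max_attempts=4, base_delay=0.5):
    """Call fn(), retrying on failure with exponentially growing delays plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # delay doubles each attempt (base, 2*base, 4*base, ...) plus small jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Example: an operation that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```

The jitter term spreads out retries from concurrent clients so they don't hammer the warehouse in lockstep.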

Installation

pip install mangleddlt

Or with all dependencies:

pip install "mangleddlt[all]"

Quick Start

from pyspark.sql import SparkSession
from mangledlt import MangledDLT

# Create Spark session as usual
spark = SparkSession.builder \
    .appName("LocalDev") \
    .getOrCreate()

# Enable MangledDLT
mdlt = MangledDLT()
mdlt.enable()

# Now you can read from Unity Catalog!
df = spark.read.table("main.default.customers")
df.show()

# When done, disable interception
mdlt.disable()

Configuration

Using Environment Variables

export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
export DATABRICKS_TOKEN="dapi..."
export DATABRICKS_WAREHOUSE_ID="your-warehouse-id"
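Resolution from these variables can be sketched as follows; `load_config_from_env` is an illustrative helper, not MangledDLT's actual API (the library's own loader is `Config.from_env()`):

```python
import os

def load_config_from_env():
    """Build a config dict from the DATABRICKS_* environment variables,
    failing fast if any required setting is missing."""
    config = {
        "host": os.environ.get("DATABRICKS_HOST"),
        "token": os.environ.get("DATABRICKS_TOKEN"),
        "warehouse_id": os.environ.get("DATABRICKS_WAREHOUSE_ID"),
    }
    missing = [name for name, value in config.items() if not value]
    if missing:
        raise RuntimeError(f"Missing settings: {', '.join(missing)}")
    return config

# Populate the variables for demonstration (normally done in your shell)
os.environ["DATABRICKS_HOST"] = "https://example.cloud.databricks.com"
os.environ["DATABRICKS_TOKEN"] = "dapi-example"
os.environ["DATABRICKS_WAREHOUSE_ID"] = "wh-123"
print(load_config_from_env()["host"])
```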

Using Databricks CLI Config

# Configure Databricks CLI
databricks configure --token

# MangledDLT will automatically use your configuration
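`databricks configure --token` writes an INI-style file at `~/.databrickscfg`. A minimal sketch of reading it with the standard library (how MangledDLT parses it internally is an assumption here):

```python
import configparser

# Example contents of ~/.databrickscfg as written by `databricks configure --token`
sample = """\
[DEFAULT]
host = https://your-workspace.cloud.databricks.com
token = dapi-example
"""

parser = configparser.ConfigParser()
parser.read_string(sample)  # in practice: parser.read(os.path.expanduser("~/.databrickscfg"))
profile = parser["DEFAULT"]
print(profile["host"])
```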

Using Custom Config

from mangledlt import MangledDLT

config = {
    "host": "https://workspace.cloud.databricks.com",
    "token": "your-token",
    "warehouse_id": "warehouse-id",
    "cache_enabled": True,
    "cache_ttl": 600  # 10 minutes
}

mdlt = MangledDLT(config=config)
mdlt.enable()

Development vs Production

from pyspark.sql import SparkSession
from mangledlt import MangledDLT

spark = SparkSession.builder.appName("MyApp").getOrCreate()

# Auto-detect environment
if not spark.conf.get("spark.databricks.service.clusterId", None):
    # Running locally - enable MangledDLT
    mdlt = MangledDLT()
    mdlt.enable()
    print("Running locally with MangledDLT")
else:
    print("Running on Databricks")

# Your code works the same in both environments
customers = spark.read.table("catalog.schema.customers")
orders = spark.read.table("catalog.schema.orders")
result = customers.join(orders, "customer_id")
result.show()

Caching

MangledDLT includes intelligent caching to speed up iterative development:

mdlt = MangledDLT(config={
    "cache_enabled": True,
    "cache_ttl": 1800,  # 30 minutes
    "cache_max_size": 100  # Max 100 cached queries
})
mdlt.enable()

# First read - fetches from Unity Catalog
df1 = spark.read.table("catalog.schema.large_table")  # Takes 5 seconds

# Subsequent reads - served from cache
df2 = spark.read.table("catalog.schema.large_table")  # Takes <100ms

# Check cache statistics
stats = mdlt.get_cache_stats()
print(f"Cache hits: {stats['hits']}")
print(f"Hit rate: {stats['hit_rate']}%")

# Clear cache when needed
mdlt.clear_cache()
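The combination of LRU eviction (`cache_max_size`) and TTL expiry (`cache_ttl`) can be modelled with the standard library. This is an illustrative sketch of the caching policy, not MangledDLT's implementation:

```python
import time
from collections import OrderedDict

class LRUTTLCache:
    """Evict least-recently-used entries beyond max_size; expire entries after ttl seconds."""
    def __init__(self, max_size=100, ttl=1800):
        self.max_size, self.ttl = max_size, ttl
        self._data = OrderedDict()  # key -> (inserted_at, value)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        inserted_at, value = entry
        if time.monotonic() - inserted_at > self.ttl:
            del self._data[key]  # expired: treat as a miss
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return value

    def put(self, key, value):
        self._data[key] = (time.monotonic(), value)
        self._data.move_to_end(key)
        while len(self._data) > self.max_size:
            self._data.popitem(last=False)  # drop least recently used

cache = LRUTTLCache(max_size=2, ttl=60)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a" so it becomes most recently used
cache.put("c", 3)      # over capacity: evicts "b", the least recently used
print(cache.get("b"))  # None
```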

Error Handling

from mangledlt import MangledDLT
from mangledlt.exceptions import AuthError, TableNotFoundError

try:
    mdlt = MangledDLT()
    mdlt.enable()

    df = spark.read.table("catalog.schema.table")
    df.show()

except AuthError as e:
    print(f"Authentication failed: {e}")
    print("Please check your Databricks credentials")

except TableNotFoundError as e:
    print(f"Table not found: {e}")
    print("Please verify the table exists and you have access")

Multiple Workspaces

from mangledlt import MangledDLT
from mangledlt.config import Config

# Connect to development workspace
dev_config = Config.from_file(profile="DEV")
dev_mdlt = MangledDLT(config=dev_config)
dev_mdlt.enable()

# Read from dev
dev_data = spark.read.table("dev_catalog.schema.table")

# Switch to production
dev_mdlt.disable()
prod_config = Config.from_file(profile="PROD")
prod_mdlt = MangledDLT(config=prod_config)
prod_mdlt.enable()

# Read from production
prod_data = spark.read.table("prod_catalog.schema.table")

API Reference

MangledDLT

Main class for enabling local Databricks development.

  • __init__(config=None): Initialize with optional configuration
  • enable(): Enable Spark operation interception
  • disable(): Disable interception
  • get_status(): Get connection status
  • clear_cache(): Clear query cache
  • get_cache_stats(): Get cache statistics

Config

Configuration management class.

  • from_file(path, profile): Load from Databricks CLI config
  • from_env(): Load from environment variables
  • validate(): Validate configuration

Exceptions

  • ConfigError: Configuration issues
  • AuthError: Authentication failures
  • ConnectionError: Connection problems
  • TableNotFoundError: Table doesn't exist
  • PermissionError: Insufficient permissions
  • InvalidReferenceError: Invalid table reference format
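For example, `InvalidReferenceError` covers references that are not in three-part `catalog.schema.table` form. A minimal validator sketch, using a stand-in exception class rather than the real `mangledlt.exceptions` one:

```python
class InvalidReferenceError(ValueError):
    """Stand-in for mangledlt.exceptions.InvalidReferenceError."""

def parse_table_reference(ref):
    """Split a Unity Catalog reference into (catalog, schema, table) parts."""
    parts = ref.split(".")
    if len(parts) != 3 or not all(parts):
        raise InvalidReferenceError(f"Expected catalog.schema.table, got {ref!r}")
    return tuple(parts)

print(parse_table_reference("main.default.customers"))  # ('main', 'default', 'customers')
```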

Requirements

  • Python 3.9+
  • PySpark 3.4+ (user must install separately)
  • databricks-sql-connector 2.9+

Development

# Clone the repository
git clone https://github.com/mangledlt/mangledlt.git
cd mangledlt

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest tests/

License

MIT License - see LICENSE file for details.
