
A simple utility to get Spark and MLflow session objects from an .env file


Databricks Session Util

A simple utility for Spark and MLflow session objects

Setup

Quick Install

python -m pip install databricks_session

Build from source

Clone the repository

git clone https://github.com/Broomva/databricks_session.git

Install the package

cd databricks_session && make install

Build manually

After cloning, create a virtual environment

conda create -n databricks_session python=3.10
conda activate databricks_session

Install the requirements

pip install -r requirements.txt

Run the python installation

python setup.py install

Usage

Usage requires a .env file created in the working directory:

touch .env

It should follow this schema:

databricks_experiment_name=''
databricks_experiment_id=''
databricks_host=''
databricks_token=''
databricks_username=''
databricks_password=''
databricks_cluster_id=''
With the .env file in place, import the session helpers and create the sessions:

from databricks_session import DatabricksSparkSession, DatabricksMLFlowSession

# Create a Databricks Spark session
spark = DatabricksSparkSession().get_session()

# Connect to the MLflow artifact server
mlflow_session = DatabricksMLFlowSession().get_session()
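Under the hood, connecting MLflow to Databricks conventionally relies on the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables. The sketch below shows that mapping from the .env keys above; `export_databricks_env` is a hypothetical function for illustration, not part of this package's API.

```python
import os

def export_databricks_env(settings: dict) -> None:
    """Map .env keys to the environment variables the Databricks
    MLflow backend conventionally reads (hypothetical helper)."""
    os.environ["DATABRICKS_HOST"] = settings["databricks_host"]
    os.environ["DATABRICKS_TOKEN"] = settings["databricks_token"]
```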

Project details

Source distribution: databricks_session-0.0.1.tar.gz (12.9 kB)

Built distribution: databricks_session-0.0.1-py2.py3-none-any.whl (4.0 kB)
