Databricks Session Util
A simple utility that builds Spark and MLflow session objects from an .env file
Setup
Quick Install
python -m pip install databricks_session
Build from source
Clone the repository
git clone https://github.com/Broomva/databricks_session.git
Install the package
cd databricks_session && make install
Build manually
After cloning, create a virtual environment
conda create -n databricks_session python=3.10
conda activate databricks_session
Install the requirements
pip install -r requirements.txt
Run the python installation
python setup.py install
Usage
The utility requires a .env file in the working directory:
touch .env
It should have a schema like this:
databricks_experiment_name=''
databricks_experiment_id=''
databricks_host=''
databricks_token=''
databricks_username=''
databricks_password=''
databricks_cluster_id=''
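If you want to sanity-check the file before running the utility, simple KEY=VALUE pairs like the ones above can be parsed with the standard library alone. This is a minimal sketch and not part of the package; in practice the package (or a library such as python-dotenv) presumably handles this loading itself:

```python
import os

def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file into a dict."""
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Drop optional surrounding quotes around the value
            config[key.strip()] = value.strip().strip("'\"")
    return config
```

Running `load_env()` against the schema above yields a dict keyed by the `databricks_*` names, which makes it easy to assert that no required value was left empty.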
from databricks_session import DatabricksSparkSession, DatabricksMLFlowSession

# Create a Spark session
spark = DatabricksSparkSession().get_session()

# Connect to the MLflow artifact server
mlflow_session = DatabricksMLFlowSession().get_session()
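Under the hood, each session class presumably reads the variables above from the environment and builds its client from them. The factory below is a hypothetical sketch of that pattern, not the package's actual implementation; the required-key names mirror the .env schema, and the real classes may differ:

```python
import os

class EnvSessionFactory:
    """Hypothetical sketch: gather Databricks settings from the
    environment and fail fast when a required key is missing."""

    REQUIRED = ("databricks_host", "databricks_token", "databricks_cluster_id")

    def __init__(self, environ=None):
        # Accept an explicit mapping so the factory is easy to test
        self.environ = environ if environ is not None else os.environ

    def get_config(self) -> dict:
        missing = [k for k in self.REQUIRED if not self.environ.get(k)]
        if missing:
            raise ValueError(f"Missing required settings: {', '.join(missing)}")
        return {k: self.environ[k] for k in self.REQUIRED}
```

Failing fast on missing configuration keeps connection errors close to their cause, rather than surfacing later as an opaque authentication failure inside Spark or MLflow.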