
Python SDK for programmatic access to the Zipher API


Zipher SDK

The Zipher SDK is a Python library for interacting with Zipher's APIs.

Package Installation

You can install the Zipher SDK using pip:

pip install zipher-sdk

Providing Zipher with access to your Databricks workspace

After installing the zipher-sdk package, a CLI tool is available that automatically creates all the resources Zipher needs.

Setting up credentials

You need to provide credentials that will be used to create all necessary resources and permissions for Zipher.

Credentials can be supplied in any of the following ways:

  • a .databrickscfg profile (the CLI tool accepts a profile name as an argument)
  • the ZIPHER_DATABRICKS_HOST and ZIPHER_DATABRICKS_TOKEN environment variables, or ZIPHER_DATABRICKS_CLIENT_ID and ZIPHER_DATABRICKS_CLIENT_SECRET for OAuth
  • passing credentials as arguments to the CLI tool
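
For the first option, a minimal .databrickscfg (typically at ~/.databrickscfg) might look like the sketch below; the profile name, host, and token values are placeholders:

```ini
[DEFAULT]
host  = https://my-workspace.cloud.databricks.com
token = dapi1234567890abcdef

[staging]
host  = https://staging-workspace.cloud.databricks.com
token = dapi0987654321fedcba
```

Running `zipher setup --profile staging` would then use the credentials from the `[staging]` section; with no profile argument, the `[DEFAULT]` section is used.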

CLI tool usage examples

Providing Zipher with access to a list of jobs

zipher setup --jobs-list 12345678,87654321,12344321,21436587

Providing Zipher with access to up to n jobs from the workspace

zipher setup --max-jobs 50

Providing Zipher with readonly access to a list of jobs

zipher setup --readonly --jobs-list 12345678,87654321,12344321,21436587

Full CLI tool specification

usage: zipher setup [-h] [--workspace-host WORKSPACE_HOST] [--access-token ACCESS_TOKEN] [--client-id CLIENT_ID] [--client-secret CLIENT_SECRET] [--profile PROFILE] [--verbose] [--jobs-list JOBS_LIST] [--max-jobs MAX_JOBS]
                    [--max-runs MAX_RUNS] [--days-back DAYS_BACK] [--readonly] [--pat] [--skip-approval]

options:
  -h, --help            show this help message and exit
  --workspace-host WORKSPACE_HOST
                        Databricks workspace host URL.
  --access-token ACCESS_TOKEN
                        Databricks workspace access token.
  --client-id CLIENT_ID
                        Databricks workspace OAuth client id.
  --client-secret CLIENT_SECRET
                        Databricks workspace OAuth client secret.
  --profile PROFILE     Profile name from .databrickscfg.
  --verbose             Print full error message on fail.
  --jobs-list JOBS_LIST
                        Comma-separated list of jobs ids to provide access to.
  --max-jobs MAX_JOBS   Maximum number of jobs to consider when iterating over jobs to grant permissions (default: 2000).
  --max-runs MAX_RUNS   Maximum number of runs to consider when iterating over runs to grant permissions to relative jobs (default: 2000).
  --days-back DAYS_BACK
                        How many days back to fetch relevant job runs for permission updates (default: 7).
  --readonly            Provide Zipher with only CAN_VIEW permissions on listed jobs. When not provided will default to CAN_MANAGE permissions.
  --pat                 Generate Personal Access Token for Zipher instead of default OAuth client creds.
  --skip-approval       Skip user input approval.

SDK Usage

Here are some basic examples of using the Zipher SDK to optimize your Databricks clusters with Zipher's ML-powered optimization engine:

Update Existing Configuration

You can update an existing configuration by initializing a Zipher Client and passing your existing cluster configuration to the update_existing_conf function. Here's how:

from zipher import Client

client = Client(customer_id="my_customer_id")  # assuming the zipher API key is stored in ZIPHER_API_KEY environment variable

# Your existing cluster config:
config_payload = {
    "new_cluster": {
        "autoscale": {
            "min_workers": 1,
            "max_workers": 30
        },
        "cluster_name": "my-cluster",
        "spark_version": "10.4.x-scala2.12",
        "spark_conf": {
            "spark.driver.maxResultSize": "4g"
        },
        "aws_attributes": {
            "first_on_demand": 0,
            "availability": "SPOT",
            "zone_id": "auto",
            "spot_bid_price_percent": 100,
            "ebs_volume_count": 0
        },
        "node_type_id": "rd-fleet.2xlarge",
        "driver_node_type_id": "rd-fleet.xlarge",
        "spark_env_vars": {},
        "enable_elastic_disk": False
    }
}

# Update configuration
optimized_cluster = client.update_existing_conf(job_id="my-job-id", existing_conf=config_payload)

# Continue by sending the optimized configuration to Databricks via the Databricks Python SDK, an Airflow operator, etc.
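
Before applying the returned configuration, it can be useful to see what the optimizer actually changed. The helper below is a hypothetical sketch, not part of the Zipher SDK: it diffs the original payload against the optimized one, assuming both follow the `new_cluster` layout shown above.

```python
# Hypothetical helper (not part of the Zipher SDK): show which cluster
# settings the optimizer changed before you push the new config.
def diff_cluster_conf(original: dict, optimized: dict) -> dict:
    """Return {field: (old_value, new_value)} for every changed field."""
    old = original.get("new_cluster", {})
    new = optimized.get("new_cluster", {})
    return {
        key: (old.get(key), new.get(key))
        for key in sorted(set(old) | set(new))
        if old.get(key) != new.get(key)
    }

# Example: suppose the optimizer narrowed autoscaling and changed the node type.
before = {"new_cluster": {"autoscale": {"min_workers": 1, "max_workers": 30},
                          "node_type_id": "rd-fleet.2xlarge"}}
after = {"new_cluster": {"autoscale": {"min_workers": 2, "max_workers": 12},
                         "node_type_id": "rd-fleet.xlarge"}}
print(diff_cluster_conf(before, after))
```

Once you are satisfied with the changes, submit the optimized settings to Databricks with your usual tooling, as noted in the comment above.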

