
Gradient Utils

Project description



Get started: Create Account · Install CLI · Tutorials · Docs

Resources: Website · Blog · Support · Contact Sales


Gradient is an end-to-end MLOps platform that enables individuals and organizations to quickly develop, train, and deploy Deep Learning models. The Gradient software stack runs on any infrastructure (e.g. AWS, GCP, or on-premises hardware) as well as on low-cost Paperspace GPUs. Leverage automatic versioning, distributed training, built-in graphs & metrics, hyperparameter search, GradientCI, 1-click Jupyter Notebooks, our Python SDK, and more.

This is an SDK for performing Machine Learning with Gradient; it can be installed in addition to gradient-cli.

Requirements

This SDK requires Python 3.5+.

To install it, run:

pip install gradient-utils

Usage

Multinode Helper Functions

Multinode GRPC Tensorflow

Set the TF_CONFIG environment variable

For multi-worker training, you need to set the TF_CONFIG environment variable for each binary running in your cluster. Set the value of TF_CONFIG to a JSON string that specifies each task within the cluster, including each task's address and role. A Kubernetes template that sets TF_CONFIG for your training tasks is provided in the tensorflow/ecosystem repo.
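
For illustration, a TF_CONFIG value for a hypothetical two-worker cluster could be set by hand like this (the host addresses are placeholders, not Paperspace values):

import json
import os

# Hypothetical two-worker cluster; the addresses are placeholders.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["10.0.0.1:5000", "10.0.0.2:5000"]},
    "task": {"type": "worker", "index": 0},
})

On Paperspace infrastructure, get_tf_config() (below) builds this value for you.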

get_tf_config()

Function to set the value of TF_CONFIG when run on machines within Paperspace infrastructure.

It can raise a ConfigError exception with a message if there is a problem with the configuration of a particular machine.

Usage example:

from gradient_utils import get_tf_config

get_tf_config()
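
Once TF_CONFIG is set, TensorFlow can pick it up when constructing a multi-worker strategy. A minimal sketch, assuming a recent TensorFlow 2.x that provides MultiWorkerMirroredStrategy:

import tensorflow as tf

from gradient_utils import get_tf_config

# Populate TF_CONFIG from the Paperspace environment, then let
# TensorFlow read it while building the distribution strategy.
get_tf_config()
strategy = tf.distribute.MultiWorkerMirroredStrategy()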

Hyperparameter Tuning

Currently, Gradient supports only Hyperopt for hyperparameter tuning.

hyper_tune()

Function to run hyperparameter tuning.

It accepts the following arguments:

  • train_model User model to tune.
  • hparam_def User definition (scope) of the search space. To set this value, refer to the hyperopt documentation.
  • algo Search algorithm. Default: tpe.suggest (from hyperopt).
  • max_evals Maximum number of function evaluations to allow before returning. Default: 25.
  • func Function to be run by hyper_tune. Default: fmin (from hyperopt). Do not change this value unless you know what you are doing!

It returns a dict with information about the tuning process.

It can raise a ConfigError exception with a message if there is no connection to MongoDB.

Note: You do not need to worry about setting your MongoDB version; it will be set within Paperspace infrastructure for hyperparameter tuning.

Usage example:

from gradient_utils import hyper_tune

# Prepare model and search scope

# minimal version
argmin1 = hyper_tune(model, scope)

# pass more arguments
argmin2 = hyper_tune(model, scope, algo=tpe.suggest, max_evals=100)
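
For a fuller picture, here is a minimal sketch of an objective and a search space built with standard hyperopt primitives; the model function and scope below are illustrative, not part of gradient-utils:

from hyperopt import STATUS_OK, hp, tpe

from gradient_utils import hyper_tune

# Illustrative objective: hyper_tune minimizes the returned loss.
def model(params):
    loss = (params["x"] - 2) ** 2
    return {"loss": loss, "status": STATUS_OK}

# Illustrative search space over a single parameter.
scope = {"x": hp.uniform("x", -10, 10)}

argmin = hyper_tune(model, scope, algo=tpe.suggest, max_evals=50)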

Utility Functions

get_mongo_conn_str()

Function to check and construct the MongoDB connection string.

It returns a connection string to MongoDB.

It can raise a ConfigError exception with a message if there is a problem with any of the values used to prepare the MongoDB connection string.

Usage example:

from gradient_utils import get_mongo_conn_str

conn_str = get_mongo_conn_str()
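
The returned string can be used with any MongoDB client, e.g. pymongo (an illustration, not a gradient-utils requirement):

from pymongo import MongoClient

from gradient_utils import get_mongo_conn_str

# Illustrative: open a client against the constructed connection string.
client = MongoClient(get_mongo_conn_str())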

data_dir()

Function to retrieve the path to the job space.

Usage example:

from gradient_utils import data_dir

job_space = data_dir()

model_dir()

Function to retrieve the path to the model space.

Usage example:

from gradient_utils import model_dir

model_path = model_dir(model_name)

export_dir()

Function to retrieve the path for model export.

Usage example:

from gradient_utils import export_dir

model_path = export_dir(model_name)
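
These path helpers compose with ordinary file I/O. A short sketch, where the model name and file names are purely illustrative:

import os

from gradient_utils import data_dir, export_dir

# Hypothetical files; only the directory helpers come from gradient-utils.
train_path = os.path.join(data_dir(), "train.csv")
artifact_path = os.path.join(export_dir("my-model"), "weights.bin")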

worker_hosts()

Function to retrieve information about the worker hosts.

Usage example:

from gradient_utils import worker_hosts

hosts = worker_hosts()

ps_hosts()

Function to retrieve information about the parameter server hosts.

Usage example:

from gradient_utils import ps_hosts

hosts = ps_hosts()

task_index()

Function to retrieve the task index.

Usage example:

from gradient_utils import task_index

index = task_index()

job_name()

Function to retrieve the job name.

Usage example:

from gradient_utils import job_name

name = job_name()
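
Together, these helpers supply the pieces of a classic TensorFlow cluster specification. A sketch under stated assumptions: TensorFlow 2.x, worker_hosts() and ps_hosts() returning comma-separated host:port strings, and task_index() returning an integer:

import tensorflow as tf

from gradient_utils import job_name, ps_hosts, task_index, worker_hosts

# Assumption: hosts come back as comma-separated "host:port" strings.
cluster = tf.train.ClusterSpec({
    "worker": worker_hosts().split(","),
    "ps": ps_hosts().split(","),
})
server = tf.distribute.Server(cluster, job_name=job_name(), task_index=task_index())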

MetricsLogger

A Prometheus wrapper for logging custom metrics.

Usage example:

from gradient_utils import MetricsLogger

# Create a logger and register a gauge, then update its value.
m_logger = MetricsLogger()
m_logger.add_gauge("some_metric_1")
m_logger["some_metric_1"].set(3)
m_logger["some_metric_1"].inc()  # increments the gauge by 1, to 4

m_logger.add_gauge("some_metric_2")
m_logger["some_metric_2"].set_to_current_time()

# Push the collected metrics so Gradient can record them.
m_logger.push_metrics()

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gradient_utils-0.1.0.tar.gz (7.8 kB)

Built Distribution

gradient_utils-0.1.0-py3-none-any.whl (8.7 kB)

File details

Details for the file gradient_utils-0.1.0.tar.gz.

File metadata

  • Download URL: gradient_utils-0.1.0.tar.gz
  • Upload date:
  • Size: 7.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.2.0 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.2

File hashes

Hashes for gradient_utils-0.1.0.tar.gz

  • SHA256: e7d73bb7c006e586eb711094a7aedfc0c61cce542c2036969ef81023a1c34859
  • MD5: 5e51d9972bc6b7af25f1873fe35c2e50
  • BLAKE2b-256: c4c619a2c6e5a9ac72ab67ddad3b4d92fb44eccc84056a20ad58be61519bca53

See more details on using hashes here.

File details

Details for the file gradient_utils-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: gradient_utils-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 8.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.23.0 setuptools/46.2.0 requests-toolbelt/0.9.1 tqdm/4.46.0 CPython/3.8.2

File hashes

Hashes for gradient_utils-0.1.0-py3-none-any.whl

  • SHA256: e0907c8dc85427825dddfbfc0427668a5b85f03094c01ce672c722069620aa66
  • MD5: 9b8d63a2d74d07e46d70b6c18c5c6f78
  • BLAKE2b-256: f45a5675e2021da5b88f219ad37d4174b78fa5dd9ad4a1f3b71177c5102fa50c

See more details on using hashes here.
