Census models for Cloud ML

Project Description


Models
######

census-dnn
##########

*Wide and deep classifier for census income data using TensorFlow estimators*

Operations
==========

cloudml-batch-predict
^^^^^^^^^^^^^^^^^^^^^

*Submit a prediction job to Cloud ML*

Flags
-----

**bucket**
*Google Cloud Storage bucket used to store run data (required)*

**deployed-model**
*Run ID associated with the deployed model (default is the latest
cloudml-resource run)*

**instance-type**
*Instance type (if type cannot be inferred from instances file name)*

**instances**
*Instances to use for prediction (default is 'prediction-samples.json')*

**job-name**
*Job name to submit (default is generated using the prediction run ID)*

**output-format**
*Format of the prediction output (see
https://cloud.google.com/sdk/gcloud/reference/ for supported values)*

**region**
*Region prediction job is submitted to (default is 'us-central1')*
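
For example, assuming this package is installed with Guild AI, a batch
prediction job might be submitted with the guild run command as sketched
below ('my-ml-bucket' is a placeholder bucket name)::

    guild run census-dnn:cloudml-batch-predict bucket=my-ml-bucket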

cloudml-deploy
^^^^^^^^^^^^^^

*Deploy a model to Cloud ML*

Flags
-----

**bucket**
*Google Cloud Storage bucket used to store model binaries (required if
'model-binaries' is not specified)*

**config**
*Path to a Cloud ML job configuration file*

**model**
*Name of the deployed Cloud ML model (default is generated from the model
associated with the run)*

**model-binaries**
*Google Cloud Storage path to store model binaries (required if 'bucket'
is not specified)*

**runtime-version**
*TensorFlow runtime version (default is '1.4')*

**trained-model**
*Run ID associated with the trained model (default is latest cloudml-train
run)*

**version**
*Name of the deployed Cloud ML version (default is generated using the
current timestamp)*
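
For example, deploying the most recent cloudml-train run might look like
this sketch ('my-ml-bucket' is a placeholder bucket name)::

    guild run census-dnn:cloudml-deploy bucket=my-ml-bucket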

cloudml-hptune
^^^^^^^^^^^^^^

*Optimize model hyperparameters in Cloud ML*



A config file defining a hyperparameter spec is required for this
operation. See https://goo.gl/aZQYCe for spec details.

You may override values in the config using flags. See the supported config
flags below for more information.
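
As an illustration only (not shipped with this package), a minimal
hptuning_config.yaml following the Cloud ML hyperparameter spec might look
like the sketch below; the metric tag and parameter range are assumptions
that must match the trainer::

    trainingInput:
      hyperparameters:
        goal: MAXIMIZE
        hyperparameterMetricTag: accuracy
        maxTrials: 4
        maxParallelTrials: 2
        params:
          - parameterName: first-layer-size
            type: INTEGER
            minValue: 50
            maxValue: 500
            scaleType: UNIT_LINEAR_SCALE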

Flags
-----

**bucket**
*Google Cloud Storage bucket used to store run data (required)*

**config**
*Hyperparameter tuning configuration (default is 'hptuning_config.yaml')*

**embedding-size**
*Number of embedding dimensions for categorical columns (default is 8)*

**epochs**
*Number of training data epochs. If both train-steps and epochs are
specified, the training job runs for train-steps or epochs, whichever
occurs first. If unspecified, the job runs for train-steps.*

**eval-batch-size**
*Batch size for evaluation steps (default is 40)*

**eval-steps**
*Number of steps to run evaluation for at each checkpoint (default is 100)*

**export-format**
*The input format of the exported SavedModel binary. Values may be JSON,
CSV, or EXAMPLE. (default is 'JSON')*

**first-layer-size**
*Number of nodes in the first layer of the DNN (default is 100)*

**job-name**
*Job name to submit (default is generated using the training run ID)*

**layers**
*Number of layers in the DNN (default is 4)*

**max-parallel-trials**
*Maximum number of parallel trials for hyperparameter tuning

Overrides maxParallelTrials in config.*

**max-trials**
*Maximum number of trials for hyperparameter tuning

Overrides maxTrials in config.*

**module-name**
*Training module (default is 'trainer/task')*

**region**
*Region the training job is submitted to (default is 'us-central1')*

**resume-from**
*Resume hyperparameter tuning using the results of a previous
cloudml-hptune operation

Use the ID of the run you want to resume from.*

**runtime-version**
*TensorFlow runtime version (default is '1.4')*

**scale-factor**
*How quickly the size of the layers in the DNN decays (default is 0.7)*

**scale-tier**
*Cloud ML resources allocated to a training job

Use STANDARD_1 for many workers and a few parameter servers.

Use PREMIUM_1 for a large number of workers with many parameter servers.

Use BASIC_GPU for a single worker instance with a GPU. (default is
'BASIC')*

**train-batch-size**
*Batch size for training steps (default is 40)*

**train-steps**
*Steps to run the training job for (default is 1000)*

**verbosity**
*Log level (use DEBUG for more information). Values may be DEBUG, INFO,
WARN, ERROR, or FATAL. (default is 'INFO')*
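
With a config in place, a tuning run might be started like this
('my-ml-bucket' is a placeholder; max-trials overrides maxTrials in the
config)::

    guild run census-dnn:cloudml-hptune bucket=my-ml-bucket max-trials=10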

cloudml-predict
^^^^^^^^^^^^^^^

*Send a prediction request to Cloud ML*

Flags
-----

**deployed-model**
*Run ID associated with the deployed model (default is the latest
cloudml-resource run)*

**instance-type**
*Instance type (if type cannot be inferred from instances file name)*

**instances**
*Instances to use for prediction (default is 'prediction-samples.json')*

**output-format**
*Format of the prediction output (see
https://cloud.google.com/sdk/gcloud/reference/ for supported values)*
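
For example, an online prediction against the most recently deployed model
might look like this sketch::

    guild run census-dnn:cloudml-predict instances=prediction-samples.json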

cloudml-train
^^^^^^^^^^^^^

*Train a model in Cloud ML*

Flags
-----

**bucket**
*Google Cloud Storage bucket used to store run data (required)*

**config**
*Path to a Cloud ML job configuration file*

**embedding-size**
*Number of embedding dimensions for categorical columns (default is 8)*

**epochs**
*Number of training data epochs. If both train-steps and epochs are
specified, the training job runs for train-steps or epochs, whichever
occurs first. If unspecified, the job runs for train-steps.*

**eval-batch-size**
*Batch size for evaluation steps (default is 40)*

**eval-steps**
*Number of steps to run evaluation for at each checkpoint (default is 100)*

**export-format**
*The input format of the exported SavedModel binary. Values may be JSON,
CSV, or EXAMPLE. (default is 'JSON')*

**first-layer-size**
*Number of nodes in the first layer of the DNN (default is 100)*

**job-name**
*Job name to submit (default is generated using the training run ID)*

**layers**
*Number of layers in the DNN (default is 4)*

**module-name**
*Training module (default is 'trainer/task')*

**region**
*Region the training job is submitted to (default is 'us-central1')*

**runtime-version**
*TensorFlow runtime version (default is '1.4')*

**scale-factor**
*How quickly the size of the layers in the DNN decays (default is 0.7)*

**scale-tier**
*Cloud ML resources allocated to a training job

Use STANDARD_1 for many workers and a few parameter servers.

Use PREMIUM_1 for a large number of workers with many parameter servers.

Use BASIC_GPU for a single worker instance with a GPU. (default is
'BASIC')*

**train-batch-size**
*Batch size for training steps (default is 40)*

**train-steps**
*Steps to run the training job for (default is 1000)*

**verbosity**
*Log level (use DEBUG for more information). Values may be DEBUG, INFO,
WARN, ERROR, or FATAL. (default is 'INFO')*
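
For example, a training job might be submitted as follows ('my-ml-bucket'
is a placeholder bucket name)::

    guild run census-dnn:cloudml-train bucket=my-ml-bucket train-steps=2000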

train
^^^^^

*Train the classifier locally*

Flags
-----

**embedding-size**
*Number of embedding dimensions for categorical columns (default is 8)*

**epochs**
*Number of training data epochs. If both train-steps and epochs are
specified, the training job runs for train-steps or epochs, whichever
occurs first. If unspecified, the job runs for train-steps.*

**eval-batch-size**
*Batch size for evaluation steps (default is 40)*

**eval-steps**
*Number of steps to run evaluation for at each checkpoint (default is 100)*

**export-format**
*The input format of the exported SavedModel binary. Values may be JSON,
CSV, or EXAMPLE. (default is 'JSON')*

**first-layer-size**
*Number of nodes in the first layer of the DNN (default is 100)*

**layers**
*Number of layers in the DNN (default is 4)*

**scale-factor**
*How quickly the size of the layers in the DNN decays (default is 0.7)*

**train-batch-size**
*Batch size for training steps (default is 40)*

**train-steps**
*Steps to run the training job for (default is 1000)*

**verbosity**
*Log level (use DEBUG for more information). Values may be DEBUG, INFO,
WARN, ERROR, or FATAL. (default is 'INFO')*
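
For example, a short local training run might look like this sketch::

    guild run census-dnn:train train-steps=1000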


