
gpkg.slim.models

################

*TF-Slim models (Guild AI)*

Models

######

images

======

*Generic images dataset*

Operations

^^^^^^^^^^

prepare

-------

*Prepare images for training*

Flags

`````

**images**

*Directory containing images to prepare (required)*

**random-seed**

*Seed used for train/validation split (randomly generated)*

**val-split**

*Percentage of images reserved for validation (30)*
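As a usage sketch (assuming the package is installed via `guild install
gpkg.slim.models`, and `./photos` is a hypothetical image directory), a
`prepare` run follows Guild AI's `guild run MODEL:OPERATION FLAG=VALUE`
convention:

```shell
# Split the images in ./photos into train/validation sets, reserving
# 20% for validation; a fixed seed makes the split reproducible.
guild run images:prepare images=./photos val-split=20 random-seed=42
```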

inception

=========

*TF-Slim Inception v1 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*
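For example, a sketch of evaluating a trained model (flag values here are
illustrative, not recommendations):

```shell
# Evaluate 10 batches of 50 examples against the latest checkpoint.
guild run inception:evaluate batch-size=50 eval-batches=10

# Evaluate a specific checkpoint step instead of the latest one.
guild run inception:evaluate step=5000
```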

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
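To illustrate how these flags combine (a sketch; the 4-GPU system is
hypothetical): with `auto-scale` left at 'yes' on a 4-GPU system,
`clones` becomes 4 and the effective learning rate becomes
4 x 0.0001 = 0.0004.

```shell
# Fine-tune with the default auto-scaling behavior.
guild run inception:finetune learning-rate=0.0001 batch-size=32

# Disable auto-scaling and control parallelism explicitly.
guild run inception:finetune auto-scale=no clones=2 preprocessors=4
```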

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*
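A minimal sketch of a label run (`./bird.jpg` is a hypothetical path):

```shell
# Classify a single image using the trained inception model.
guild run inception:label image=./bird.jpg
```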

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices: tflite, graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices: yes, no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices: yes, no
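Since `tflite` consumes a frozen graph, a typical sequence is to run
`export-and-freeze` first (a sketch, assuming default flag values):

```shell
# Export and freeze the latest checkpoint, then convert it to TFLite.
guild run inception:export-and-freeze
guild run inception:tflite output-format=tflite
```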

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
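A sketch of a transfer-learning run with a bounded step count (values
are illustrative):

```shell
# Train via transfer learning for 5000 steps rather than indefinitely.
guild run inception:transfer-learn train-steps=5000 learning-rate=0.001
```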

inception-resnet-v2

===================

*TF-Slim Inception ResNet v2 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices: tflite, graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices: yes, no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices: yes, no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

inception-v2

============

*TF-Slim Inception v2 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices: tflite, graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices: yes, no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices: yes, no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

inception-v3

============

*TF-Slim Inception v3 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices: tflite, graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices: yes, no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices: yes, no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are
adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is
adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data
reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
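Weight decay is commonly applied as an L2 penalty added to the training loss. A minimal sketch with the default coefficient (the weight values are illustrative):

```python
def l2_penalty(weights, weight_decay=4e-05):
    # Sum of squared weights, scaled by the decay coefficient.
    return weight_decay * sum(w * w for w in weights)

penalty = l2_penalty([0.5, -0.25, 1.0])
```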

inception-v4

============

*TF-Slim Inception v4 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

mobilenet

=========

*TF-Slim Mobilenet v1 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

mobilenet-v2-1.4

================

*TF-Slim Mobilenet v2 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

nasnet-large

============

*TF-Slim NASNet large classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

nasnet-mobile

=============

*TF-Slim NASNet mobile classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
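The `learning-rate-decay-epochs` and `learning-rate-decay-factor` flags above describe a staircase exponential schedule: the rate is multiplied by the decay factor once per decay interval, where the interval is given in epochs and converted to steps using the batch size. A minimal sketch, assuming a hypothetical dataset of 3200 training images (the function and its signature are illustrative, not Guild AI's code):

```python
def exponential_decay(initial_lr, step, decay_epochs, decay_factor,
                      samples_per_epoch, batch_size):
    """Staircase exponential decay: multiply the learning rate by
    `decay_factor` after every `decay_epochs` worth of steps."""
    decay_steps = int(samples_per_epoch * decay_epochs / batch_size)
    return initial_lr * decay_factor ** (step // decay_steps)

# Documented defaults: 0.001 initial rate, decay by 0.94 every 2.0
# epochs; 3200 images at batch size 32 gives a 200-step interval.
for step in (0, 200, 400):
    print(step, exponential_decay(0.001, step, 2.0, 0.94, 3200, 32))
```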

pnasnet-large

=============

*TF-Slim PNASNet classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
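When `learning-rate-decay-type` is 'polynomial', the `learning-rate-end` flag sets the floor the rate decays toward. A sketch in the style of TensorFlow's `tf.train.polynomial_decay`; the `decay_steps` value and the default `power` of 1.0 are assumptions for illustration:

```python
def polynomial_decay(initial_lr, end_lr, step, decay_steps, power=1.0):
    """Decay (linearly, with power=1.0) from `initial_lr` down to
    `end_lr`, holding at `end_lr` once `decay_steps` is reached."""
    step = min(step, decay_steps)
    fraction = 1 - step / decay_steps
    return (initial_lr - end_lr) * fraction ** power + end_lr

print(polynomial_decay(0.001, 0.0001, 0, 1000))     # approximately 0.001
print(polynomial_decay(0.001, 0.0001, 1000, 1000))  # floors at 0.0001
```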

pnasnet-mobile

==============

*TF-Slim PNASNet mobile classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
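The calculated default for the `preprocessors` and `readers` flags above (logical CPU count / 2 when `auto-scale` is 'yes') can be approximated as follows. The floor of 1 is an assumption for single-core machines; the documentation only states the division.

```python
import os

def default_thread_count():
    """Documented calculated default: logical CPU count / 2.
    The max(1, ...) floor is an assumption, not documented behavior."""
    return max(1, (os.cpu_count() or 1) // 2)

print(default_thread_count())
```

Using half the logical CPUs leaves headroom for the training process itself while still parallelizing input decoding.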

resnet-101

==========

*TF-Slim ResNet v1 101 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single GPU or CPU only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-

scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
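For the `evaluate` operation documented above, `batch-size` and `eval-batches` together bound how many examples are scored: every available batch by default, or at most `eval-batches` batches when the flag is set. A hypothetical sketch of that arithmetic; the function, its defaults, and the 5000-example dataset are illustrative assumptions, not Guild AI's code:

```python
def examples_evaluated(batch_size=100, eval_batches=None, available=5000):
    """Examples covered by an evaluate run: all available batches by
    default, or at most `eval_batches` batches when the flag is set."""
    total_batches = available // batch_size
    if eval_batches is not None:
        total_batches = min(total_batches, eval_batches)
    return total_batches * batch_size

print(examples_evaluated())                 # all 5000 examples
print(examples_evaluated(eval_batches=10))  # 10 batches = 1000 examples
```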

resnet-152

==========

*TF-Slim ResNet v1 152 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
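The `auto-scale` adjustments described above amount to a simple rule. A hedged sketch of that behavior (the function and flag dictionary are illustrative, not Guild's implementation):

```python
def apply_auto_scale(flags, num_gpus, auto_scale=True):
    # Sketch of the documented behavior: on multi-GPU systems with
    # auto-scale enabled, `clones` tracks the GPU count and the
    # learning rate is multiplied by it. Illustrative only.
    flags = dict(flags)
    if auto_scale and num_gpus > 1:
        flags["clones"] = num_gpus
        flags["learning-rate"] *= num_gpus
    return flags
```

For example, with 4 GPUs and the finetune default of 0.0001, the effective learning rate becomes 0.0004 spread across 4 clones; on a single-GPU or CPU-only system the flags pass through unchanged.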

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices: tflite, graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices: yes, no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices: yes, no
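The `quantized` and `quantized-inputs` flags above control whether the TFLite arrays use 8-bit quantized values instead of floats. A minimal sketch of the standard uint8 affine mapping TFLite uses for quantized tensors (the scale and zero-point values below are illustrative, per-tensor parameters):

```python
def quantize_uint8(values, scale, zero_point):
    # q = clamp(round(x / scale) + zero_point, 0, 255)
    return [min(255, max(0, round(x / scale) + zero_point)) for x in values]

def dequantize_uint8(qvalues, scale, zero_point):
    # x ~= (q - zero_point) * scale
    return [(q - zero_point) * scale for q in qvalues]
```

Values outside the representable range saturate at 0 or 255, which is why quantization is normally calibrated to the tensor's observed range.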

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
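Both `preprocessors` and `readers` above default to half the logical CPU count under `auto-scale`. A sketch of that calculation (the floor of 1 is an assumption; the package's exact rounding isn't documented here):

```python
import os

def default_worker_count():
    # auto-scale default: half the logical CPU count, with an
    # assumed floor of 1 so at least one worker always runs
    return max(1, (os.cpu_count() or 2) // 2)
```

On an 8-core system this yields 4 preprocessing threads and 4 data readers; set the flags explicitly with `auto-scale` 'no' to tune them for a specific machine.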

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-50

=========

*TF-Slim ResNet v1 50 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices: tflite, graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices: yes, no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices: yes, no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-v2-101

=============

*TF-Slim ResNet v2 101 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices: tflite, graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices: yes, no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices: yes, no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-v2-152

=============

*TF-Slim ResNet v2 152 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices: tflite, graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices: yes, no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices: yes, no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments. When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is adjusted by multiplying its specified value by the number of GPUs. Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices: exponential, fixed, polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices: adadelta, adagrad, adam, ftrl, momentum, rmsprop, sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count when `auto-scale` is 'yes'. When `auto-scale` is 'no', it can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-v2-50

============

*TF-Slim ResNet v2 50 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*
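A minimal sketch of the `auto-scale` adjustment described above, assuming `gpu_count` stands in for however many GPUs are detected (the function name and dict-based flags are illustrative, not the package's API):

```python
# Sketch of the auto-scale adjustment: on multi-GPU systems, clones is
# set to the GPU count and learning-rate is multiplied by it; single-GPU
# and CPU-only systems are left unchanged.
def auto_scale_flags(flags, gpu_count):
    adjusted = dict(flags)
    if gpu_count > 1:
        adjusted["clones"] = gpu_count
        adjusted["learning-rate"] = flags["learning-rate"] * gpu_count
    return adjusted
```

For example, with four GPUs a specified learning rate of 0.0001 becomes 0.0004 and four model clones are trained in parallel.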

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*
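The decay flags above combine into a schedule along these lines. This is a sketch following common TF-Slim usage (staircase exponential decay, power-1 polynomial decay); `examples_per_epoch` is a hypothetical dataset size used only to convert epochs to steps:

```python
# Sketch of the three learning-rate-decay-type choices. decay_steps is
# the number of training steps in learning-rate-decay-epochs worth of
# batches.
def learning_rate(step, decay_type="exponential", initial_lr=0.0001,
                  end_lr=0.0001, decay_factor=0.94, decay_epochs=2.0,
                  examples_per_epoch=3200, batch_size=32):
    decay_steps = int(decay_epochs * examples_per_epoch / batch_size)
    if decay_type == "fixed":
        # No decay: the initial rate is used throughout.
        return initial_lr
    if decay_type == "exponential":
        # Staircase decay: the rate drops by decay_factor each period.
        return initial_lr * decay_factor ** (step // decay_steps)
    if decay_type == "polynomial":
        # Linear interpolation from initial_lr down to end_lr.
        progress = min(step, decay_steps) / decay_steps
        return (initial_lr - end_lr) * (1.0 - progress) + end_lr
    raise ValueError("unknown decay type: %s" % decay_type)
```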

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
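Weight decay here is standard L2 regularization on the model weights; a sketch of the penalty it adds to the training loss (many implementations fold in an extra factor of 1/2, omitted below):

```python
# Sketch of the weight-decay term added to the training loss: the decay
# coefficient times the sum of squared weights (L2 regularization).
def l2_penalty(weights, weight_decay=4e-05):
    return weight_decay * sum(w * w for w in weights)
```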

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TFLite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

vgg-16

======

*TF-Slim VGG 16 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TFLite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

vgg-19

======

*TF-Slim VGG 19 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TFLite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

################

*TF-Slim models (Guild AI)*

Models

######

images

======

*Generic images dataset*

Operations

^^^^^^^^^^

prepare

-------

*Prepare images for training*

Flags

`````

**images**

*Directory containing images to prepare (required)*

**random-seed**

*Seed used for train/validation split (randomly generated)*

**val-split**

*Percentage of images reserved for validation (30)*

inception

=========

*TF-Slim Inception v1 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single GPU or CPU only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-

scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single GPU or CPU only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-

scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single GPU or CPU only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if `auto-

scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

inception-resnet-v2

===================

*TF-Slim Inception ResNet v2 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
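The three `learning-rate-decay-type` choices can be illustrated with a small sketch. In TF-Slim the decay step count is derived from `learning-rate-decay-epochs` and the dataset and batch sizes; here it is passed in directly, and the linear (power 1.0) polynomial is an assumption. The function is hypothetical, not part of the package:

```python
def decayed_learning_rate(initial_lr, step, decay_steps, decay_type,
                          decay_factor=0.94, end_lr=0.0001):
    """Hypothetical sketch of the decay schedules named above."""
    if decay_type == "fixed":
        # constant learning rate
        return initial_lr
    if decay_type == "exponential":
        # lr shrinks by `decay_factor` every `decay_steps` steps
        return initial_lr * decay_factor ** (step / decay_steps)
    if decay_type == "polynomial":
        # linear decay from initial_lr down to end_lr, then held
        progress = min(step, decay_steps) / decay_steps
        return (initial_lr - end_lr) * (1.0 - progress) + end_lr
    raise ValueError("unknown decay type: %s" % decay_type)
```

With the defaults above, exponential decay multiplies the rate by 0.94 each decay interval, and polynomial decay bottoms out at `learning-rate-end`.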

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*
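Classifying an image with a trained run might look like the following, using Guild's `guild run MODEL:OPERATION FLAG=VALUE` form (the image path is a placeholder):

```shell
guild run inception-resnet-v2:label image=samples/dog.jpg
```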

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*
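A hypothetical invocation generating a quantized TFLite file from a previously frozen graph (flag values shown are examples only):

```shell
guild run inception-resnet-v2:tflite quantized=yes quantized-inputs=yes
```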

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
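Taken together, the operations above can be chained into a typical workflow; each `guild run` below uses the latest run of the previous operation, and the flag values are examples only:

```shell
guild run inception-resnet-v2:transfer-learn train-steps=5000
guild run inception-resnet-v2:evaluate
guild run inception-resnet-v2:export-and-freeze
guild run inception-resnet-v2:tflite
```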

inception-v2

============

*TF-Slim Inception v2 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

inception-v3

============

*TF-Slim Inception v3 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

inception-v4

============

*TF-Slim Inception v4 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the
preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader
performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

mobilenet

=========

*TF-Slim Mobilenet v1 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted
on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the
number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if
`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train
the model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite
- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes
- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes
- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
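The three `learning-rate-decay-type` choices correspond to standard schedules. The sketch below illustrates them using the defaults listed above (factor 0.94, end rate 0.0001); it assumes TF-Slim's usual staircase-style exponential decay, and the function name is illustrative rather than taken from this package.

```python
def decayed_learning_rate(decay_type, initial_lr, step, decay_steps,
                          decay_factor=0.94, end_lr=0.0001, power=1.0):
    """Return the learning rate at `step` for the given decay type (sketch)."""
    if decay_type == "fixed":
        # constant learning rate
        return initial_lr
    if decay_type == "exponential":
        # staircase decay: multiply by decay_factor every decay_steps
        return initial_lr * decay_factor ** (step // decay_steps)
    if decay_type == "polynomial":
        # anneal from initial_lr down to end_lr over decay_steps
        t = min(step, decay_steps) / decay_steps
        return (initial_lr - end_lr) * (1.0 - t) ** power + end_lr
    raise ValueError("unknown decay type: %s" % decay_type)
```

Here `decay_steps` is `learning-rate-decay-epochs` expressed in training steps (roughly the number of training examples divided by `batch-size`, times the decay epochs).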

mobilenet-v2-1.4

================

*TF-Slim Mobilenet v2 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Fine-tune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite
- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes
- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes
- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
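How epoch-based flags such as `learning-rate-decay-epochs` translate into training steps can be sketched as simple arithmetic. This mirrors the usual TF-Slim convention and is an assumption, not taken from this package's source; the function names are illustrative.

```python
import math

def steps_per_epoch(num_examples, batch_size):
    # number of training steps needed to see every example once
    return math.ceil(num_examples / batch_size)

def decay_steps(num_examples, batch_size, decay_epochs=2.0):
    # learning-rate-decay-epochs expressed in steps
    return int(steps_per_epoch(num_examples, batch_size) * decay_epochs)
```

For example, with 3200 prepared training images and the default `batch-size` of 32, an epoch is 100 steps, so the default decay interval of 2.0 epochs is 200 steps.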

nasnet-large

============

*TF-Slim NASNet large classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Fine-tune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite
- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes
- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes
- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

nasnet-mobile

=============

*TF-Slim NASNet mobile classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Fine-tune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite
- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes
- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes
- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

pnasnet-large

=============

*TF-Slim PNASNet classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Fine-tune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones
- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which the learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential
- fixed
- polynomial

**learning-rate-end**

*Minimum learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta
- adagrad
- adam
- ftrl
- momentum
- rmsprop
- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
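The `auto-scale` behavior described above can be sketched as a small
calculation. This is an illustrative reconstruction of the documented
rules, not the package's actual code; `gpu_count` and `cpu_count` stand
in for the detected hardware:

```python
def auto_scale_flags(flags, gpu_count, cpu_count):
    """Illustrative sketch of the documented auto-scale adjustments."""
    adjusted = dict(flags)
    if gpu_count > 1:
        # `clones` is set to the number of available GPUs
        adjusted["clones"] = gpu_count
        # `learning-rate` is multiplied by the number of GPUs
        adjusted["learning-rate"] = flags["learning-rate"] * gpu_count
    # `preprocessors` and `readers` default to logical CPU count / 2
    adjusted.setdefault("preprocessors", cpu_count // 2)
    adjusted.setdefault("readers", cpu_count // 2)
    return adjusted

flags = auto_scale_flags({"learning-rate": 0.0001}, gpu_count=4, cpu_count=8)
# clones -> 4, learning-rate -> 0.0004, preprocessors -> 4, readers -> 4
```

On a single-GPU or CPU-only system (`gpu_count` of 0 or 1) the clones and
learning-rate values pass through unchanged, matching the note above.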

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TFLite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
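A typical session chains several of the operations above. The commands
below are an illustrative sketch: flag values are examples, and the image
path is hypothetical (in practice it points at an image on your system):

```shell
# Transfer-learn on a prepared images dataset, then export and freeze
# an inference graph from the latest checkpoint
guild run pnasnet-large:transfer-learn train-steps=1000
guild run pnasnet-large:export-and-freeze
# Classify a single image with the trained model
guild run pnasnet-large:label image=examples/cat.jpg
```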

pnasnet-mobile

==============

*TF-Slim PNASNet mobile classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Fine-tune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
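The three learning-rate-decay flags above interact as in TF-Slim's
standard exponential schedule: the rate is multiplied by
`learning-rate-decay-factor` once per `learning-rate-decay-epochs` worth
of training steps. A sketch of that schedule in staircase form, with a
hypothetical `examples_per_epoch` (the real value depends on your
dataset):

```python
def exponential_decay(initial_lr, step, batch_size, examples_per_epoch,
                      decay_epochs=2.0, decay_factor=0.94):
    # Number of training steps that make up one decay period
    decay_steps = int(decay_epochs * examples_per_epoch / batch_size)
    # Staircase decay: the rate drops by decay_factor each full period
    return initial_lr * decay_factor ** (step // decay_steps)

# With 3200 examples per epoch and batch size 32, one decay period
# is 200 steps, so at step 400 the rate has decayed twice
lr = exponential_decay(0.0001, step=400, batch_size=32,
                       examples_per_epoch=3200)
```

With `learning-rate-decay-type` set to 'fixed' no decay is applied, and
'polynomial' instead interpolates down toward `learning-rate-end`.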

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TFLite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
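For the mobile variant a common end goal is a TFLite artifact. A
hypothetical sketch of that path (it assumes a trained run already
exists for `export-and-freeze` to pick up):

```shell
# Freeze the latest checkpoint into an inference graph, then
# convert the frozen graph to TFLite format
guild run pnasnet-mobile:export-and-freeze
guild run pnasnet-mobile:tflite output-format=tflite
```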

resnet-101

==========

*TF-Slim ResNet v1 101 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Fine-tune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TFLite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-152

==========

*TF-Slim ResNet v1 152 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Fine-tune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TFLite output format (tflite)

Choices:

tflite

graphviz_dot

*

**quantized**

*Whether or not output arrays are quantized (no)

Choices:

yes

no

*

**quantized-inputs**

*Whether or not input arrays are quantized (no)

Choices:

yes

no

*

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the

preprocessor thread count for the system.

*

**readers**

*Number of parallel data readers (calculated)

This value is automatically set to logical CPU count / 2 if `auto-scale`

is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader

performance for the system.

*

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-50

=========

*TF-Slim ResNet v1 50 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Fine-tune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on

multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs.

`learning-rate` is adjusted by multiplying its specified value by the

number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

*

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)

This value is automatically set to the number of available GPUs if

`auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the

model in parallel on multiple GPUs.

*

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)

Choices:

exponential

fixed

polynomial

*

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)

Choices:

adadelta

adagrad

adam

ftrl

momentum

rmsprop

sgd

*

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.
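The calculated default shared by `preprocessors` and `readers` can be sketched as follows (illustrative; the function name is not part of the package, and the floor at 1 is an assumption for single-CPU systems):

```python
import os

def default_thread_count():
    # "Logical CPU count / 2" default used for the preprocessors and
    # readers flags when auto-scale is enabled.
    cpus = os.cpu_count() or 1
    return max(1, cpus // 2)
```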

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite

- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes

- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes

- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-v2-101

=============

*TF-Slim ResNet v2 101 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite

- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes

- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes

- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-v2-152

=============

*TF-Slim ResNet v2 152 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite

- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes

- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes

- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

resnet-v2-50

============

*TF-Slim ResNet v2 50 layer classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite

- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes

- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes

- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

vgg-16

======

*TF-Slim VGG 16 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default), the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs, and `learning-rate` is multiplied by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to half the logical CPU count if `auto-scale` is 'yes'.

When `auto-scale` is 'no', this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
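The `auto-scale` adjustment described in the flags above can be sketched in a few lines of Python. This is an illustrative approximation only, not Guild AI's implementation; the function name and flag dictionary are hypothetical.

```python
# Illustrative sketch of the auto-scale adjustment described above.
# NOT Guild AI's actual code; all names here are hypothetical.

def auto_scale_flags(flags, num_gpus):
    """Return a copy of `flags` adjusted for multi-GPU systems.

    On systems with more than one GPU, `clones` is set to the GPU
    count and `learning-rate` is multiplied by the GPU count.
    Single-GPU and CPU-only systems are left unchanged.
    """
    adjusted = dict(flags)
    if num_gpus > 1:
        adjusted["clones"] = num_gpus
        adjusted["learning-rate"] = flags["learning-rate"] * num_gpus
    return adjusted

print(auto_scale_flags({"clones": 1, "learning-rate": 0.0001}, 4))
```

With 4 GPUs this yields `clones=4` and a learning rate four times the specified value, matching the multiplication rule described above.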

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite

- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes

- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes

- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
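With the default `exponential` decay type, the flags above define a staircase schedule: the rate is multiplied by `learning-rate-decay-factor` (0.94) once per `learning-rate-decay-epochs` (2.0) epochs' worth of steps. A minimal sketch of that schedule follows; the staircase form matches TF-Slim's convention, but the names and the `examples_per_epoch` value are hypothetical, not Guild AI flags.

```python
# Sketch of the exponential decay schedule implied by the flags above.
# Staircase form as used by TF-Slim; names here are illustrative only.

def decayed_learning_rate(initial_lr, step, decay_steps, decay_factor):
    """Learning rate after `step` steps, multiplied by `decay_factor`
    once every `decay_steps` steps."""
    return initial_lr * decay_factor ** (step // decay_steps)

# decay_steps derives from the dataset:
#   examples_per_epoch * learning-rate-decay-epochs / batch-size
examples_per_epoch = 10000  # hypothetical dataset size
decay_steps = int(examples_per_epoch * 2.0 / 32)  # 625 steps per decay

print(decayed_learning_rate(0.001, 0, decay_steps, 0.94))     # initial rate
print(decayed_learning_rate(0.001, 1250, decay_steps, 0.94))  # after two decays
```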

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*
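The 'calculated' defaults for `preprocessors` and `readers` above both reduce to half the logical CPU count. A sketch of that calculation is shown below; the function name and the floor of 1 are assumptions for illustration, not Guild AI's code.

```python
# Sketch of the calculated default for preprocessors/readers:
# logical CPU count / 2. Illustrative only; the floor of 1 is an
# assumption, not documented Guild AI behavior.
import os

def default_worker_count():
    cpus = os.cpu_count() or 1  # os.cpu_count() can return None
    return max(1, cpus // 2)

print(default_worker_count())
```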

vgg-19

======

*TF-Slim VGG 19 classifier*

Operations

^^^^^^^^^^

evaluate

--------

*Evaluate a trained model*

Flags

`````

**batch-size**

*Number of examples in each evaluated batch (100)*

**eval-batches**

*Number of batches to evaluate (all available)*

**step**

*Checkpoint step to evaluate (latest checkpoint)*

export-and-freeze

-----------------

*Export an inference graph with checkpoint weights*

Flags

`````

**step**

*Checkpoint step to use for the frozen graph (latest checkpoint)*

finetune

--------

*Finetune a trained model*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.0001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

label

-----

*Classify an image using a trained model*

Flags

`````

**image**

*Path to image to classify (required)*

tflite

------

*Generate a TFLite file from a frozen graph*

Flags

`````

**output-format**

*TF Lite output format (tflite)*

Choices:

- tflite

- graphviz_dot

**quantized**

*Whether or not output arrays are quantized (no)*

Choices:

- yes

- no

**quantized-inputs**

*Whether or not input arrays are quantized (no)*

Choices:

- yes

- no

train

-----

*Train model from scratch*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*

transfer-learn

--------------

*Train model using transfer learning*

Flags

`````

**auto-scale**

*Adjust applicable flags for multi-GPU systems (yes)*

Set to 'no' to disable any flag value adjustments.

When this value is 'yes' (the default) the following flags are adjusted on multi-GPU systems:

- clones

- learning-rate

`clones` is set to the number of available GPUs. `learning-rate` is adjusted by multiplying its specified value by the number of GPUs.

Flags are not adjusted on single-GPU or CPU-only systems.

**batch-size**

*Number of examples in each training batch (32)*

**clones**

*Number of model clones (calculated)*

This value is automatically set to the number of available GPUs if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be increased from 1 to train the model in parallel on multiple GPUs.

**learning-rate**

*Initial learning rate (0.001)*

**learning-rate-decay-epochs**

*Number of epochs after which learning rate decays (2.0)*

**learning-rate-decay-factor**

*Learning rate decay factor (0.94)*

**learning-rate-decay-type**

*Method used to decay the learning rate (exponential)*

Choices:

- exponential

- fixed

- polynomial

**learning-rate-end**

*Minimal learning rate used by polynomial learning rate decay (0.0001)*

**log-save-seconds**

*Frequency of log summary saves in seconds (60)*

**log-steps**

*Frequency of summary logs in steps (100)*

**model-save-seconds**

*Frequency of model saves (checkpoints) in seconds (600)*

**optimizer**

*Optimizer used to train (rmsprop)*

Choices:

- adadelta

- adagrad

- adam

- ftrl

- momentum

- rmsprop

- sgd

**preprocessing**

*Preprocessing to use (default for model)*

**preprocessors**

*Number of preprocessing threads (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize the preprocessor thread count for the system.

**readers**

*Number of parallel data readers (calculated)*

This value is automatically set to logical CPU count / 2 if `auto-scale` is 'yes'.

When `auto-scale` is 'no' this value can be set to optimize data reader performance for the system.

**train-steps**

*Number of steps to train (train indefinitely)*

**weight-decay**

*Decay on the model weights (4e-05)*


## Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename (size) | File type | Python version |
---|---|---|
gpkg.slim.models-0.5.1-py2.py3-none-any.whl (7.6 kB) | Wheel | py2.py3 |