# TF-Slim Inception models (v1, v2, v3, v4, and Inception ResNet v2)

## Models

## slim-inception-v1

*Inception v1 classifier in TF-Slim*

### Operations

#### fine-tune

*Fine-tune Inception v1*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

#### train

*Train Inception v1*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)
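These operations are intended to be run with the Guild AI command line. A minimal usage sketch, assuming the package is installed and using the model and operation names above (the flag values shown are illustrative, not recommendations):

```shell
# Train Inception v1 on the flowers dataset. Guild flags are passed
# as NAME=VALUE arguments after the MODEL:OPERATION spec.
guild run slim-inception-v1:train dataset=flowers max-steps=1000

# Fine-tune instead of training from scratch, overriding the
# default learning rate.
guild run slim-inception-v1:fine-tune dataset=flowers learning-rate=0.001
```

The same pattern applies to the other models below (e.g. `slim-inception-v3:train`); only the model name changes.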

## slim-inception-v2

*Inception v2 classifier in TF-Slim*

### Operations

#### fine-tune

*Fine-tune Inception v2*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

#### train

*Train Inception v2*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

## slim-inception-v3

*Inception v3 classifier in TF-Slim*

### Operations

#### fine-tune

*Fine-tune Inception v3*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

#### train

*Train Inception v3*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

## slim-inception-v4

*Inception v4 classifier in TF-Slim*

### Operations

#### fine-tune

*Fine-tune Inception v4*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

#### train

*Train Inception v4*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

## slim-inception-resnet-v2

*Inception ResNet v2 classifier in TF-Slim*

### Operations

#### fine-tune

*Fine-tune Inception ResNet v2*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

#### train

*Train Inception ResNet v2*

##### Flags

- **batch-size**: Number of samples in each batch (default is 32)
- **dataset**: Dataset to train with (cifar10, mnist, flowers)
- **learning-rate**: Initial learning rate (default is 0.01)
- **learning-rate-decay-type**: How the learning rate is decayed (default is 'exponential')
- **log-every-n-steps**: Steps between status updates (default is 10)
- **max-steps**: Maximum number of training steps (default is 1000)
- **optimizer**: Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is 'rmsprop')
- **save-model-secs**: Seconds between model saves (default is 60)
- **save-summaries-secs**: Seconds between summary saves (default is 60)
- **weight-decay**: Weight decay on the model weights (default is 4e-05)

## References

Modelfile: https://github.com/guildai/index/tree/master/slim/inception/MODELS

## Download Files

Download the file for your platform.

| File Name | Version | File Type | Upload Date |
|---|---|---|---|
| gpkg.slim.inception-0.1.0.dev5-py2.py3-none-any.whl (8.5 kB) | py2.py3 | Wheel | Nov 27, 2017 |
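The file above is a standard Python wheel, so it can be installed with pip using the package name shown in the file name:

```shell
# Install the packaged TF-Slim Inception models from PyPI.
pip install gpkg.slim.inception
```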