
TF-Slim Inception models (v1, v2, v3, v4, and Inception ResNet v2)

Project Description

Models

slim-inception-v1

Inception v1 classifier in TF-Slim

Operations

fine-tune

Fine-tune Inception v1

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)
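The fine-tune operation above is normally launched from the command line. As a minimal sketch, assuming the package is installed and, like other gpkg packages, exposes its operations through the Guild AI CLI (guild run model:operation flag=value), it could also be invoked programmatically; the flag values here are illustrative:

    import subprocess

    # Assumes the guild CLI is on PATH and that this package follows
    # Guild AI's model:operation flag=value run syntax.
    cmd = [
        "guild", "run", "-y",            # -y skips the confirmation prompt
        "slim-inception-v1:fine-tune",
        "dataset=flowers",               # one of: cifar10, mnist, flowers
        "batch-size=32",                 # default is 32
        "learning-rate=0.01",            # default is 0.01
        "max-steps=1000",                # default is 1000
    ]
    subprocess.run(cmd, check=True)      # raises CalledProcessError on failure

The same pattern applies to every other model and operation on this page; only the model:operation part changes.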

train

Train Inception v1

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)
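The default 'exponential' value of learning-rate-decay-type scales the learning rate by a fixed factor at a fixed step interval. A minimal sketch of the arithmetic, assuming the usual TF-Slim form lr = initial_lr * decay_rate ** (step / decay_steps); decay_steps and decay_rate below are assumptions for illustration, not flags documented here:

    # Exponential learning-rate decay, sketched for illustration. Only the
    # initial value (the learning-rate flag, default 0.01) is documented above;
    # decay_steps and decay_rate are assumed for the example.
    def exponential_decay(initial_lr, step, decay_steps, decay_rate):
        return initial_lr * decay_rate ** (step / decay_steps)

    for step in (0, 500, 1000):
        print(step, exponential_decay(0.01, step, decay_steps=500, decay_rate=0.94))
    # 0    -> 0.01
    # 500  -> 0.0094
    # 1000 -> ~0.008836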

slim-inception-v2

Inception v2 classifier in TF-Slim

Operations

fine-tune

Fine-tune Inception v2

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)

train

Train Inception v2

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)

slim-inception-v3

Inception v3 classifier in TF-Slim

Operations

fine-tune

Fine-tune Inception v3

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)

train

Train Inception v3

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)

slim-inception-v4

Inception v4 classifier in TF-Slim

Operations

fine-tune

Fine-tune Inception v4

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)

train

Train Inception v4

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)

slim-inception-resnet-v2

Inception ResNet v2 classifier in TF-Slim

Operations

fine-tune

Fine-tune Inception ResNet v2

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)

train

Train Inception ResNet v2

Flags
batch-size
Number of samples in each batch (default is 32)
dataset
Dataset to train with (cifar10, mnist, flowers)
learning-rate
Initial learning rate (default is 0.01)
learning-rate-decay-type
How the learning rate is decayed (default is ‘exponential’)
log-every-n-steps
Steps between status updates (default is 10)
max-steps
Maximum number of training steps (default is 1000)
optimizer
Training optimizer (adadelta, adagrad, adam, ftrl, momentum, sgd, rmsprop) (default is ‘rmsprop’)
save-model-secs
Seconds between model saves (default is 60)
save-summaries-secs
Seconds between summary saves (default is 60)
weight-decay
Weight decay on the model weights (default is 4e-05)

Release History

0.1.0.dev5 (this version)
0.1.0.dev4
0.1.0.dev3
0.1.0.dev2
0.1.0.dev1

Download Files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide's page on installing packages.

File Name & Hash SHA256 Hash Help Version File Type Upload Date
gpkg.slim.inception-0.1.0.dev5-py2.py3-none-any.whl
(8.5 kB) Copy SHA256 Hash SHA256
py2.py3 Wheel Nov 27, 2017
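PyPI publishes a SHA256 hash for each file (available via the copy control in the listing; the value is not reproduced here). A minimal verification sketch, where expected_hash is a placeholder to be filled in from the listing:

    import hashlib

    # Placeholder: paste the SHA256 value copied from the file listing above.
    expected_hash = "<sha256-from-the-listing>"

    with open("gpkg.slim.inception-0.1.0.dev5-py2.py3-none-any.whl", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()

    print("OK" if digest == expected_hash else "hash mismatch")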
