
Implementation of various semantic segmentation models in TensorFlow & Keras, including loaders for popular datasets

Project description


Install the system libraries (runtime dependencies of OpenCV):

sudo apt-get install libsm6 libxext6 libxrender-dev


Using one of the inbuilt datasets (generator)

python -m tf_semantic_segmentation.bin.train -ds 'tacobinary' -bs 8 -e 100 \
    -logdir 'logs/taco-binary-test' -o 'ranger' -lr 5e-3 --size 256,256 \
    -l 'binary_crossentropy' -fa 'sigmoid'

Using a fixed record path

python -m tf_semantic_segmentation.bin.train --record_dir=/hdd/datasets/cityscapes/records/cityscapes-512x256-rgb/ \
    -bs 4 -e 100 -logdir 'logs/cityscapes-bs8-e100-512x256' -o 'ranger' -lr 1e-4 -l 'categorical_crossentropy' \
    -fa 'softmax' -bufsize 50 --metrics='iou_score,f1_score' -m 'erfnet' --gpus='0' -a 'mish'
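The `-l`/`-fa` pairing in the commands above matters: the final activation must match the loss. A minimal pure-Python sketch (illustration only, not library code) of the two activations used here:

```python
import math

def sigmoid(x):
    """Per-pixel probability for binary segmentation.

    Pairs with binary_crossentropy (-fa 'sigmoid')."""
    return 1.0 / (1.0 + math.exp(-x))

def softmax(logits):
    """Per-pixel distribution over classes.

    Pairs with categorical_crossentropy (-fa 'softmax')."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# e.g. 3-class logits for one pixel -> probabilities summing to 1
probs = softmax([2.0, 1.0, 0.1])
```

This is why the library's `get_model_by_name` returns models without a final activation: the same backbone serves both binary and multi-class setups.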


Models

  • Erfnet
  • Unet
from tf_semantic_segmentation import models

# print all available models
print(list(models.models_by_name.keys()))

# returns a model without the final activation function,
# because the right activation depends on the loss function
model = models.get_model_by_name('erfnet')


Datasets

  • Ade20k
  • Camvid
  • Cityscapes
  • MappingChallenge
  • MotsChallenge
  • Coco
  • PascalVoc2012
  • Taco
  • Shapes (randomly creating triangles, rectangles and circles)
  • Toy (Overlaying TinyImageNet with MNIST)
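The Shapes dataset above exists for quick sanity checks: random geometric masks are cheap to generate and easy to overfit. A hypothetical minimal version (circles only, not the library's implementation) can be sketched as:

```python
import random

def random_circle_mask(height, width, rng=None):
    """Return a binary segmentation mask (list of lists) containing one random filled circle."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility in this sketch
    r = rng.randint(min(height, width) // 8, min(height, width) // 4)
    cy = rng.randint(r, height - 1 - r)  # keep the circle fully inside the image
    cx = rng.randint(r, width - 1 - r)
    return [[1 if (y - cy) ** 2 + (x - cx) ** 2 <= r * r else 0
             for x in range(width)]
            for y in range(height)]

mask = random_circle_mask(64, 64)
```

A model that cannot segment such masks almost certainly has a pipeline bug, which is the point of shipping a synthetic dataset.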
from tf_semantic_segmentation.datasets import get_dataset_by_name, datasets_by_name, DataType, get_cache_dir

# print available dataset names
print(list(datasets_by_name.keys()))

# get the binary (waste or not) dataset
data_dir = '/hdd/data/'
name = 'tacobinary'
cache_dir = get_cache_dir(data_dir, name.lower())
ds = get_dataset_by_name(name, cache_dir)

# print labels and classes
print(ds.labels)
print(ds.num_classes)

# print number of training examples
print(ds.num_examples(DataType.TRAIN))

# or simply print the summary
ds.summary()
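Conceptually, `datasets_by_name` / `get_dataset_by_name` follow a plain registry pattern. A hypothetical minimal sketch of that pattern (names and structure assumed, not the library's actual code):

```python
# Minimal dataset-registry sketch (hypothetical; shows the lookup pattern only)
datasets_by_name = {}

def register(name):
    """Class decorator that adds a dataset class to the registry under a lowercase key."""
    def decorator(cls):
        datasets_by_name[name.lower()] = cls
        return cls
    return decorator

def get_dataset_by_name(name, cache_dir):
    """Look up a dataset class by (case-insensitive) name and instantiate it."""
    try:
        return datasets_by_name[name.lower()](cache_dir)
    except KeyError:
        raise ValueError('unknown dataset %r, available: %s'
                         % (name, sorted(datasets_by_name)))

@register('tacobinary')
class TacoBinary:
    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
```

The registry is why dataset names are passed as plain strings on the command line (`-ds 'tacobinary'`): the CLI resolves them through the same lookup.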


This library simplifies the process of creating a tfrecord dataset for faster training.

Write tfrecords:

from tf_semantic_segmentation.datasets import TFWriter
ds = ...
writer = TFWriter(record_dir)
writer.write(ds)
writer.validate(ds)

or simply use this script (records will be saved with size 128 x 128 (width x height)):

tf-semantic-segmentation-tfrecord-writer -d 'toy' -c /hdd/datasets/ -s '128,128'
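Record files speed up training mainly because thousands of small image files become a few sequentially-read files. A simplified illustration of the idea using a length-prefixed binary format (this is NOT the actual TFRecord wire format, just the underlying concept):

```python
import io
import struct

def write_records(stream, payloads):
    """Write each payload as: 8-byte little-endian length header, then the raw bytes."""
    for p in payloads:
        stream.write(struct.pack('<Q', len(p)))
        stream.write(p)

def read_records(stream):
    """Yield payloads back in order by reading length headers sequentially."""
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return  # end of file
        (length,) = struct.unpack('<Q', header)
        yield stream.read(length)

# round-trip two fake serialized samples through an in-memory file
buf = io.BytesIO()
write_records(buf, [b'image-0', b'image-1'])
buf.seek(0)
records = list(read_records(buf))
```

Real TFRecords additionally store CRC checksums and serialize samples as `tf.train.Example` protos, but the sequential length-prefixed layout is the same idea.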

Prediction UI

# install
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install tensorflow-model-server

# start
tensorflow_model_server --rest_api_port=8501 --model_base_path=/home/baudcode/Code/python-keras-semantic-segmentation/logs/taco_binary_erfnet_256x256_bs_8_rgb_ranger_lr_5e-3-e100-ce_label_smoothing/saved_model/
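With the server running, predictions go through TensorFlow Serving's REST API (`POST /v1/models/<name>:predict` with a JSON body of `instances`). A sketch that only builds the request; the model name (`default` is tensorflow_model_server's default when `--model_name` is not set), input shape, and [0, 1] normalization are assumptions to verify against your export:

```python
import json

def build_predict_request(image, host='localhost', port=8501, model_name='default'):
    """Build URL and JSON body for TensorFlow Serving's REST predict endpoint.

    `image` is a nested list shaped (height, width, channels); values assumed in [0, 1].
    """
    url = 'http://%s:%d/v1/models/%s:predict' % (host, port, model_name)
    body = json.dumps({'instances': [image]})  # one image per request
    return url, body

# tiny fake 2x2 RGB image, just to show the payload shape
image = [[[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]],
         [[0.5, 0.5, 0.5], [0.2, 0.2, 0.2]]]
url, body = build_predict_request(image)
# send with e.g.: requests.post(url, data=body).json()['predictions']
```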

# start the web UI
pip install streamlit
streamlit run tf_semantic_segmentation/eval/

Files for tf-semantic-segmentation, version 0.1.0:

  • tf_semantic_segmentation-0.1.0-py3-none-any.whl (73.7 kB, Wheel, py3)
  • tf_semantic_segmentation-0.1.0.tar.gz (50.1 kB, Source)
