RefineNet semantic image segmentation

Project description

Please note this is only a beta release at this stage.

RefineNet: high-res semantic image segmentation


RefineNet is a generic multi-path refinement network for high-resolution semantic image segmentation and general dense prediction tasks on images. It achieves high-resolution prediction by explicitly exploiting all the information available along the down-sampling process and using long-range residual connections.
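
The core idea can be illustrated with a toy refinement block: a coarse, low-resolution feature map is upsampled and fused with a higher-resolution one through residual-style addition. The sketch below is conceptual only, not the network definition shipped in this package; all module and tensor names are illustrative:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRefineBlock(nn.Module):
    """Illustrative only: fuse a coarse path with a higher-resolution path."""

    def __init__(self, channels):
        super().__init__()
        # Adapt each path with a small convolution before fusion
        self.adapt_fine = nn.Conv2d(channels, channels, 3, padding=1)
        self.adapt_coarse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, fine, coarse):
        # Upsample the coarse path to the spatial resolution of the fine path
        coarse = F.interpolate(coarse, size=fine.shape[-2:], mode='bilinear',
                               align_corners=False)
        # Residual-style fusion: sum the two adapted paths
        return F.relu(self.adapt_fine(fine) + self.adapt_coarse(coarse))

# Fuse a higher-resolution feature map with one at half its resolution
fine = torch.randn(1, 256, 64, 64)
coarse = torch.randn(1, 256, 32, 32)
print(ToyRefineBlock(256)(fine, coarse).shape)  # torch.Size([1, 256, 64, 64])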

RefineNet sample image on PASCAL VOC dataset

This repository contains an open-source implementation of RefineNet in Python, with both the official and lightweight network models from our publications. The package provides PyTorch implementations of training, evaluation, and prediction for use in your own systems. The package is easily installable with Conda, and can also be installed via pip if you'd prefer to manually handle dependencies.

Our code is free to use, and licensed under BSD-3. We simply ask that you cite our work if you use RefineNet in your own research.

Video: RefineNet results on the Cityscapes dataset

Related resources

This repository brings the work from a number of sources together. Please see the links below for further details:

Installing RefineNet

We offer three methods for installing RefineNet:

  1. Through our Conda package: single command installs everything including system dependencies (recommended)
  2. Through our pip package: single command installs RefineNet and its Python dependencies, you take care of system dependencies
  3. Directly from source: allows easy editing and extension of our code, but you take care of building and all dependencies

Conda

The only requirement is that you have Conda installed on your system, and NVIDIA drivers installed if you want CUDA acceleration. We provide Conda packages through Conda Forge, which recommends adding their channel globally with strict priority:

conda config --add channels conda-forge
conda config --set channel_priority strict

Once you have access to the conda-forge channel, RefineNet is installed by running the following from inside a Conda environment:

u@pc:~$ conda install refinenet

We don't explicitly lock the PyTorch installation to a CUDA-enabled version to maximise compatibility with our users' possible setups. If you wish to ensure a CUDA-enabled PyTorch is installed, please use the following installation line instead:

u@pc:~$ conda install pytorch=*=*cuda* refinenet

You can see a list of our Conda dependencies in the RefineNet feedstock's recipe.
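
To confirm that the resulting environment picked up a CUDA-enabled PyTorch build (assuming NVIDIA drivers are installed), a quick check from Python is enough; the snippet below uses only standard PyTorch calls and is not specific to RefineNet:

import torch

# True only if a CUDA-enabled PyTorch build is installed and a GPU is visible
print(torch.__version__, torch.cuda.is_available())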

Pip

Before installing via pip, you must have the following system dependencies installed if you want CUDA acceleration:

  • NVIDIA drivers
  • CUDA

Then RefineNet and all of its Python dependencies can be installed via:

u@pc:~$ pip install refinenet

From source

Installing from source is very similar to the pip method above, as RefineNet contains only Python code. Simply clone the repository, enter the directory, and install via pip:

u@pc:~$ pip install -e .

Note: the editable mode flag (-e) is optional, but allows you to immediately use any changes you make to the code in your local Python ecosystem.
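
Whichever install mode you choose, a quick import check confirms the package is visible to your Python environment (the RefineNet class imported here is the same one documented in the API section below):

# Minimal sanity check that the install is importable
from refinenet import RefineNet

print(RefineNet)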

We also include scripts in the ./scripts directory to support running RefineNet without any pip installation, but this workflow means you need to handle all system and Python dependencies manually.

Using RefineNet

RefineNet can be used either entirely from the command line, or through its Python API. Both call the same underlying implementation, and as such offer equivalent functionality. We provide both options to facilitate use across a wide range of applications. See below for details of each method.

RefineNet from the command line

When installed, either via pip or conda, a refinenet executable is made available on your system PATH (the scripts in the ./scripts directory can be used as an alternative if not installing via a package manager).

The refinenet executable provides access to all functionality, including training, evaluation, and prediction. See the --help flags for details on what the command line utility can do, and how it can be configured:

u@pc:~$ refinenet --help
u@pc:~$ refinenet train --help
u@pc:~$ refinenet evaluate --help
u@pc:~$ refinenet predict --help

RefineNet Python API

RefineNet can also be used like any other Python package through its API. The API consists of a RefineNet class with three main functions for training, evaluation, and prediction. Below are some examples to help get you started with RefineNet:

from refinenet import RefineNet

# Initialise a full RefineNet network with no pre-trained model
r = RefineNet()

# Initialise a standard RefineNet network with a model pre-trained on NYU
r = RefineNet(model_type='full', load_pretrained='nyu')

# Initialise a lightweight RefineNet network with 40 classes
r = RefineNet(model_type='lightweight', num_classes=40)

# Load a previous snapshot from a 152 layer network
r = RefineNet(load_snapshot='/path/to/snapshot', num_resnet_layers=152)

# Train a new model on the NYU dataset with a custom learning rate
r.train('nyu', learning_rate=0.0005)

# Train a model with the adam optimiser & 8 workers, saving output to ~/output
r.train('voc', optimiser_type='adam', num_workers=8,
        output_directory='~/output')

# Get a predicted segmentation as a NumPy image, given an input NumPy image
segmentation_image = r.predict(image=my_image)

# Save a segmentation image to file, given an input image loaded from another file
r.predict(image_file='/my/prediction.jpg',
          output_file='/my/segmentation/image.jpg')

# Evaluate your model's performance on the voc dataset, & save the results with
# images
r.evaluate('voc', output_directory='/my/results.json', output_images=True)
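
As a fuller sketch of the prediction workflow, the snippet below loads an input image from disk as a NumPy array, runs prediction, and also writes a second result straight to file. The file paths and pre-trained model choice are placeholders; the RefineNet calls themselves are the same ones shown above:

import numpy as np
from PIL import Image

from refinenet import RefineNet

# Initialise a network with a pre-trained model, as in the examples above
r = RefineNet(model_type='full', load_pretrained='nyu')

# Load an input image as a NumPy array (paths here are placeholders)
my_image = np.array(Image.open('/path/to/input.jpg'))

# Run prediction and inspect the returned segmentation image
segmentation_image = r.predict(image=my_image)
print(segmentation_image.shape)

# Alternatively, let RefineNet handle the file I/O end-to-end
r.predict(image_file='/path/to/input.jpg',
          output_file='/path/to/segmentation.jpg')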

Citing our work

If using RefineNet in your work, please cite our original CVPR paper:

@InProceedings{Lin_2017_CVPR,
  author = {Lin, Guosheng and Milan, Anton and Shen, Chunhua and Reid, Ian},
  title = {RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {July},
  year = {2017}
}

Please also cite our BMVC paper on Light-Weight RefineNet if using the lightweight models:

@article{nekrasov2018light,
  title={Light-weight refinenet for real-time semantic segmentation},
  author={Nekrasov, Vladimir and Shen, Chunhua and Reid, Ian},
  journal={arXiv preprint arXiv:1810.03272},
  year={2018}
}

Download files

Download the file for your platform.

Source Distribution

refinenet-0.9.5.tar.gz (26.3 kB)

Uploaded Source

Built Distribution

refinenet-0.9.5-py3-none-any.whl (32.4 kB)

Uploaded Python 3

File details

Details for the file refinenet-0.9.5.tar.gz.

File metadata

  • Download URL: refinenet-0.9.5.tar.gz
  • Size: 26.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.4.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.9.5

File hashes

Hashes for refinenet-0.9.5.tar.gz

  • SHA256: f85e8f1e7dde6f63345b35cb710ae60fa2f54218218ac3b77ede6368949a9fb1
  • MD5: 13124ab052ef9d0eda7b8b7501c2a250
  • BLAKE2b-256: 689d27bc1d497ca94875375cb0ec9656d2882df46380751308d77ba5ed02db0b


File details

Details for the file refinenet-0.9.5-py3-none-any.whl.

File metadata

  • Download URL: refinenet-0.9.5-py3-none-any.whl
  • Size: 32.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.4.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.0 CPython/3.9.5

File hashes

Hashes for refinenet-0.9.5-py3-none-any.whl

  • SHA256: 458adb434c49fb56afe90687fc84f026e41a73599e2d2ee05b5f8eb43775e2a9
  • MD5: 9e6510267e64d4f08b2a545ae952227b
  • BLAKE2b-256: ab03c2bc9bc093e15db3a9bc5c6e716c2212739fe49a015f5bbee7848951d5e3

