# Scripts for the Cityscapes Dataset

## The Cityscapes Dataset
This repository contains scripts for inspection, preparation, and evaluation of the Cityscapes dataset. This large-scale dataset contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high quality pixel-level annotations of 5 000 frames in addition to a larger set of 20 000 weakly annotated frames.
Details and download are available at: www.cityscapes-dataset.net
The folder structure of the Cityscapes dataset is as follows:
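The layout can be reconstructed from the elements described below (consult the dataset documentation for the authoritative scheme); a small path-building helper, not part of cityscapesscripts, illustrates it:

```python
import os

def cityscapes_path(root, type_, split, city, seq, frame, ext):
    """Build a file path following the scheme
    {root}/{type}/{split}/{city}/{city}_{seq:06d}_{frame:06d}_{type}{ext}.
    Illustrative helper, not part of cityscapesscripts."""
    name = f"{city}_{seq:06d}_{frame:06d}_{type_}{ext}"
    return os.path.join(root, type_, split, city, name)

# e.g. the fine polygon annotation of frame 19 of the first Aachen sequence:
print(cityscapes_path("/data/cityscapes", "gtFine", "train",
                      "aachen", 0, 19, "_polygons.json"))
```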
The meaning of the individual elements is:

- `root`: the root folder of the Cityscapes dataset. Many of our scripts check whether an environment variable `CITYSCAPES_DATASET` pointing to this folder exists and use it as the default choice.
- `type`: the type/modality of data, e.g. `gtFine` for fine ground truth, or `leftImg8bit` for left 8-bit images.
- `split`: the split, i.e. train/val/test/train_extra/demoVideo. Note that not all kinds of data exist for all splits. Thus, do not be surprised to occasionally find empty folders.
- `city`: the city in which this part of the dataset was recorded.
- `seq`: the sequence number using 6 digits.
- `frame`: the frame number using 6 digits. Note that in some cities very few, albeit very long sequences were recorded, while in some cities many short sequences were recorded, of which only the 19th frame is annotated.
- `ext`: the extension of the file and optionally a suffix, e.g. `_polygons.json` for ground truth files.
Possible values of `type` are:
- `gtFine`: the fine annotations, 2975 training, 500 validation, and 1525 testing images. This type of annotation is used for validation, testing, and optionally for training. Annotations are encoded using `json` files containing the individual polygons. Additionally, we provide `png` images, where pixel values encode labels. Please refer to `helpers/labels.py` and the scripts in `preparation` for details.
- `gtCoarse`: the coarse annotations, available for all training and validation images and for another set of 19998 training images (`train_extra`). These annotations can be used for training, either together with `gtFine` or alone in a weakly supervised setup.
- `gtBbox3d`: 3D bounding box annotations of vehicles. Please refer to Cityscapes 3D (Gählert et al., CVPRW '20) for details.
- `gtBboxCityPersons`: pedestrian bounding box annotations, available for all training and validation images. Please refer to `helpers/labels_cityPersons.py` as well as CityPersons (Zhang et al., CVPR '17) for more details. The four values of a bounding box are (x, y, w, h), where (x, y) is its top-left corner and (w, h) its width and height.
- `leftImg8bit`: the left images in 8-bit LDR format. These are the standard annotated images.
- `leftImg8bit_blurred`: the left images in 8-bit LDR format with faces and license plates blurred. Please compute results on the original images but use the blurred ones for visualization. We thank Mapillary for blurring the images.
- `leftImg16bit`: the left images in 16-bit HDR format. These images offer 16 bits per pixel of color depth and contain more information, especially in very dark or bright parts of the scene. Warning: the images are stored as 16-bit pngs, which is non-standard and not supported by all libraries.
- `rightImg8bit`: the right stereo views in 8-bit LDR format.
- `rightImg16bit`: the right stereo views in 16-bit HDR format.
- `timestamp`: the time of recording in ns. The first frame of each sequence always has a timestamp of 0.
- `disparity`: precomputed disparity depth maps. To obtain the disparity values, compute for each pixel p with p > 0: d = (float(p) - 1.) / 256., while a value p = 0 is an invalid measurement. Warning: the images are stored as 16-bit pngs, which is non-standard and not supported by all libraries.
- `camera`: internal and external camera calibration. For details, please refer to csCalibration.pdf.
- `vehicle`: vehicle odometry, GPS coordinates, and outside temperature. For details, please refer to csCalibration.pdf.
More types might be added over time and also not all types are initially available. Please let us know if you need any other meta-data to run your approach.
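The disparity decoding rule above can be written out as a tiny helper (a pure-Python sketch for single pixel values; real code would apply it element-wise to a 16-bit png loaded as an array):

```python
def decode_disparity(p):
    """Decode one raw 16-bit disparity value:
    d = (float(p) - 1.) / 256. for p > 0;
    p == 0 marks an invalid measurement and is returned as None here."""
    if p == 0:
        return None
    return (float(p) - 1.0) / 256.0

print([decode_disparity(p) for p in (0, 1, 257, 513)])
# → [None, 0.0, 1.0, 2.0]
```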
Possible values of `split` are:
- `train`: usually used for training, contains 2975 images with fine and coarse annotations.
- `val`: should be used for validation of hyper-parameters, contains 500 images with fine and coarse annotations. Can also be used for training.
- `test`: used for testing on our evaluation server. The annotations are not public, but we include annotations of ego-vehicle and rectification border for convenience.
- `train_extra`: can be optionally used for training, contains 19998 images with coarse annotations.
- `demoVideo`: video sequences that could be used for qualitative evaluation; no annotations are available for these videos.
## Installation

Install `cityscapesscripts` with pip:

```
python -m pip install cityscapesscripts
```

Graphical tools (viewer and label tool) are based on Qt5 and can be installed via:

```
python -m pip install cityscapesscripts[gui]
```
The installation installs the Cityscapes scripts as a Python module named `cityscapesscripts` and exposes the following tools:
- `csDownload`: Download the Cityscapes packages via command line.
- `csViewer`: View the images and overlay the annotations.
- `csLabelTool`: Tool that we used for labeling.
- `csEvalPixelLevelSemanticLabeling`: Evaluate pixel-level semantic labeling results on the validation set. This tool is also used to evaluate the results on the test set.
- `csEvalInstanceLevelSemanticLabeling`: Evaluate instance-level semantic labeling results on the validation set. This tool is also used to evaluate the results on the test set.
- `csEvalPanopticSemanticLabeling`: Evaluate panoptic segmentation results on the validation set. This tool is also used to evaluate the results on the test set.
- `csEvalObjectDetection3d`: Evaluate 3D object detection on the validation set. This tool is also used to evaluate the results on the test set.
- `csCreateTrainIdLabelImgs`: Convert annotations in polygonal format to png images with label IDs, where pixels encode "train IDs" that you can define in `labels.py`.
- `csCreateTrainIdInstanceImgs`: Convert annotations in polygonal format to png images with instance IDs, where pixels encode instance IDs composed of "train IDs".
- `csCreatePanopticImgs`: Convert annotations in standard png format to COCO panoptic segmentation format.
The package is structured as follows:

- `helpers`: helper files that are included by other scripts
- `viewer`: view the images and the annotations
- `preparation`: convert the ground truth annotations into a format suitable for your approach
- `evaluation`: validate your approach
- `annotation`: the annotation tool used for labeling the dataset
- `download`: downloader for Cityscapes packages
Note that all files have a small documentation at the top. The most important files are:

- `helpers/labels.py`: central file defining the IDs of all semantic classes and providing mappings between various class properties.
- `helpers/labels_cityPersons.py`: file defining the IDs of all CityPersons pedestrian classes and providing mappings between various class properties.
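The mapping pattern used by `helpers/labels.py` can be sketched with a self-contained miniature. The few entries shown follow the standard Cityscapes label definitions, but `labels.py` itself is the authoritative source:

```python
from collections import namedtuple

# Minimal stand-in for the Label tuple defined in helpers/labels.py.
Label = namedtuple("Label", ["name", "id", "trainId", "color"])

# A few entries of the standard Cityscapes label set (see labels.py for the full list).
labels = [
    Label("unlabeled",  0, 255, (0, 0, 0)),
    Label("road",       7,   0, (128, 64, 128)),
    Label("sidewalk",   8,   1, (244, 35, 232)),
    Label("car",       26,  13, (0, 0, 142)),
]

# labels.py derives lookup dictionaries in the same spirit:
name2label = {l.name: l for l in labels}
id2trainId = {l.id: l.trainId for l in labels}

print(name2label["car"].trainId)  # → 13
print(id2trainId[7])              # → 0
```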
Run `CYTHONIZE_EVAL= python setup.py build_ext --inplace` to enable the Cython plugin for faster evaluation. This has only been tested on Ubuntu.
Once you want to test your method on the test set, please run your approach on the provided test images and submit your results:
For semantic labeling, we require the result format to match the format of our label images named `labelIds`. Thus, your code should produce images where each pixel's value corresponds to a class ID as defined in `labels.py`.
Note that our evaluation scripts are included in the scripts folder and can be used to test your approach on the validation set.
For further details regarding the submission process, please consult our website.
Please feel free to contact us with any questions, suggestions or comments: