Zenseact Open Dataset
The Zenseact Open Dataset (ZOD) is a large multi-modal autonomous driving dataset developed by a team of researchers at Zenseact. The dataset is split into three categories: Frames, Sequences, and Drives. For more information about the dataset, please refer to our paper (coming soon), or visit our website.
Examples
Find examples of how to use the dataset in the examples folder. There you will find a set of Jupyter notebooks that demonstrate how to use the dataset, as well as an example of how to train an object detection model using Detectron2.
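As a rough illustration of what the notebooks cover, the snippet below loads the mini Frames subset with the Python devkit. The names used here (ZodFrames, dataset_root, version, get_all_ids) are assumptions about the devkit API; the notebooks in the examples folder are the authoritative reference.

# Sketch only: class and method names are assumed, see the example notebooks.
from zod import ZodFrames

# Point dataset_root at the directory the dataset was downloaded into.
zod_frames = ZodFrames(dataset_root="path/to/zod", version="mini")

frame_ids = zod_frames.get_all_ids()  # assumed accessor for the available frame ids
print(f"Loaded {len(frame_ids)} frames from the mini subset")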
Installation
To install the library with minimal dependencies, for instance to be used in a training environment without the need for interactivity or visualization, run:
pip install zod
To install the library along with the CLI, which can be used to download the dataset, convert between formats, and perform visualization, run:
pip install "zod[cli]"
To install the full devkit, with the CLI and all dependencies, run:
pip install "zod[all]"
Download using the CLI
This is an example of how to download the ZOD Frames mini-dataset using the CLI. Prerequisites are that you have applied for access and received a download link. The simplest way to download the dataset is to use the CLI interactively:
zod download
This will prompt you for the required information, present you with a summary of the download, and then ask for confirmation. You can of course also specify all the required information directly on the command line and avoid the confirmation using --no-confirm or -y. For example:
zod download -y --url="<download-link>" --output-dir=<path/to/outputdir> --subset=frames --version=mini
By default, all data streams are downloaded for ZodSequences and ZodDrives. For ZodFrames, DNAT versions of the images, and surrounding (non-keyframe) lidar scans are excluded. To download them as well, run:
zod download -y --url="<download-link>" --output-dir=<path/to/outputdir> --subset=frames --version=full --num-scans-before=-1 --num-scans-after=-1 --dnat
If you want to exclude some of the data streams, you can do so by specifying the corresponding --no-<stream> flag. For example, to download only the DNAT images, infos, and annotations, run:
zod download --dnat --no-blur --no-lidar --no-oxts --no-vehicle-data
Finally, for a full list of options you can of course run:
zod download --help
Anonymization
To preserve privacy, the dataset is anonymized. The anonymization is performed by brighterAI, and we provide two separate modes of anonymization: deep fakes (DNAT) and blur. In our paper, we show that the performance of an object detector is not affected by the anonymization method. For more details regarding this experiment, please refer to our paper (coming soon).
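For devkit users, the sketch below illustrates how the two anonymization modes might be selected when reading a frame's camera image. The Anonymization constant and the get_image call are assumptions about the devkit API and may differ from the actual interface; see the example notebooks for the exact usage.

# Sketch only: Anonymization and get_image are assumed names, not documented here.
from zod import ZodFrames
from zod.constants import Anonymization

zod_frames = ZodFrames(dataset_root="path/to/zod", version="mini")
frame = zod_frames[next(iter(zod_frames.get_all_ids()))]

image_blur = frame.get_image(Anonymization.BLUR)  # blur-anonymized camera image
image_dnat = frame.get_image(Anonymization.DNAT)  # deep-fake (DNAT) camera image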
Citation
If you publish work that uses the Zenseact Open Dataset, please cite (full citation coming soon):
@misc{zod2021,
  author = {TODO},
  title = {Zenseact Open Dataset},
  year = {2023},
  publisher = {TODO},
  journal = {TODO},
}
Contact
For questions about the dataset, please contact us.
Contributing
We welcome contributions to the development kit. If you would like to contribute, please open a pull request.
License
Dataset: This dataset is the property of Zenseact AB (© 2023 Zenseact AB) and is licensed under CC BY-SA 4.0. Any public use, distribution, or display of this dataset must contain this notice in full:
For this dataset, Zenseact AB has taken all reasonable measures to remove all personally identifiable information, including faces and license plates. To the extent that you like to request the removal of specific images from the dataset, please contact privacy@zenseact.com.
Development kit: This development kit is the property of Zenseact AB (© 2023 Zenseact AB) and is licensed under MIT.