
An open-source PyTorch framework for cooperative detection in autonomous driving

Project description

OpenCOODX


Overview

OpenCOOD is an Open COOperative Detection framework for autonomous driving. It is also the official implementation of the ICRA 2022 paper OPV2V. [Website] [Paper: OPV2V] [Documents] [OpenCOOD]

opencoodx is a ready-to-go package of OpenCOOD that you can install easily with pip.

Installation

pip install opencoodx

# to upgrade 
pip install --upgrade opencoodx

Features

  • Provide an easy data API for the Vehicle-to-Vehicle (V2V) multi-modal perception dataset OPV2V

    It currently provides an easy API to load LiDAR data from multiple agents simultaneously in a structured format and convert it directly to PyTorch tensors for model use (see the sketch after this feature list).

  • Provide multiple SOTA 3D detection backbones

    It supports state-of-the-art LiDAR detectors, including PointPillar, PIXOR, VoxelNet, and SECOND.

  • Support the most common fusion strategies

    It includes the three most common fusion strategies: early fusion, late fusion, and intermediate fusion across different agents.

  • Support several SOTA multi-agent visual fusion models

    It supports the most recent multi-agent perception algorithms (currently up to Sep. 2021), including Attentive Fusion, Cooper (early fusion), F-Cooper, V2VNet, etc. We will keep adding the newest algorithms.

  • Provide a convenient log replay toolbox for the OPV2V dataset (coming soon)

    It also provides an easy tool to replay the original OPV2V dataset. More importantly, it allows users to enrich the original dataset by attaching new sensors or defining additional tasks (e.g. tracking, prediction) without changing the events in the initial dataset (e.g. positions and number of all vehicles, traffic speed).
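
To make the data API feature concrete, here is a minimal sketch of the kind of loading loop it targets: per-agent LiDAR point clouds are read from disk and converted to PyTorch tensors for a detection model. The class name OPV2VLidarDataset and its on-disk layout are hypothetical placeholders, not the actual opencoodx API; see the documentation for the real entry points.

# Hypothetical sketch only -- class and field names are placeholders, not the real opencoodx API.
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class OPV2VLidarDataset(Dataset):
    """Placeholder dataset: one sample = a dict of per-agent LiDAR point clouds."""

    def __init__(self, root, split="train"):
        self.root, self.split = root, split
        # A real loader would index the OPV2V frame folders here; left empty in this sketch.
        self.frames = []

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        # Each agent contributes an (N, 4) array of x, y, z, intensity points.
        agent_clouds = {agent_id: np.load(path) for agent_id, path in self.frames[idx].items()}
        # Convert every agent's cloud to a PyTorch tensor for the detector.
        return {agent_id: torch.from_numpy(pc).float() for agent_id, pc in agent_clouds.items()}

loader = DataLoader(OPV2VLidarDataset("./opv2v"), batch_size=1, collate_fn=lambda batch: batch[0])
for sample in loader:
    pass  # feed the per-agent tensors to a cooperative fusion model here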

Prerequisites - Dependencies

1. PyTorch installation (>=1.10)

Go to https://pytorch.org/ to install the CUDA version of PyTorch. PyTorch 1.11 is recommended.
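
For example, at the time of writing a CUDA 11.3 build of PyTorch 1.11 could be installed with a command along these lines (adjust the cu113 suffix to your CUDA version; the selector on pytorch.org gives the exact command for your setup):

pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113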

2. Spconv (2.x)

Install spconv 2.x based on your CUDA version. For more details, please check: https://pypi.org/project/spconv/
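
For example, on a CUDA 11.3 machine the matching wheel would be installed with the command below (swap the suffix, e.g. spconv-cu114 or spconv-cu117, to match your CUDA version):

pip install spconv-cu113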

3. Bbx IoU CUDA compilation

Install the CUDA version of the bounding-box (bbx) NMS calculation using the following command:

opencoodx --bbx

Prerequisites - Data files

1. Download trained model files

To download the trained models, run the following command in your terminal:

# download all models 
opencoodx --model all
# download one model
opencoodx --model ${model_name}

Arguments Explanation:

  • all: To download all models

  • model_name: We have 11 trained models ready to use; you can choose from the following (a usage example follows the list):

    • pointpillar_attentive_fusion

    • pointpillar_early_fusion

    • pointpillar_fcooper

    • pointpillar_late_fusion

    • v2vnet

    • voxelnet_early_fusion

    • voxelnet_attentive_fusion

    • second_early_fusion

    • second_attentive_fusion

    • second_late_fusion

    • pixor_early_fusion
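
For example, to fetch only the PointPillar + F-Cooper weights from the list above:

opencoodx --model pointpillar_fcooper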

2. Offline data download (optional)

To download the offline dataset, simply use the command:

opencoodx --data ${dataset_name}

Arguments Explanation:

  • dataset_name: str type. There are 4 different dataset names; you can choose from 'test_culver_city', 'test', 'validate' or 'train'.
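
For example, to download the training split:

opencoodx --data train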

Quick Start

Data sequence visualization

To quickly visualize the LiDAR stream in the OPV2V dataset, you need to download the offline data to your current working directory and then run the following command:

opencoodx --vis_data ${dataset_name} --vis_color ${color_mode}

Arguments Explanation:

  • dataset_name: str type, the dataset you've downloaded. You can choose from 'test_culver_city', 'test', 'validate' or 'train'.

  • color_mode: str type, indicating the LiDAR color rendering mode. You can choose from 'constant', 'intensity' or 'z-value'.
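
For example, to replay the 'test' split with points colored by intensity:

opencoodx --vis_data test --vis_color intensity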

Benchmark and model zoo

Results on the OPV2V dataset (AP@0.7, reported as no compression / compression)

| Method           | Backbone    | Fusion Strategy | Bandwidth (Megabit), before/after compression | Default Towns | Culver City | Download |
|------------------|-------------|-----------------|-----------------------------------------------|---------------|-------------|----------|
| Naive Late       | PointPillar | Late            | 0.024/0.024                                   | 0.781/0.781   | 0.668/0.668 | url      |
| Cooper           | PointPillar | Early           | 7.68/7.68                                     | 0.800/x       | 0.696/x     | url      |
| Attentive Fusion | PointPillar | Intermediate    | 126.8/1.98                                    | 0.815/0.810   | 0.735/0.731 | url      |
| F-Cooper         | PointPillar | Intermediate    | 72.08/1.12                                    | 0.790/0.788   | 0.728/0.726 | url      |
| V2VNet           | PointPillar | Intermediate    | 72.08/1.12                                    | 0.822/0.814   | 0.734/0.729 | url      |
| Naive Late       | VoxelNet    | Late            | 0.024/0.024                                   | 0.738/0.738   | 0.588/0.588 | url      |
| Cooper           | VoxelNet    | Early           | 7.68/7.68                                     | 0.758/x       | 0.677/x     | url      |
| Attentive Fusion | VoxelNet    | Intermediate    | 576.71/1.12                                   | 0.864/0.852   | 0.775/0.746 | url      |
| Naive Late       | SECOND      | Late            | 0.024/0.024                                   | 0.775/0.775   | 0.682/0.682 | url      |
| Cooper           | SECOND      | Early           | 7.68/7.68                                     | 0.813/x       | 0.738/x     | url      |
| Attentive        | SECOND      | Intermediate    | 63.4/0.99                                     | 0.826/0.783   | 0.760/0.760 | url      |
| Naive Late       | PIXOR       | Late            | 0.024/0.024                                   | 0.578/0.578   | 0.360/0.360 | url      |
| Cooper           | PIXOR       | Early           | 7.68/7.68                                     | 0.678/x       | 0.558/x     | url      |
| Attentive        | PIXOR       | Intermediate    | 313.75/1.22                                   | 0.687/0.612   | 0.546/0.492 | url      |

Note:

  • We suggest using PointPillar as the backbone when you create your own method and compare against our benchmark, since we implement most of the SOTA methods with this backbone only.
  • We assume a transmission rate of 27 Mbps. Since the LiDAR runs at 10 Hz, the per-frame bandwidth requirement should be below 27 Mbps / 10 Hz = 2.7 Mb to avoid severe delay.
  • An 'x' in the benchmark table means the bandwidth requirement is too large to be practical.
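
As a quick sanity check of the bandwidth note, the short Python snippet below (illustrative only; the numbers are copied from the benchmark table) computes the per-frame budget and shows which uncompressed intermediate-fusion features exceed it while their compressed versions fit:

# Illustrative check of the bandwidth note; figures copied from the benchmark table above.
rate_mbps = 27.0                  # assumed V2V transmission rate (megabit per second)
lidar_hz = 10.0                   # LiDAR capture frequency
budget_mb = rate_mbps / lidar_hz  # = 2.7 megabit available per frame

# (method, backbone, bandwidth in megabit before compression, after compression)
rows = [
    ("Attentive Fusion", "PointPillar", 126.8, 1.98),
    ("V2VNet", "PointPillar", 72.08, 1.12),
    ("Attentive Fusion", "VoxelNet", 576.71, 1.12),
]
for method, backbone, before, after in rows:
    print(f"{method} ({backbone}): raw feature {'exceeds' if before > budget_mb else 'fits'} "
          f"the {budget_mb:.1f} Mb/frame budget; compressed feature "
          f"{'fits' if after <= budget_mb else 'exceeds'} it")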

Tutorials

We have a series of tutorials to help you understand OpenCOOD better; please check them out.

Citation

If you are using our OpenCOOD framework or OPV2V dataset for your research, please cite the following paper:

@inproceedings{xu2022opencood,
  author = {Runsheng Xu and Hao Xiang and Xin Xia and Xu Han and Jinlong Li and Jiaqi Ma},
  title = {OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication},
  booktitle = {2022 IEEE International Conference on Robotics and Automation (ICRA)},
  year = {2022}
}

Also, under this LICENSE, OpenCOOD is for non-commercial research only. Researchers may modify the source code for their own research only. Contracted work that generates corporate revenue and other general commercial uses are prohibited under this LICENSE. See the LICENSE file for details and possible opportunities for commercial use.

Future Plans

  • Provide camera APIs for OPV2V
  • Provide the log replay toolbox
  • Implement F-Cooper
  • Implement V2VNet
  • Implement DiscoNet

Contributors

OpenCOOD is supported by the UCLA Mobility Lab. We also appreciate the great work from OpenPCDet, as parts of our work use their framework.

Lab Principal Investigator:

Project Lead:

