
[Image: self-driving vehicle sensor visualization]

Written by Tamás Matuszka / Posted at 4/21/23

aiMotive paper to be presented at ICLR 2023 Workshop SR4AD

The paper titled ‘aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception’ by Tamás Matuszka et al. has been accepted to the ‘Scene Representations for Autonomous Driving’ (SR4AD) workshop at ICLR 2023, the eleventh International Conference on Learning Representations. The workshop covers the real-world impact of ML research on self-driving technology. The work of aiMotive researchers will be presented in Kigali, Rwanda, on May 5, 2023. The paper introduces the publicly available aiMotive Multimodal Dataset and describes several 3D object detection baseline models trained on it.

Autonomous driving is a popular research area in the computer vision community, supported by numerous publicly available datasets. However, most public multimodal datasets cover only two sensor modalities (camera and LiDAR), neither of which copes well with adverse weather. In addition, they lack far-range annotations, making it harder to train the neural networks that underpin highway assistant functions in autonomous vehicles. These shortcomings motivated us to release a multimodal dataset for robust autonomous driving with long-range perception. The dataset consists of scenes with synchronized and calibrated LiDAR, camera, and radar sensors covering a 360-degree field of view. The data was collected on highways and in urban and suburban areas, by day and night, and also in rain, and is annotated with 3D bounding boxes with consistent identifiers across frames.
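To make the annotation format concrete, here is a minimal sketch of what a 3D bounding box record with a persistent track identifier could look like. The field names are purely illustrative and are not the dataset's actual schema:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Box3D:
    """Hypothetical 3D bounding box annotation (illustrative fields only,
    not the aiMotive Multimodal Dataset's actual schema)."""
    track_id: int                           # identifier kept consistent across frames
    category: str                           # e.g. "car", "truck", "pedestrian"
    center_xyz: Tuple[float, float, float]  # box center in the ego frame, meters
    size_lwh: Tuple[float, float, float]    # length, width, height, meters
    yaw: float                              # heading around the vertical axis, radians

# The same physical car observed in two consecutive frames keeps track_id=7,
# which is what "consistent identifiers across frames" means in practice.
frame_10 = [Box3D(7, "car", (142.3, -1.8, 0.9), (4.5, 1.9, 1.6), 0.02)]
frame_11 = [Box3D(7, "car", (141.1, -1.8, 0.9), (4.5, 1.9, 1.6), 0.02)]
```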

We used two methods to generate ground truth labels. The first was automatic annotation of the training data with our dynamic auto-annotation tool, aiNotate. Since a good dataset must ensure high-quality annotations and avoid the systematic bias that automatic annotation can introduce, the resulting sequences were manually quality-checked against multiple criteria. The second method was manual annotation of the validation data.
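The specific quality-check criteria are not reproduced here, but as a rough, hypothetical illustration of the kind of automated sanity checks that can complement such a manual review, consider filters on box size and track continuity (reusing the illustrative Box3D record from the sketch above; the thresholds below are assumptions, not values from the paper):

```python
import math

# Hypothetical per-class length limits in meters; not the criteria used in the paper.
LENGTH_LIMITS = {"car": (3.0, 6.5), "truck": (5.0, 20.0), "pedestrian": (0.3, 1.2)}

def plausible_size(box) -> bool:
    """Flag boxes whose length falls outside a rough per-class range."""
    lo, hi = LENGTH_LIMITS.get(box.category, (0.2, 25.0))
    return lo <= box.size_lwh[0] <= hi

def track_is_continuous(frames, track_id, max_jump_m=3.0) -> bool:
    """Flag tracks whose center jumps implausibly far between consecutive frames."""
    centers = [b.center_xyz for frame in frames for b in frame if b.track_id == track_id]
    return all(math.dist(c0, c1) <= max_jump_m for c0, c1 in zip(centers, centers[1:]))

# e.g. track_is_continuous([frame_10, frame_11], track_id=7) -> True
```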

One of the main contributions of our work is the number and range of annotated objects at significant distances. The most popular dataset does not provide annotations beyond 80 meters, while ours provides ground truth up to 200 meters. The ability to detect distant objects is essential for developing advanced driver assistance systems (ADAS) operating on highways, yet existing datasets are of limited use for this purpose. By releasing our dataset and models to the public, we seek to facilitate research in multimodal sensor fusion and robust long-range perception systems.
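To make the range comparison concrete, a small sketch (again hypothetical, not the paper's evaluation code) can bucket annotated objects by their distance from the ego vehicle; a dataset labeled only up to 80 meters leaves the farther bins empty, while 200-meter annotations populate bins out to the 160-200 m range:

```python
import math
from collections import Counter

def range_histogram(centers_xy, bin_width=40):
    """Count annotated objects per distance bin (meters from the ego vehicle)."""
    counts = Counter()
    for x, y in centers_xy:
        lo = int(math.hypot(x, y) // bin_width) * bin_width
        counts[f"{lo}-{lo + bin_width} m"] += 1
    return dict(counts)

# Toy example: objects roughly 25 m, 95 m, and 185 m ahead of the ego vehicle.
print(range_histogram([(25.0, 0.0), (95.0, 3.0), (185.0, -2.0)]))
# -> {'0-40 m': 1, '80-120 m': 1, '160-200 m': 1}
```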

For further details on the dataset, baseline models, implementation, and experimental results, please see: https://openreview.net/forum?id=LW3bRLlY-SA