Leading the way: aiMotive’s research contributions to global conferences


Written by Tamás Matuszka / Posted at 9/24/24


Research is at the heart of everything we do at aiMotive. Automated driving and AI are among the fastest-changing technology fields, so we constantly explore new ideas and strive to improve our products by developing innovative solutions. This commitment to scientific inquiry is evident in our contributions to conferences, where we not only share our latest findings but also engage with the global research community. By prioritizing research, we ensure that our products and services are not just industry-leading but also grounded in the latest scientific advancements. This blog post gives a short overview of our work published at scientific conferences.

Semi-Pseudo-Labeling 

Training neural networks requires a large and diverse set of annotated data. However, obtaining high-quality, sufficiently large datasets can be expensive and sometimes impossible due to human and sensor limitations. In our ACIIDS 2022 paper, "A Novel Neural Network Training Method for Autonomous Driving Using Semi-Pseudo-Labels and 3D Data Augmentations" (available on arXiv and SpringerLink), we address this challenge in the context of far-field 3D object detection, a problem particularly affected by the scarcity of quality training data. We developed a method called semi-pseudo-labeling, which lets us leverage 2D bounding box annotations for training 3D object detectors. By combining semi-pseudo-labeling with 3D data augmentations, we significantly extended the detection range of the neural network beyond the original training data distribution. The publication of a similar approach by researchers from NVIDIA, UC Berkeley, Caltech, and CUHK at CVPR 2024 further underscores the ongoing relevance and difficulty of distant object detection.
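
The full training procedure is defined in the paper; the sketch below is only a minimal illustration of the gating idea behind pseudo-label selection: keep 3D predictions whose image projection agrees with an annotated 2D box and reuse them as labels. It assumes axis-aligned boxes without yaw, a pinhole camera with intrinsics K, and a hypothetical prediction format, none of which are taken from the paper.

```python
import numpy as np

def iou_2d(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] image boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def project_box_to_image(center, size, K):
    """Project an axis-aligned 3D box (camera coordinates, z forward)
    to its tightest enclosing 2D image box."""
    cx, cy, cz = center
    l, h, w = size                          # length, height, width
    corners = np.array([[cx + sx * l / 2, cy + sy * h / 2, cz + sz * w / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    uv = (K @ corners.T).T                  # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    return [uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()]

def select_pseudo_labels(preds_3d, boxes_2d, K, iou_thr=0.5, score_thr=0.3):
    """Keep 3D predictions whose image projection overlaps an annotated
    2D box; the survivors become pseudo 3D labels for retraining."""
    pseudo = []
    for p in preds_3d:                      # p: {"center", "size", "score"}
        if p["score"] < score_thr:
            continue
        proj = project_box_to_image(p["center"], p["size"], K)
        if any(iou_2d(proj, gt) >= iou_thr for gt in boxes_2d):
            pseudo.append(p)
    return pseudo
```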

Multimodal dataset designed for robust autonomous driving  

Our paper, "aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception" (arXiv, OpenReview), published at the ICLR 2023 SR4AD Workshop, was inspired by the challenges mentioned earlier. Several public multimodal datasets were available at the time of publication, but they primarily featured only two sensor modalities (camera and LiDAR), without radar data. Additionally, these datasets lacked far-range annotations, making it difficult to train neural networks for highway assistance functions in autonomous vehicles.

To address these gaps, we released a multimodal dataset designed for robust autonomous driving with long-range perception. It includes scenes captured with synchronized and calibrated LiDAR, camera, and radar sensors, offering a full 360-degree field of view. The data was collected on highways, in urban and suburban areas, under various conditions—day and night, as well as in the rain—and is annotated with 3D bounding boxes with consistent identifiers across frames. By making our dataset and models publicly available, we aimed to advance research in multimodal sensor fusion and robust long-range perception systems. For more details on the dataset, baseline models, implementation, and experimental results, please visit our GitHub repository.
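
To give a feel for what such a sample contains, here is a minimal sketch of a multimodal frame and a far-range filter. The field names and layout are illustrative assumptions, not the dataset's actual schema; see the GitHub repository for the real loader and format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Frame:
    """One synchronized multimodal sample (field names are illustrative)."""
    images: dict                # camera name -> HxWx3 RGB array
    lidar_points: np.ndarray    # N x 4: x, y, z, intensity
    radar_targets: np.ndarray   # M x 5: x, y, z, radial velocity, RCS
    boxes_3d: list              # dicts with center, size, yaw, category, track_id

def far_range_boxes(frame, min_dist=100.0):
    """Annotations beyond min_dist meters -- the long-range labels that
    set this dataset apart from camera+LiDAR-only benchmarks."""
    return [b for b in frame.boxes_3d
            if np.linalg.norm(np.asarray(b["center"])[:2]) >= min_dist]
```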

Reducing the computational load of active learning 

One common challenge in autonomous driving development is selecting the most valuable data from massive collections of recordings. Active learning, a machine learning approach that cuts labeling costs by choosing the most informative samples from an unlabeled dataset, is a promising solution. However, traditional active learning methods can be resource-intensive, which limits their scalability and efficiency. In our NeurIPS 2023 RealML Workshop paper, "Compute-Efficient Active Learning" (arXiv, OpenReview), we tackle this issue with a new method designed to reduce the computational load of active learning on large datasets.

Our approach introduces a simple yet effective method-agnostic framework for strategically selecting and annotating data points, optimizing the process for efficiency without compromising model performance. Through case studies, we show how our method can significantly lower computational costs while maintaining or improving model outcomes. For further details, please check the corresponding NeurIPS page and GitHub repository.
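
The framework itself is specified in the paper; the sketch below only illustrates the general idea of spending acquisition compute on a random candidate subset instead of the whole pool. The function names, the subsampling ratio, and the top-k selection are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def acquisition_round(pool, score_fn, k, candidate_frac=0.1, seed=0):
    """One active learning round that scores only a random fraction of the
    unlabeled pool instead of all of it, cutting the dominant compute cost.

    pool      -- 1D array of unlabeled sample indices
    score_fn  -- maps indices to informativeness (e.g., model uncertainty);
                 this is the expensive call we are economizing on
    k         -- number of samples to send for annotation
    """
    rng = np.random.default_rng(seed)
    n = max(k, int(len(pool) * candidate_frac))
    candidates = rng.choice(pool, size=n, replace=False)
    scores = score_fn(candidates)               # expensive step, now ~10x cheaper
    return candidates[np.argsort(scores)[-k:]]  # the k most informative samples
```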



The industry's hot topic: neural rendering

Neural reconstruction is undoubtedly one of the hottest research topics in 2024, and we are actively contributing to this field. Our work, "Controllable Neural Reconstruction for Autonomous Driving," was showcased at CVPR 2024 and will also be part of the ECCV 2024 Demonstrations. Additionally, it was presented as a poster at SIGGRAPH 2024 and featured in the 'Video Generation' Technical Papers session. In this project, we developed an automated pipeline for training neural reconstruction models using sensor streams from a data collection vehicle.  

These models are then used to create a virtual replica of the real world, which can be replayed or manipulated in a controlled environment. To bring these scenes to life, our simulator adds dynamic agents to the recreated static environment, handling occlusion and lighting effects. This highly flexible simulator allows us to tweak various parameters like agent behavior and weather conditions to create diverse scenarios.  
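
As a rough illustration of what "controllable" means here, the sketch below models a scenario variant as a small configuration object. All names (AgentSpec, ScenarioVariant, the behavior presets, the recording ID) are hypothetical and do not reflect our simulator's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """A dynamic agent injected into the reconstructed static scene
    (names are illustrative, not the simulator's real interface)."""
    kind: str               # e.g. "car", "pedestrian"
    behavior: str           # preset such as "cut_in" or "follow_lane"
    speed_mps: float

@dataclass
class ScenarioVariant:
    """One controllable variation of a reconstructed drive."""
    source_recording: str                 # drive the neural scene was built from
    weather: str = "clear"                # e.g. "clear", "rain"
    time_of_day: str = "day"
    agents: list = field(default_factory=list)

# Replay the same reconstructed scene as a rainy cut-in scenario:
variant = ScenarioVariant(
    source_recording="drive_0042",        # hypothetical recording ID
    weather="rain",
    agents=[AgentSpec(kind="car", behavior="cut_in", speed_mps=25.0)],
)
```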

The first large-scale 3D traffic light and sign dataset 

Our latest work, "Accurate Automatic 3D Annotation of Traffic Lights and Signs for Autonomous Driving" (arXiv), will be presented at the ECCV 2024 VCAD Workshop. We introduce the first large-scale 3D traffic light and sign dataset with far-field annotations, created entirely through our automated annotation method without any manual work. This novel method generates precise, temporally consistent 3D bounding box annotations for traffic lights and signs, effective up to 200 meters. These annotations are ideal for training real-time models in self-driving cars, which require vast amounts of training data.  

Our scalable approach uses only RGB images, 2D bounding boxes of traffic management objects obtained automatically with an off-the-shelf image-space detector network, and GNSS/INS data, eliminating the need for LiDAR point clouds. The quantitative results confirm the accuracy and feasibility of our method. We hope this dataset will benefit the community and support the development of 3D traffic light and sign detectors. For more information, visit the corresponding GitHub repository.
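
The full pipeline (temporal aggregation, box fitting, consistency checks) is described in the paper; below is only a sketch of the core geometric step such a pipeline can build on: triangulating a static object from several 2D detections and GNSS/INS camera poses. Function names and conventions are assumptions for illustration.

```python
import numpy as np

def pixel_to_ray(uv, K, R, t):
    """Back-project pixel uv into a world-space ray, given camera
    intrinsics K and a camera-to-world pose (R, t) from GNSS/INS."""
    d = R @ (np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0]))
    return t, d / np.linalg.norm(d)        # ray origin, unit direction

def triangulate(rays):
    """Least-squares point nearest to all rays: minimize
    sum_i || (I - d_i d_i^T) (p - o_i) ||^2 over p."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for o, d in rays:
        P = np.eye(3) - np.outer(d, d)     # projects onto the plane normal to d
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Detections of the same sign in several frames -> one 3D position:
# rays = [pixel_to_ray(box_center_px, K, R_i, t_i) for each frame i]
# sign_position = triangulate(rays)
```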



In addition to publishing and showcasing our work at conferences, aiMotive researchers actively serve as reviewers for several events, including the NeurIPS Datasets & Benchmarks Track, the CVPR Workshop on Autonomous Driving, and the ECCV Workshop on Robust, Out-of-Distribution, and Multi-Modal Models for Autonomous Driving. By participating as reviewers, we believe we can further contribute to the research community and support these pioneering events.

Links: 

ACIIDS 2022 

Title: A Novel Neural Network Training Method for Autonomous Driving Using Semi-Pseudo-Labels and 3D Data Augmentations 

Paper: https://arxiv.org/abs/2207.09869 / https://link.springer.com/chapter/10.1007/978-3-031-21967-2_18 

ICLR 2023 SR4AD Workshop

Page: https://opendrivelab.com/sr4ad/iclr23

Title: aiMotive Dataset: A Multimodal Dataset for Robust Autonomous Driving with Long-Range Perception 

Paper: https://arxiv.org/abs/2211.09445 / https://openreview.net/forum?id=LW3bRLlY-SA 

GitHub: https://github.com/aimotive/aimotive_dataset

NeurIPS 2023 RealML Workshop

Page: https://realworldml.github.io/neurips2023/

Title: Compute-Efficient Active Learning 

Paper: https://arxiv.org/abs/2401.07639 / https://neurips.cc/media/neurips-2023/Slides/78767_7FrbVd0.pdf

GitHub: https://github.com/aimotive/Compute-Efficient-Active-Learning 

CVPR 2024 - Demo

Page: https://cvpr.thecvf.com/virtual/2024/demonstration/32145

Title: Controllable Neural Reconstruction for Autonomous Driving

Project: https://aimotive.com/cvpr2024

SIGGRAPH 2024 - Poster 

Title: Controllable Neural Reconstruction for Autonomous Driving 

Paper: https://dl.acm.org/doi/10.1145/3641234.3671082 

ECCV 2024 - Demo 

Title: Controllable Neural Reconstruction for Autonomous Driving 

Page: https://eccv.ecva.net/virtual/2024/demonstration/2785 

ECCV 2024 VCAD Workshop

Page: https://vcad-workshop.github.io/

Title: Accurate Automatic 3D Annotation of Traffic Lights and Signs for Autonomous Driving 

Paper: https://arxiv.org/pdf/2409.12620  

GitHub: https://github.com/aimotive/aimotive_tl_ts_dataset