Written by Péter Kozma / Posted at 8/16/18
Why Sensor Fusion is Critical for Self-Driving Cars
Autonomous systems monitor their surroundings with several sensors: cameras, radars, ultrasonic sensors and, optionally, LiDARs. Why not just one sensor? As Gergely Debreczeni explained on this blog, redundancy is the foundation of safe autonomy. Let’s dive deeper into the technical aspects of the question.
A redundant sensor system fuses different types of sensors that are properly calibrated and synchronized. This is the only way to achieve safe and stable longitudinal and lateral control. Three factors lie at the core of sensor fusion: spatial calibration, temporal synchronization, and the proper fusion of data.
First, all the sensors mentioned above must work in the same, shared coordinate system. This is created through the process of spatial calibration. Precise calibration is vital, as an erroneous calibration can lead to dangerous situations. For example, if the orientation of the cameras is not known exactly, the system will miscalculate the trajectory it has to follow and could hit obstacles that it recognized and planned to avoid.
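To make the idea of a shared coordinate system concrete, here is a minimal sketch of applying an extrinsic calibration: a rotation and translation that map a point detected in one sensor's frame into a common vehicle frame. The mounting pose and axis conventions below are illustrative assumptions, not values from the article.

```python
import numpy as np

# Hypothetical extrinsic calibration of a front camera relative to the
# vehicle frame (x forward, y left, z up). The rotation maps camera axes
# (z forward, x right, y down) onto vehicle axes; the translation is the
# assumed mounting position of the camera in metres.
R_cam_to_vehicle = np.array([
    [0.0,  0.0, 1.0],   # camera z (forward) -> vehicle x
    [-1.0, 0.0, 0.0],   # camera x (right)   -> vehicle -y
    [0.0, -1.0, 0.0],   # camera y (down)    -> vehicle -z
])
t_cam_in_vehicle = np.array([1.8, 0.0, 1.4])  # 1.8 m ahead, 1.4 m up

def camera_to_vehicle(p_cam):
    """Map a 3-D point from the camera frame into the shared vehicle frame."""
    return R_cam_to_vehicle @ p_cam + t_cam_in_vehicle

# An obstacle seen 10 m straight ahead of the camera lands at
# x = 11.8 m in the vehicle frame:
p = camera_to_vehicle(np.array([0.0, 0.0, 10.0]))
```

If the rotation above were even slightly wrong, every detection from this camera would be projected to the wrong place in the vehicle frame, which is exactly the miscalculated-trajectory failure described above.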
A common solution is in-factory calibration. However, this is a limiting factor, as the car would have to return to calibration base stations time and time again to ensure safe operation. At AImotive we believe that only regularly repeated, even on the fly, automated calibration can provide the required level of safety for autonomous vehicles.
But spatial calibration is not enough, which leads to the second factor. Synchronizing the sensors in time is also vital, and they have to be in sync not only with each other but also with the processing unit. Imagine the problems caused if the system connected contradictory sensor data captured at different moments and used them for decision making. In many cases, simply discarding a dataset is less dangerous than relying on desynchronized data. aiDrive minimizes the risk of desynchronization: based on the system clock, the processing unit sends periodic trigger pings requesting data from the sensors. The responses are time-coded, so the system knows exactly which moment in time each piece of data belongs to.
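The "discard rather than fuse stale data" policy above can be sketched in a few lines. The function, its name, and the 20 ms tolerance are illustrative assumptions; the idea is simply that a time-coded reading is only accepted if it is close enough to the trigger time.

```python
# Hypothetical matching of time-coded sensor responses to a trigger time.
# A reading is a (timestamp_seconds, measurement) pair; if no reading is
# fresh enough, None is returned and the dataset is dropped, on the
# principle that no data is safer than desynchronized data.

TOLERANCE_S = 0.02  # assumed acceptable skew: 20 ms

def select_reading(readings, trigger_time, tolerance=TOLERANCE_S):
    """Return the measurement whose timestamp is closest to trigger_time,
    or None if even the closest one is outside the tolerance."""
    best = min(readings, key=lambda r: abs(r[0] - trigger_time), default=None)
    if best is None or abs(best[0] - trigger_time) > tolerance:
        return None  # drop the dataset instead of fusing stale data
    return best[1]
```

A real pipeline would run this per sensor per trigger ping, so each fusion cycle only ever sees measurements that verifiably belong to the same moment.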
Third, and finally: sensor fusion. This should be seen as reconstructing the world from the different sensory inputs. However, the reconstruction has to be safe to plan with even if one of the sensors on the vehicle (a camera, radar or LiDAR) fails. It is also important to understand the strengths and weaknesses of each sensor type, so the system knows which sensor to trust in a given situation. An inaccurate measurement can lead to anomalies and the risk of highly erratic behavior. There are several well-known approaches to these difficulties, for example Kalman filters, through which the system can predict expected measurements based on previous data and a proper underlying system model. And as neural networks become increasingly widespread in autonomous technology, there are possibilities to be explored in that direction as well.
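To illustrate the Kalman-filter idea mentioned above, here is a deliberately simplified one-dimensional predict-and-update step (tracking a single position with a known velocity). The noise values are illustrative assumptions; real trackers use multi-dimensional state and tuned covariances.

```python
# Minimal 1-D Kalman filter step: predict where the tracked object should
# be from the previous state, then blend that prediction with a noisy
# measurement according to their respective uncertainties.

def kalman_step(x, v, P, z, dt, q=0.1, r=1.0):
    """x: position estimate, v: known velocity, P: position variance,
    z: measured position, dt: time step,
    q: assumed process noise, r: assumed measurement noise variance."""
    # Predict: advance the state and grow the uncertainty.
    x_pred = x + v * dt
    P_pred = P + q
    # Update: the Kalman gain weighs prediction against measurement.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

The same mechanism is what lets a fused system cope with one sensor dropping out: with no measurement, the filter keeps predicting from its model while its uncertainty grows, and a wildly inaccurate measurement (a large `z - x_pred` residual) can be flagged before it drags the estimate into erratic behavior.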
All three factors: spatial calibration, temporal synchronization and sensor fusion are vital for safe self-driving. At AImotive we always strive to find the most advanced and safest solutions to the myriad questions of autonomous technology. Our goal is not only to solve self-driving, but to ensure that autonomous technology is scalable, and most importantly, safe.