Sensor fusion paves the way to safer self-driving

Written by Péter Kozma / Posted at 2/15/19

An estimated 1.3 million people die in road accidents every year. This statistic is repeated almost daily by the automated driving industry, and indeed we ourselves have stressed it before. The goal of automated driving is to save lives: first by making it harder for human drivers to cause accidents, then by taking over from them entirely. One of the most vital elements in achieving this goal is the vehicle's sensor setup. To maximize safety and efficiency, the sensor setup must be planned to exploit the strengths of each sensor to the fullest while also taking steps to counter their deficiencies.

So, how can these deficiencies be countered? Adding more of the same sensors to the setup is not the best solution. Using several different sensor types in unison, however, and fusing their outputs leads to a drastic increase in safety. If the sensor types are chosen to create a complementary and redundant setup, each sensor's deficiencies can be countered effectively, as the simple sketch below illustrates.
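
To make the idea of a complementary setup concrete, here is a minimal, illustrative Python sketch of fusing a single camera detection with a single radar detection: the camera contributes the object class and a precise bearing, the radar contributes range and radial velocity. All class names, fields, and thresholds are hypothetical and chosen only for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    bearing_deg: float   # angle to the object, from image geometry
    obj_class: str       # e.g. "car", "pedestrian" (only the camera can classify)
    confidence: float    # classifier score in 0..1

@dataclass
class RadarDetection:
    bearing_deg: float   # coarse angle measured by the radar
    range_m: float       # precise distance to the object
    velocity_mps: float  # radial (Doppler) velocity relative to the ego vehicle

@dataclass
class FusedObject:
    obj_class: str
    bearing_deg: float
    range_m: float
    velocity_mps: float

def fuse(cam: CameraDetection, rad: RadarDetection,
         max_bearing_gap_deg: float = 3.0) -> Optional[FusedObject]:
    """Associate one camera and one radar detection and merge their strengths."""
    # Only fuse detections that plausibly belong to the same object.
    if abs(cam.bearing_deg - rad.bearing_deg) > max_bearing_gap_deg:
        return None
    return FusedObject(
        obj_class=cam.obj_class,        # classification: a camera strength
        bearing_deg=cam.bearing_deg,    # angular precision: a camera strength
        range_m=rad.range_m,            # range: a radar strength
        velocity_mps=rad.velocity_mps,  # direct velocity measurement: a radar strength
    )

if __name__ == "__main__":
    cam = CameraDetection(bearing_deg=12.4, obj_class="car", confidence=0.91)
    rad = RadarDetection(bearing_deg=13.1, range_m=48.7, velocity_mps=-2.3)
    print(fuse(cam, rad))
```

Neither detection alone describes the object fully; fused together, the estimate carries the class, position, and velocity needed for planning.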

Proving this point theoretically is easy; in practice, however, things are slightly more complex. Which sensors are the most mature and cost-effective to use in self-driving and sensor fusion? Three are most commonly associated with automated driving: cameras, LiDARs, and radars.

Digital cameras are an obvious choice. Not only is their modus operandi the closest to that of human drivers, i.e. vision, but they are also tried-and-tested, cost-effective sensors that have been in use for about fifty years. However, it is also well known that a large number of accidents are caused by errors in human judgment made on the basis of visual information. Errors in human perception can also be caused by a wide range of environmental factors: adverse weather, darkness or, conversely, blinding sunlight. As cameras and eyes work in very similar ways, camera-based perception systems are affected by the same conditions.

Radar is also a well-known, even older technology than the digital camera. The signals emitted by radars are almost unaffected by weather and lighting conditions: detection quality is barely altered by rain, snow or fog. On its own, however, radar is not enough. The boundaries of objects, for example, are detected at far too low a resolution, and the entire transportation ecosystem built around visual cues – lane markings, traffic lights, signs – is invisible to radio waves.

Younger than the above, but almost unavoidable in automated driving, are LiDARs. Like radars, LiDARs measure distances with extreme precision and are unaffected by darkness, since they carry their own light source. On closer examination, however, it soon becomes apparent that LiDARs are drastically affected by rain, snow, and especially fog. The information density of a LiDAR point cloud is also much lower than that of a camera image, and LiDARs cannot detect, for example, the status of traffic lights.

Based on the above, it is clear that none of these technologies is enough on its own. Naturally, one of the major areas of automated driving development is researching new sensors. One of these groundbreaking technologies is imaging radar, a new generation of radars that will drastically increase both the horizontal and vertical resolution of today's traditional radars. Similar gains can be expected of long-wavelength imaging LiDARs. In camera technology, the simultaneous detection of visible and mid-infrared wavelengths is promising. All of these development efforts are concentrated on a single goal: creating a sensor free of the limitations mentioned above. The key will be finding the “perfect wavelength”, unaffected by changing environmental conditions, combined with a working principle that provides image-like resolution.

However, advanced driver assistance systems are already in production, not only in development. Increasing the safety of everyday road traffic may therefore depend not on the perfect sensor, but on the best possible system of different sensors tied together by robust sensor fusion. The comparison above summarizes the strengths of each of the sensors discussed. Based on this breakdown, it is possible to create a system that plays to each sensor's strengths and limits the effects of its weaknesses, for instance by weighting each modality according to the current conditions, as sketched below.
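
As a rough illustration of this idea, the following Python snippet down-weights each modality under the conditions that degrade it, leaving radar nearly untouched; the specific factors and rules are hypothetical placeholders, not tuned values from any production system.

```python
from typing import Dict

def sensor_weights(darkness: bool, fog: bool, heavy_rain: bool) -> Dict[str, float]:
    """Return normalized trust weights for each modality under the given conditions."""
    weights = {"camera": 1.0, "radar": 1.0, "lidar": 1.0}
    if darkness:
        weights["camera"] *= 0.4    # passive vision degrades without light
    if fog:
        weights["lidar"] *= 0.3     # laser returns scatter strongly in fog
        weights["camera"] *= 0.5
    if heavy_rain:
        weights["lidar"] *= 0.6
        weights["camera"] *= 0.7
        weights["radar"] *= 0.95    # radio waves are barely attenuated
    # Normalize so the weights sum to 1 for use in a weighted combination.
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

if __name__ == "__main__":
    # Example: a foggy night strongly favors radar over camera and LiDAR.
    print(sensor_weights(darkness=True, fog=True, heavy_rain=False))
```

Real systems use far more sophisticated, probabilistic weighting, but the principle is the same: no single sensor is trusted unconditionally.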

The best solution? There isn’t one, at least in the traditional sense. There are only best solutions for certain use cases or within current design and production limitations. While a fusion of camera and radar is currently the most viable for mass production (due to their low price), achieving true self-driving will require each sensor to advance to a level where their performance is comparable. It is precisely because of the unique strengths and weaknesses of each sensor type that, working in unison, they can form a robust and complementary system.