AImotive Blog

Gergely Debreczeni – How self-driving cars understand the world (map)


October 01, 2018
Gergely Debreczeni, our Chief Scientist, presented at the Auto.ai conference in Berlin.

It may sound trivial, but getting a self-driving system to understand its own movement is no easy task. 

The Basics – The problem of ego-motion estimation, localization and the concept of true autonomy are deeply connected. However, to understand their relationship we first have to examine the terminology. Ego-motion estimation is the ability to determine the relative translation and orientation of the ego-vehicle with respect to its own state at a previous point in time. (How much the vehicle has moved and how it has turned between t1 and t2.) Localization is the ability to determine the vehicle's position and orientation with respect to local road elements (lanes, road-side, etc.) or a global coordinate system (GPS) and road network.
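The distinction can be made concrete with a minimal sketch (my own illustration, not AImotive code): ego-motion is the displacement between t1 and t2 expressed in the vehicle's own frame at t1, while localization would express the pose in a world or map frame. Here, for simplicity, a planar pose is (x, y, heading):

```python
import math

def relative_motion(pose_t1, pose_t2):
    """Ego-motion: how far the vehicle moved and turned between t1 and t2,
    expressed in the vehicle frame at t1 (not in world/map coordinates)."""
    x1, y1, th1 = pose_t1
    x2, y2, th2 = pose_t2
    dx, dy = x2 - x1, y2 - y1
    # Rotate the world-frame displacement into the t1 vehicle frame.
    fwd = math.cos(th1) * dx + math.sin(th1) * dy
    lat = -math.sin(th1) * dx + math.cos(th1) * dy
    # Wrap the heading change into (-pi, pi].
    dth = (th2 - th1 + math.pi) % (2 * math.pi) - math.pi
    return fwd, lat, dth

# Vehicle heading north (90 degrees) drives 5 m straight ahead:
fwd, lat, dth = relative_motion((0.0, 0.0, math.pi / 2), (0.0, 5.0, math.pi / 2))
# fwd = 5.0 m forward, no lateral motion, no turn.
```

A full 6-degrees-of-freedom version would use 3D rotations instead of a single heading angle, but the idea is the same.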

About self-motion – For a self-driving car to work, it must first understand itself. This is not an allusion to some form of ultra-intelligent vehicle after the singularity. This self-understanding is limited to the vehicle's understanding of its physical location in the world, and how movement changes where it is. At first glance this is trivial. But for a computer system, the question is not so simple.

Without robust ego-motion, a self-driving car has as much understanding of its own movement as a pebble thrown into a pond. Understanding self-motion happens on multiple levels.

  1. The vehicle must be able to measure and estimate its own inertial motion (acceleration).
  2. It needs to understand its own speed and location in relation to local road elements (road surface, road markings) and other participants of traffic.
  3. Finally, to navigate from A to B, the vehicle must also have a notion of its global position and be able to place itself on a navigation map (SD map).

Through these stages, the system can understand where it is in the world, how it moves and where it is heading. Precise ego-motion estimation is important, but robustness is the key. One of the most precise ways to measure ego-motion is based on visual information, but it is vulnerable to occlusion and bad weather, and can fail in a featureless environment. A scalable and truly operational way is to use wheel speed information coupled with one or more IMU sensors, optionally augmented with camera information. The vehicle can then use the vector of its movement (6 degrees of freedom: 3 translation, 3 orientation) to track its position over time.
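The wheel-speed-plus-IMU approach described above is classic dead reckoning. The following minimal planar sketch (an illustration under simplified assumptions, not AImotive's implementation) integrates wheel speed and gyro yaw rate into a pose; in practice the accumulated drift is what camera or map-based localization must correct:

```python
import math

def dead_reckon(pose, wheel_speed, yaw_rate, dt):
    """One step of planar dead reckoning: wheel speed gives distance
    travelled, the IMU gyro gives the change in heading. Errors accumulate
    over time, so this must be periodically corrected by localization."""
    x, y, th = pose
    # Integrate position along the average heading over the step (midpoint rule).
    th_mid = th + 0.5 * yaw_rate * dt
    x += wheel_speed * dt * math.cos(th_mid)
    y += wheel_speed * dt * math.sin(th_mid)
    return x, y, th + yaw_rate * dt

# Drive at 10 m/s for 1 s with no turning: the vehicle advances 10 m.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = dead_reckon(pose, wheel_speed=10.0, yaw_rate=0.0, dt=0.01)
```

A real system would do this in 6 degrees of freedom and fuse the sources in a filter (e.g. an extended Kalman filter), but the integration step is the core idea.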

Drawn live at Auto.ai in Berlin, this infographic summarizes Gergely's presentation.

The need for true autonomy – As in many safety-critical environments, precision is outweighed by robustness. Consider a situation in which an autonomous system executes an emergency trajectory because of a widespread sensor failure. In this case, it is vital to ensure the vehicle comes to a stop on the hard shoulder as quickly and safely as possible. Speed and trajectory are vastly more important than where the vehicle is on a map. A safe stopping location can be found (and is continuously identified) by the perception system alone, without a map. However, the car must still understand where it is moving and when it has stopped. The possibility of such failures means the vehicle must be able to reach a safe state in any environment.

As a result, no solution that requires external connectivity is sufficient for this purpose. The system should first believe its own eyes and use information provided by external systems only for safety crosschecks. Such an approach requires highly reliable and accurate on-the-fly environmental reconstruction capabilities. This is another reason why the robust, redundant and complementary sensor system I discussed in my previous blog is so vital. It is also why we consider maps just another sensor.
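Treating the map as "just another sensor" can be sketched as follows. In this hypothetical crosscheck (the function name, values and 0.5 m tolerance are my own illustrative assumptions), the live perception result is authoritative and the map's prediction is used only to flag disagreement for the safety layer:

```python
def fuse_lane_offset(perceived, map_predicted, tolerance=0.5):
    """Treat the map as just another sensor: the on-the-fly perception value
    (lateral offset to the lane center, in meters) is authoritative; the
    map-derived value serves only as a safety crosscheck.
    Returns (offset_to_use, sources_agree)."""
    if map_predicted is None:
        # No map coverage here: drive on perception alone.
        return perceived, True
    agree = abs(perceived - map_predicted) <= tolerance
    return perceived, agree

# Perception and map agree within tolerance: crosscheck passes.
offset, ok = fuse_lane_offset(perceived=1.2, map_predicted=1.3)
```

The key design choice, per the argument above, is that a missing or stale map degrades the crosscheck, never the driving function itself.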

How to use HD maps – Current ADAS and autonomous solutions use HD maps in different ways:

  1. Some don't use HD maps at all, relying only on on-the-fly detection.
  2. Some use HD maps for localization only.
  3. Others use HD maps for localization and also extract higher-level semantic attributes from them (substituting or complementing on-the-fly detections).

And of course, proper SD maps are used for navigation in all cases. Relying heavily (or exclusively) on semantic information from HD maps (for example, where the lanes are), instead of detecting it on-the-fly with high confidence, can be dangerous. Detailed 3D HD maps require constant updates, while temporary changes such as roadworks may not be added to them. The availability of such maps also geofences autonomous solutions into limited areas. As a result, while extra information from HD maps is a welcome safety bonus, an autonomous system cannot rely solely on them.

We believe that self-driving functionality should come from sensors on the vehicle, which gather real-time information. Maps are a tool to effectively increase the safety of autonomous systems. Diversity is key to robustness, which is why all sources of information are welcome, provided they are used according to their strengths.

About the author: Gergely Debreczeni has 15+ years of R&D expertise in the fields of physics, mathematics, computer science and machine learning. Holding a PhD in particle physics, he is AImotive's Chief Scientist.