That simulation is needed to develop self-driving systems has, over the last couple of years, become a truism. However, the complexity of using simulation is often overlooked. Over the summer I provided insight into how AImotive has built a development pipeline around aiSim, our in-house simulator for autonomous vehicles. Working alongside our self-driving researchers, our aiSim team has developed a deep understanding of the demands an autonomous system places on a simulator. The result? We understand that simulators will have to do even more in the future.
Current simulation software is commonly based on game engines. This is an obvious choice, as they provide a platform for immediate development. Furthermore, because the whole game industry uses them, there are countless developers and engineers well versed in them. Thanks to their wide user bases, support is readily available, and their graphical quality keeps advancing in response to consumer demand. This gives teams looking into simulation for self-driving a starting point with minimal overhead. It is no accident that when aiSim development started, we ourselves turned to a game engine for rendering. However, despite all these inherent advantages, game engines have limitations for the purposes of autonomous driving.
These disadvantages stem from the very same advantages. Game engines are built with consumer end-users in mind and, as a result, are not optimized for high-end, industrial-scale computing. They are designed to bring the most out of everyday PCs in end users’ homes rather than multi-GPU, multi-CPU setups. A game engine is designed to appeal aesthetically to gamers, so it employs effects and textures that do not necessarily correspond to the laws of physics in our reality. Finally, every scene is slightly different every time it is loaded. In a game, a few pixels make no difference; for a self-driving system, a few pixels affect the perceived distance of an object in the simulated world. As a result, despite loading the same scenario, the test will be ever so slightly different each time.
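To illustrate why a few pixels matter, here is a minimal sketch of the pinhole-camera relationship between an object's apparent size in the image and its estimated distance. The focal length and pedestrian height below are made-up illustrative values, not aiSim parameters:

```python
def estimate_distance(focal_px: float, object_height_m: float, bbox_height_px: float) -> float:
    """Pinhole-camera model: estimated distance grows as the object's image shrinks."""
    return focal_px * object_height_m / bbox_height_px

# The same 1.5 m pedestrian, rendered just two pixels shorter on a second load:
d_a = estimate_distance(focal_px=1000.0, object_height_m=1.5, bbox_height_px=50.0)
d_b = estimate_distance(focal_px=1000.0, object_height_m=1.5, bbox_height_px=48.0)
print(d_a, d_b)  # 30.0 vs 31.25 metres
```

A two-pixel rendering difference shifts the perceived distance by more than a metre, which is why pixel-level variation between loads is unacceptable for perception testing.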
However, to achieve peak efficiency, accelerate development even further, and reach the fastest possible time to market, these limitations have to be overcome. During development, engineers run simulated tests on a wide range of heterogeneous hardware setups, from personal laptops to the cloud. To ensure efficient, regression-free development, the way a scenario is rendered must be independent of the hardware used.
This is where the limitations above become a problem to be solved. If a simulator is not deterministic, and thus renders scenes with small differences on each load, a developer can never be certain whether a failure was caused by variation in the scenario load or by a shortcoming of the self-driving software being evaluated. It would then take further time and resources to evaluate each run to ensure that everything happened as intended, and require further tests to prove this.
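One way to make such failures attributable is a byte-exact reproducibility check: fingerprint every rendered frame and compare the digests across two runs of the same scenario. This is a hypothetical sketch using synthetic frame data, not aiSim's actual mechanism; a real pipeline would feed it the simulator's frame buffers:

```python
import hashlib

def frame_digest(frame_bytes: bytes) -> str:
    """Fingerprint a rendered frame; identical pixels give identical digests."""
    return hashlib.sha256(frame_bytes).hexdigest()

def runs_match(run_a: list, run_b: list) -> bool:
    """True only if both runs produced byte-identical frame sequences."""
    if len(run_a) != len(run_b):
        return False
    return all(frame_digest(a) == frame_digest(b) for a, b in zip(run_a, run_b))

# Synthetic stand-ins for frame buffers from two loads of the same scenario:
deterministic = [b"\x10" * 64, b"\x20" * 64]
jittered      = [b"\x10" * 64, b"\x21" * 64]  # a single byte differs

print(runs_match(deterministic, deterministic))  # True
print(runs_match(deterministic, jittered))       # False
```

With a deterministic renderer, any digest mismatch immediately points at the software under test rather than at the simulator.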
The tests also have to contain complex scenarios running in real time, to exercise not only the self-driving software but also the hardware platform it runs on through hardware-in-the-loop testing. As a result, the simulator must extract the best possible performance from any of these hardware setups. This makes determinism vital: the system the simulator runs on, be it a single-GPU workstation or a cloud-based solution, must not affect the rendering of the scenarios being tested.
Could the simulator described above be built on a game engine? It would be almost impossible. The solution has to be purpose-built and customized specifically for the testing and verification of autonomous technology. By recognizing these limitations, AImotive is already taking steps to overcome them.