Simulation lies at the core of our unique development pipeline. I’ve detailed in earlier posts how we use simulation to make development safer and testing more economical. However, incorporating simulation technology into the development of self-driving cars is itself no easy feat.
Keeping in time – aiSim, our highly realistic simulator, supports two modes: fixed time step and real-time simulation. Both have unique properties and serve different purposes. Fixed time step runs on heterogeneous hardware setups and provides deterministic results when evaluating the logic of algorithms. Real-time simulation requires high-end hardware and is extremely power-hungry. However, it allows us to verify the runtime of our algorithms and provides visual feedback on how the car behaves in scenarios.
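To make the distinction concrete, here is a minimal sketch of the two loop styles in Python. The step size, toy vehicle model, and function names are my own assumptions for illustration, not aiSim's actual API:

```python
import time

DT = 0.02  # hypothetical fixed step: 20 ms of simulated time per tick

def step_vehicle(state, dt):
    # Toy vehicle model: constant-velocity position update.
    x, v = state
    return (x + v * dt, v)

def run_fixed_step(state, ticks):
    # Fixed time step: every run advances exactly DT per tick, so the
    # result is identical regardless of how fast the hardware is.
    for _ in range(ticks):
        state = step_vehicle(state, DT)
    return state

def run_real_time(state, ticks):
    # Real time: pace the loop so simulated time tracks wall-clock time,
    # which allows live visual feedback but sacrifices determinism.
    deadline = time.monotonic()
    for _ in range(ticks):
        state = step_vehicle(state, DT)
        deadline += DT
        time.sleep(max(0.0, deadline - time.monotonic()))
    return state
```

The fixed-step loop is what makes run-to-run comparison of algorithm logic possible: the same inputs always produce the same trajectory, whether the simulation runs faster or slower than real time.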
Integrating the whole system into development is also a multistep process. The first step is a set of automatically triggered pre-commit tests that run whenever even the slightest change is made to the code. This set contains scenarios that aiDrive has always passed in previous versions. Flagging a definite error before the code even reaches code review supports regression-free development and accelerates iteration times.
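A pre-commit gate of this kind can be sketched as follows. The data shapes and function names here are my own illustration, not aiDrive's actual test infrastructure:

```python
# Hypothetical regression gate: replay scenarios the stack has always
# passed and reject the commit on any mismatch with the recorded result.

def run_scenario(drive_fn, scenario):
    # A scenario is a list of (input, expected_decision) pairs recorded
    # from a version that handled it correctly.
    return all(drive_fn(inp) == expected for inp, expected in scenario)

def pre_commit_gate(drive_fn, regression_suite):
    # Return the names of scenarios that regressed; an empty list means
    # the change may proceed to code review.
    return [name for name, scenario in regression_suite.items()
            if not run_scenario(drive_fn, scenario)]
```

Because the suite only contains scenarios with known-good outcomes, any non-empty result is a definite regression rather than a judgment call.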
Daily tests are configured individually by developers to test certain functions of the build they are working on: the aiDrive and simulator versions selected by the developer are run on scenarios hand-picked to best test new features. Nightly tests, on the other hand, are bulk tests run automatically, going through 1,600 scenarios multiple times each night. While daily tests focus on single features, nightly tests cover the whole of the self-driving software. The greater variety of scenarios in nightly tests helps us identify problems that may not have surfaced in daily tests.
However, the greatest advantage of simulation is modular testing. Here, development teams responsible for certain functions can evaluate their system by substituting the inputs from other modules with ground truth data provided by the simulator. This means the module receives the best possible information for every aspect but the one under test. If the module works, the test is a success. If the team encounters a problem, it is much easier to localize without having to examine the whole stack.
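The pattern looks something like this sketch. The planner logic and field names are invented for illustration; only the idea of bypassing upstream modules with simulator ground truth comes from the text:

```python
def planner(obstacle_distances, ego_speed):
    # Module under test: brake if any obstacle is within a simple
    # 2-second safety distance at the current speed.
    safety_distance = 2.0 * ego_speed
    return ("brake" if any(d < safety_distance for d in obstacle_distances)
            else "cruise")

def evaluate_planner(sim_frame):
    # Instead of real perception output, feed the planner the exact
    # distances and speed the simulator knows. If this test fails, the
    # fault must lie in the planner, not in perception.
    return planner(sim_frame["true_obstacle_distances"],
                   sim_frame["true_ego_speed"])
```

Because every input except the planner's own logic is perfect, a failed test points directly at the module under test instead of somewhere in the full stack.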
Facing reality – There is a slight problem with simulated testing: it’s not the real world. One of the greatest challenges of the technology is correlating vehicle physics in the simulator with the real world. For example, there is a real-world limit to how quickly a car can accelerate without wheel spin. That limit has to be the same in the simulator to ensure that we can test vehicle dynamics properly. However, these characteristics differ for each vehicle and road surface, and are influenced by weather conditions as well.
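A first-order version of that acceleration limit can be written down directly. This is a textbook friction-circle bound, not the tyre model any particular simulator uses, and the coefficients below are illustrative ranges rather than measured values:

```python
G = 9.81  # gravitational acceleration, m/s^2

def max_acceleration(mu):
    # Friction-limited bound on longitudinal acceleration before wheel
    # spin: a_max = mu * g. Real tyre models are far more complex; this
    # only shows why the limit shifts with surface and weather via mu.
    return mu * G

# Illustrative friction coefficients (typical textbook ranges,
# not measured values for any specific vehicle or surface):
SURFACES = {"dry asphalt": 0.9, "wet asphalt": 0.6, "snow": 0.2}
```

Even this toy model makes the calibration problem visible: the same throttle command must produce different achievable accelerations on dry asphalt, in rain, and on snow, and the simulator has to reproduce each case faithfully.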
One of the great advantages of simulation is that it provides a great deal of information. However, when thousands or even tens of thousands of tests are run over a single night, processing all that information is itself a challenge. Fixed time step simulation provides huge amounts of numerical data, which can be used to create metrics and graphs.
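Turning those raw per-tick numbers into metrics might look like this sketch; the "lateral_error" field is a made-up example of a logged quantity:

```python
from statistics import mean

def summarize_run(samples):
    # Reduce a run's per-tick log to a few dashboard-ready metrics so
    # nobody has to read the raw numbers tick by tick.
    errors = [abs(s["lateral_error"]) for s in samples]
    return {
        "ticks": len(samples),
        "mean_abs_lateral_error": mean(errors),
        "max_abs_lateral_error": max(errors),
    }
```

Aggregates like these are what make tens of thousands of nightly runs comparable at a glance, instead of requiring someone to open every log.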
However, the majority of data from real-time tests, especially the visual representation of car behavior, still requires human review. There are several possible solutions, such as concentrating testing and development on one function at a time, or tagging known bugs and errors to avoid reprocessing the same issue. Another option would be to set up an automated preliminary review system. However, such a solution raises the question of whether an automated test system overseen by an automated preliminary review system is actually desirable.
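The tagging idea can be sketched as a simple known-issue filter; the record fields and signature scheme are my own illustration:

```python
def fingerprint(failure):
    # Collapse a failure record to a coarse signature; failures sharing
    # a scenario and error type count as the same known issue.
    return (failure["scenario"], failure["error_type"])

def needs_human_review(failures, known_issues):
    # Keep only failures whose signature has not already been tagged,
    # shrinking the nightly pile a human has to look at.
    return [f for f in failures if fingerprint(f) not in known_issues]
```

Even a crude filter like this keeps reviewers from watching the same known failure replayed night after night, without deciding anything on its own.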
Simulation is already an immensely useful tool, and as other challenges not covered here, such as simulating sensor latency and overcoming the limitations of software engines, are solved, its usefulness will only increase. Serving as the only viable solution to the large-scale testing of autonomous vehicle technology, self-driving car simulators are a challenge in themselves. One we are well on the way to cracking.
This post is a part of a more detailed presentation on simulation I gave at the Embedded Vision Summit in Santa Clara this week.