Precision simulation demands flexibility and scalability

Written by Zoltán Hortsin / Posted at 12/19/18

Last week AImotive announced aiSim2, the next generation of automotive simulation for automated driving technologies. Ahead of the launch, our CEO, Laszlo Kishonti, stressed the three key features of a purpose-built simulator for self-driving development: determinism, physical realism and scalability. Physical realism is vital to properly approximate the physical attributes and details of reality in a virtual environment. This requires a wide range of knowledge, from physical laws to advanced 3D algorithms. And because of this complexity, a great deal of computing performance is needed. This is where scalability comes into the picture.

In this post, I will explore some of the technical points behind this solution.

The laws of physics in action – Since the imaging of the virtual environment must be as close as possible to its real counterpart, we need a deep understanding of how light is emitted, travels through our planet’s atmosphere, interacts with the objects of our environment, enters our eye or sensor and finally becomes a signal. Several fields of science are involved in this process, but we must keep in mind the need to balance accuracy and performance.

We must see how these laws create the whole picture. The sky is a good example: when you look up, you see blue rather than photons interacting with air molecules, which is Rayleigh scattering at work. Or when you look down at your shadow, it is not completely black, because the sky acts like a huge source of light whose rays fill the shadow in and tint it with the color of the sky. Thus, it is vital to capture the whole image at once to create a physically plausible result, because everything interacts with everything.
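
To give a feel for the physics involved, here is a minimal sketch (an illustration only, not aiSim2 code) of why the sky looks blue: the strength of Rayleigh scattering scales with the inverse fourth power of the wavelength, so short blue wavelengths are scattered far more strongly than long red ones.

```cpp
#include <cmath>
#include <cstdio>

// Rayleigh scattering strength scales with the inverse fourth power of the
// wavelength, which is why short (blue) wavelengths dominate the sky's color.
double rayleighStrength(double wavelengthNm) {
    return 1.0 / std::pow(wavelengthNm, 4.0);
}

int main() {
    const double blueNm = 450.0;  // approximate wavelength of blue light
    const double redNm  = 700.0;  // approximate wavelength of red light
    const double ratio  = rayleighStrength(blueNm) / rayleighStrength(redNm);
    // Prints roughly 5.9: blue light is scattered almost six times more strongly than red.
    std::printf("blue/red scattering ratio: %.2f\n", ratio);
    return 0;
}
```

This is exactly the kind of relationship a physically based sky model has to reproduce.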

Perceiving the world – Recreating the world requires this breadth of understanding, and often the ability to render many captures of the environment simultaneously. One of the major challenges of simulation for autonomous technologies is the need for highly realistic sensor simulation. This is required to ensure that the system being tested receives data as similar as possible to what it would be fed in the real world. It is the only way to achieve the highest possible correlation between real-world and simulated tests.

Simulating certain unique sensor types, for example fisheye lenses, requires the rendering of multiple images for realism, due to the inherent characteristics of GPU rasterization and the limited availability of ray tracing technology in real-time applications. As a result, a simulator will often be forced to render up to six images to create a single sensor frame. That’s a huge amount of rendering when a self-driving system can easily use 4–6 fisheye cameras alongside other sensors. It’s vital that the simulator does not lose performance to driver overheads. We found the Khronos Group’s Vulkan® API to be a very efficient solution to this problem.
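
To illustrate why a rasterizer needs several renders for one fisheye frame, the sketch below is a simplified, hypothetical example (the exact camera model aiSim2 uses is not described here): it maps a pixel of an equidistant fisheye image to a viewing ray and picks which of six pre-rendered cube-map faces that ray would have to be sampled from.

```cpp
#include <cmath>
#include <cstdio>

// Which of the six pre-rendered cube-map faces a ray direction falls on.
enum class CubeFace { PosX, NegX, PosY, NegY, PosZ, NegZ };

struct Vec3 { double x, y, z; };

const double kPi = 3.14159265358979323846;

// Map a pixel of an equidistant ("f-theta") fisheye image to a viewing ray.
// u, v are normalized coordinates in [-1, 1]; fovDeg is the full field of view.
// Returns false for pixels outside the circular fisheye footprint.
bool fisheyePixelToRay(double u, double v, double fovDeg, Vec3& ray) {
    double r = std::sqrt(u * u + v * v);
    if (r > 1.0) return false;                       // outside the image circle
    double theta = r * 0.5 * fovDeg * kPi / 180.0;   // angle from the optical axis
    double phi   = std::atan2(v, u);                 // angle around the optical axis
    ray = { std::sin(theta) * std::cos(phi),
            std::sin(theta) * std::sin(phi),
            std::cos(theta) };                       // +Z is the optical axis
    return true;
}

// Pick the cube-map face the ray hits: the face of the dominant axis.
CubeFace selectFace(const Vec3& d) {
    double ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
    if (ax >= ay && ax >= az) return d.x > 0 ? CubeFace::PosX : CubeFace::NegX;
    if (ay >= ax && ay >= az) return d.y > 0 ? CubeFace::PosY : CubeFace::NegY;
    return d.z > 0 ? CubeFace::PosZ : CubeFace::NegZ;
}

int main() {
    Vec3 ray;
    // A pixel near the edge of a 190-degree fisheye looks far to the side,
    // so it samples a side face rather than the forward (+Z) face.
    if (fisheyePixelToRay(0.95, 0.0, 190.0, ray)) {
        std::printf("face index: %d\n", static_cast<int>(selectFace(ray)));
    }
    return 0;
}
```

Pixels near the edge of a wide fisheye image land on the side faces, which is why a single forward render can never cover the full field of view.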

The Vulkan API is designed to support the efficient parallel execution of work on a GPU, for example through subgroup operations. Relying on this capability, the resources of every available GPU can be fully utilized during simulation. This is a vital aspect of drawing the maximum performance from any system, as aiSim2 is designed to run on any setup, from a developer’s laptop to cloud servers. This flexibility is underscored by the fact that aiSim runs the same codebase on Windows and Linux. It is also easy to run the simulator in a headless setup, a characteristic that is vital for deployment on server architectures.
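
As an illustration of the headless point, the sketch below (assuming only that the Vulkan loader and a driver are installed; this is not aiSim2’s initialization code) creates a Vulkan instance and enumerates GPUs without requesting any window-system surface extensions, which is all that off-screen, server-side rendering needs to get started.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

// Create a Vulkan instance without any window-system extensions or swapchain:
// everything can render to off-screen images, which is what makes headless,
// server-side operation straightforward.
int main() {
    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "headless-render-sketch";
    appInfo.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo instInfo{};
    instInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    instInfo.pApplicationInfo = &appInfo;
    // Note: no VK_KHR_surface or platform surface extensions are requested.

    VkInstance instance;
    if (vkCreateInstance(&instInfo, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "no Vulkan instance available\n");
        return 1;
    }

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    if (gpuCount > 0) {
        vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());
    }
    std::printf("found %u GPU(s) usable without any display attached\n", gpuCount);

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```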

Multiple chips – Once a single GPU is properly utilized, the focus must shift to connecting several CPUs and GPUs to achieve optimal performance. This requires optimal task distribution between all available cores, relying on CPUs to build the rendering tasks and on GPUs to handle the rendering itself. These characteristics of the API and aiSim result in the efficient utilization of complex server architectures, a requirement for running the tens of thousands of tests needed daily during autonomous system development.
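
The scheduling itself is not public, so the following is only a simplified sketch of the idea: CPU threads build rendering tasks into a shared queue, and one worker per GPU drains it. In a real engine, each worker would record and submit command buffers on its own device instead of printing.

```cpp
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical rendering task: one sensor capture for one simulation frame.
struct RenderTask { int frame; int sensor; };

// A trivial work queue shared between CPU producers (which build the tasks)
// and one consumer thread per GPU (which would submit them for rendering).
class TaskQueue {
public:
    void push(RenderTask t) { std::lock_guard<std::mutex> g(m_); q_.push(t); }
    bool pop(RenderTask& t) {
        std::lock_guard<std::mutex> g(m_);
        if (q_.empty()) return false;
        t = q_.front(); q_.pop();
        return true;
    }
private:
    std::mutex m_;
    std::queue<RenderTask> q_;
};

int main() {
    const int gpuCount = 2;       // assumption: two GPUs in the machine
    const int sensorCount = 6;    // e.g. six fisheye cameras
    const int frameCount = 4;

    TaskQueue queue;
    // CPU side: build one rendering task per sensor per frame.
    for (int f = 0; f < frameCount; ++f)
        for (int s = 0; s < sensorCount; ++s)
            queue.push({f, s});

    // GPU side: one worker thread per device drains the queue.
    std::vector<std::thread> workers;
    for (int gpu = 0; gpu < gpuCount; ++gpu) {
        workers.emplace_back([&queue, gpu] {
            RenderTask t;
            while (queue.pop(t))
                std::printf("GPU %d renders frame %d / sensor %d\n", gpu, t.frame, t.sensor);
        });
    }
    for (auto& w : workers) w.join();
    return 0;
}
```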

Finally, aiSim has been designed to facilitate collaboration. The characteristics listed above mean that our partners can easily integrate aiSim into their development pipelines with minimal overhead. As a result, aiSim partners are not locked into a single ecosystem or solution.

Conclusions – Creating a scalable and physically plausible simulation environment is entirely possible with current technologies; however, it requires careful planning. Alongside over 10 years of experience in rendering engineering brought over from AImotive’s predecessor, Kishonti Informatics, it was our hands-on knowledge of self-driving development that influenced the design of aiSim2. Standing on these two pillars, aiSim2 is a purpose-built and highly optimized simulator for the development of automated driving systems.