Do AVs need distributed computing?

Written by Márton Fehér / Posted at 5/17/19

Currently, many AVs use a powerful central compute platform, but as the technology, safety standards, and regulations mature, even these platforms will struggle to deliver the performance needed. Distributed processing offers some answers, especially for AVs.

AI isn’t just about Neural Networks (NNs)

Many people believe that AI is only about NNs and how to execute them. However, a broad set of algorithms is used to perform AI tasks – and many of them don’t use NNs at all. For example, a number of well-known image processing algorithms rely on sophisticated software written the way much of today’s code is written: in a sequential language such as C++, compiled and executed just like any other program. As a result, any AI hardware platform for AVs needs to support substantial general-purpose computing as well as NN computation.

Why not just have one big, fast computer then?

Many people also associate AI with big datacenter computing (read Tony King-Smith’s blog on the subject). Datacenters are all about performance, so it’s not surprising that when AI appears in autonomous vehicles (AVs), people start talking about putting “a datacenter under the front seat” – in other words, putting as much general-purpose processing power into a car as possible. The problem is that such a platform is far too power-hungry and expensive, and it concentrates risk: if something goes wrong, a lot goes wrong. Furthermore, packing so much compute into a relatively small space creates significant power, thermal and reliability issues.

Computation for AI involves a number of different stages. The early stages, sometimes referred to as the “front end”, take raw data from sensors and try to make sense of it. The most common tasks are segmentation (identifying the shapes to be recognized), perception (deciding what those shapes are) and free space detection (deciding which areas are free of objects, so are safe candidates for the vehicle to travel in).
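These front-end stages form a simple pipeline: raw pixels in, labelled regions and a drivable-space map out. The sketch below is purely illustrative – the function names are ours, and real systems would use NNs rather than the placeholder threshold logic shown here:

```python
import numpy as np

def segment(frame: np.ndarray) -> np.ndarray:
    """Assign a label to each pixel (0 = background).
    Placeholder: thresholds brightness instead of running a real NN."""
    return (frame > 128).astype(np.uint8)

def perceive(mask: np.ndarray) -> list:
    """Turn segmented regions into object hypotheses.
    Placeholder: reports one 'obstacle' covering all segmented pixels."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return []
    return [{"label": "obstacle",
             "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))}]

def free_space(mask: np.ndarray) -> np.ndarray:
    """Mark cells containing no segmented pixels as drivable."""
    return mask == 0

# One synthetic 8x8 'camera frame' with a bright blob in one corner.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[5:8, 5:8] = 200

mask = segment(frame)
objects = perceive(mask)
drivable = free_space(mask)
print(objects)
print(int(drivable.sum()), "of", drivable.size, "cells are free")
```

The point of the sketch is the shape of the data flow, not the algorithms: each stage consumes the previous stage’s output and produces something smaller and more meaningful.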

The later stages of computation (the “back end”) need to bring all of this data together in one place. All inputs are then evaluated for the vehicle to decide what they mean. Control is then applied based not only on what has been received, but anticipating what might happen next.
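A minimal sketch of that back-end loop – fused object estimates in, a control decision out – might look like the following. The constant-velocity motion model and the brake rule are deliberately simplistic assumptions, chosen only to show the "anticipate, then act" structure:

```python
def predict(obj: dict, dt: float) -> float:
    """Constant-velocity prediction: estimated 1-D position after dt seconds.
    (A deliberately simple motion model, for illustration only.)"""
    return obj["pos"] + obj["vel"] * dt

def control(fused_objects: list, ego_pos: float,
            horizon: float = 1.0, safe_gap: float = 5.0) -> str:
    """Brake if any object is predicted to come within safe_gap metres
    of the ego vehicle inside the prediction horizon."""
    for obj in fused_objects:
        if abs(predict(obj, horizon) - ego_pos) < safe_gap:
            return "brake"
    return "cruise"

# Fused inputs from several front-end units (1-D positions, metres).
objects = [{"pos": 30.0, "vel": -12.0},   # closes to 18 m: still safe
           {"pos": 8.0,  "vel": -6.0}]    # closes to 2 m: too close
print(control(objects, ego_pos=0.0))      # -> brake
```

Note that the decision depends on the predicted positions, not the current ones – exactly the "anticipating what might happen next" described above.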

There is little doubt that back-end processing needs to be done centrally. However, the more front-end processing can be done away from the central processor, the less powerful the central processor needs to be.

For many AI applications, front-end tasks tend to be dominated by NN computation. Thus, it makes sense to put some of the NN compute engines near the sensors themselves. This is known as distributed edge processing. Putting computation in multiple places throughout the vehicle, rather than having everything executing on one central processing platform, makes the vehicle’s overall electronics hardware platform easier to implement, manage and scale.
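Conceptually, an edge unit sits next to one sensor, runs inference locally, and forwards only compact results to the central processor. The sketch below is an assumption-laden toy (the class name, the JSON message format, and the "count bright pixels" stand-in for NN inference are all invented for illustration):

```python
import json
import numpy as np

class EdgeNode:
    """A compute unit placed next to one sensor. It runs local
    (placeholder) inference and forwards only a compact result,
    not the raw frames, to the central processor."""

    def __init__(self, sensor_id: str):
        self.sensor_id = sensor_id

    def process(self, frame: np.ndarray) -> bytes:
        # Placeholder 'inference': count bright pixels as detections.
        n_bright = int((frame > 128).sum())
        msg = {"sensor": self.sensor_id, "detections": n_bright}
        return json.dumps(msg).encode()

# A raw 1080p greyscale frame is about 2 MB...
frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
node = EdgeNode("front_camera")
packet = node.process(frame)
# ...but the message sent onward is tens of bytes.
print(len(packet), "bytes sent instead of", frame.nbytes)
```

The design choice this illustrates: the interface between edge and centre is defined by small, high-level messages, so the central platform's workload and the in-vehicle network load both shrink.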

So we don’t need central processors?

The central processor will always be a crucial part of any AV, as everything has to be brought together at some point to control the vehicle. It also provides the most effective way to deliver high-performance algorithm execution for the many functions needed to control modern vehicles.

As the central processor needs to execute a wide range of different algorithms, it will require accelerators for many different types of computation. That’s why central processors will have clusters of CPUs with different capabilities, augmented by GPU clusters, floating-point vector processors, DSPs – and of course NN accelerators.
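One way to picture such a heterogeneous platform is as a dispatcher that routes each class of workload to its preferred engine. The mapping below is invented for illustration – real platforms rely on vendor-specific runtimes and schedulers, not a lookup table:

```python
# Hypothetical mapping from workload type to preferred compute engine.
ACCELERATOR_FOR = {
    "nn_inference":   "NN accelerator",
    "image_filter":   "GPU cluster",
    "signal_filter":  "DSP",
    "linear_algebra": "FP vector processor",
    "control_logic":  "CPU cluster",
}

def dispatch(task_type: str) -> str:
    """Fall back to the general-purpose CPUs for anything unrecognized."""
    return ACCELERATOR_FOR.get(task_type, "CPU cluster")

print(dispatch("nn_inference"))   # -> NN accelerator
print(dispatch("path_planning"))  # -> CPU cluster (fallback)
```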

Distributed processing helps central processors

There are many benefits to distributing processing platforms. By putting some of the compute power in different physical locations (e.g. near each sensor, group of sensors, or per corner), it is easier to provide power to each unit, and to manage the heat they generate. Also, if each unit performs some initial processing locally, it generates higher level output data that is often significantly smaller than the raw sensor input data. That means it needs to send less data to the central processor, reducing communication and memory demands.
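As a rough back-of-the-envelope illustration of that data reduction (the figures are assumptions, not measurements): a single uncompressed 1080p camera at 30 fps produces vastly more data than a per-frame object list would:

```python
# Raw stream: 1920x1080 pixels, 3 bytes/pixel, 30 frames/s.
raw_bps = 1920 * 1080 * 3 * 30        # bytes per second

# After local processing: say up to 100 objects per frame,
# 32 bytes each (class, bounding box, confidence), 30 frames/s.
processed_bps = 100 * 32 * 30         # bytes per second

print(f"raw: {raw_bps / 1e6:.1f} MB/s")          # ~186.6 MB/s
print(f"processed: {processed_bps / 1e3:.1f} KB/s")  # ~96 KB/s
print(f"reduction: ~{raw_bps // processed_bps}x")
```

Under these assumed numbers, local processing cuts the data sent to the central processor by roughly three orders of magnitude – which is exactly why it eases the communication and memory demands described above.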

The electricity grid is a good parallel. Houses aren’t connected directly to the central power station. Instead, power is distributed through many local substations, which makes each substation easier to build, and ensures that if one fails the rest of the network remains operational. It also makes it easier to add redundant systems where needed.

Distributed processors help by reducing the processing demands on central processors. This makes each part of the hardware compute platform easier to design and manage, and makes it more reliable and robust. Moreover, it’s not a new idea. We’ve been using some form of distributed processing – in everything from government to business, from electricity to mail distribution – for thousands of years!

Cars are built by ecosystems

Vehicles are never built from the ground up by one company. Building cars involves a complex network of highly skilled specialist partners. The automotive supply chain has traditionally relied on large subsystem companies (known as Tier1s) to provide each of the Electronics Control Units (ECUs) for a vehicle’s various functions.

By embedding their unique know-how into the software and hardware of the ECUs powering a particular sensor, actuator or other solution, these subsystem vendors create a more powerful product for a specific task.

Sensor manufacturers and their partners – chip suppliers, module suppliers, ISP suppliers – are realizing this is a great opportunity to add value to every sensor. For example, by adding NN processing alongside the ISP for cameras, the sensor can deliver much higher-quality information to the central processor. This opens up new opportunities for Tier1s and other component suppliers to add new features that make their products more competitive.


Distributed processing is one way to augment the central processor of an AV, increasing total processing power faster, more reliably, and at lower cost. NNs are increasingly used throughout the vehicle. Small, efficient NN hardware accelerators, such as AImotive’s aiWare, can be deployed in every subsystem of an AV, from each sensor through to the domain controllers and central processing clusters. Understanding the best ways to use such accelerators is becoming an important part of designing future hardware platforms for AVs.