Distributed vs. central processing: the eternal debate

Written by Tony King-Smith / Posted at 10/16/18

What debate? – Since the first CPU was invented, people have held many opinions about the best way to make processors faster, better, and cheaper. As systems grew up around a processor at their center, the debate has become ever more complex: should I centralize all processing, or should I distribute it across different parts of the system?

It’s a lot like governing a country or running a multinational corporation. The issues are the same: balancing the speed of decision making gained by centralizing everything against the effectiveness of local decision making co-ordinated centrally at a high level. Large vs. small government; large business HQ vs. stronger regional operations; large central processing vs. distributed processing: the debate is based on surprisingly similar arguments.

The case for centralized processing – The case is pretty compelling. Build the fastest, most capable processor possible. Place it at the center of your system, and make everything else send its data to it, receive data from it, and be controlled by it. When you upgrade the central processor, everything benefits. Moreover, everything outside it can be kept as simple as possible, making those components cheaper. It’s also more flexible – just update the software, no matter what the application is. And, by centralizing data resources, every application has easy access to everything it needs in one place.

Not a bad story? Good enough for it to be the basis of mainframe computers, and later the essence of modern cloud computing. However, nothing in this world is quite as easy as it sounds. When centralized processing gets as sophisticated as today’s data center servers or the latest mobile phone SoC (system on chip), some major obstacles arise, especially for something as specialized as an autonomous car. Managing the sheer complexity of these systems, the power consumed by a general-purpose device rather than a specialized one, and the challenge of ensuring everything keeps working when one part fails are but a few of the issues to be faced.
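To make the centralized picture concrete, here is a minimal Python sketch of the idea: simple sensor nodes ship raw data to one powerful hub, which does all of the perception, fusion, and planning. Every class and function name here is hypothetical and purely illustrative; it is not AImotive’s or any vendor’s API.

```python
# Hypothetical, illustrative sketch of a centralized layout: simple sensor
# nodes ship raw data to one powerful hub that does all of the processing.
# None of these names correspond to a real product API.
from dataclasses import dataclass
from typing import List


@dataclass
class RawFrame:
    sensor_id: str
    payload: bytes  # raw, unprocessed sensor data crossing the interconnect


class SimpleSensor:
    """A 'dumb' sensor: it only captures data and forwards it to the hub."""
    def __init__(self, sensor_id: str):
        self.sensor_id = sensor_id

    def capture(self) -> RawFrame:
        return RawFrame(self.sensor_id, payload=b"\x00" * 1024)


class CentralHub:
    """One large processor: perception, fusion, and planning all live here."""
    def process(self, frames: List[RawFrame]) -> str:
        detections = [f"objects_from_{f.sensor_id}" for f in frames]  # heavy compute
        world_model = "+".join(detections)                            # heavy compute
        return f"trajectory_based_on({world_model})"                  # heavy compute


sensors = [SimpleSensor("camera_front"), SimpleSensor("radar_front")]
hub = CentralHub()
print(hub.process([s.capture() for s in sensors]))
```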

Table detailing the pros of both centralized and distributed processing.

The case for distributed processing – The case for distributed processing is perhaps more idealistic in its extreme form. Each component in the system has sufficient processing power to do as much as possible locally. Only when it needs to share its data with others does it communicate with other processors in the system. The central processor is relatively small, providing overall supervision, control, and monitoring. Since all the processors are modest and highly specialized, they are far simpler, consume far less power, and are much faster than more general-purpose alternatives. It’s much easier to upgrade any processor in the system, provided it keeps communicating with the others the same way as before. Verification is easier because you can test each module separately. It’s cheaper too, as you don’t need such advanced technology to make each processor.
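Again purely as an illustration, here is the same toy pipeline rearranged for the distributed case: each smart sensor runs its own detection locally, and only compact results travel to a modest supervising node. All names remain hypothetical.

```python
# Hypothetical, illustrative sketch of a distributed layout: each smart
# sensor processes its own data locally, and only compact results travel
# to a modest supervising node. None of these names are a real API.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    sensor_id: str
    objects: List[str]  # compact result, not raw data, crosses the interconnect


class SmartSensor:
    """A sensor with its own specialized processor: detection happens here."""
    def __init__(self, sensor_id: str):
        self.sensor_id = sensor_id

    def capture_and_detect(self) -> Detection:
        # Local, specialized processing; only the small result leaves the node.
        return Detection(self.sensor_id, objects=[f"object_from_{self.sensor_id}"])


class Supervisor:
    """A modest central node: it only fuses compact results and coordinates."""
    def fuse_and_plan(self, detections: List[Detection]) -> str:
        world_model = "+".join(o for d in detections for o in d.objects)
        return f"trajectory_based_on({world_model})"


sensors = [SmartSensor("camera_front"), SmartSensor("radar_front")]
supervisor = Supervisor()
print(supervisor.fuse_and_plan([s.capture_and_detect() for s in sensors]))
```

The contrast between the two sketches is in what crosses the interconnect: raw frames in the centralized case, compact detections here.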

There is no best solution – Both centralized and distributed approaches have their strengths and weaknesses. In especially complex use cases, such as self-driving, different applications will result in different architectures. AImotive’s extensive experience in developing its own aiDrive software stack has shown that a significant part of modern AV algorithms requires complex heuristics, not just NNs (neural networks). Dedicated solutions may prove the best option for NN acceleration, but powerful central processors are undoubtedly the best solution for the heuristics.
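As a rough, hypothetical sketch of what that split can look like, the snippet below routes NN inference to a stand-in for a dedicated accelerator while heuristic post-processing stays on a general-purpose CPU. Neither function represents a real aiWare or aiDrive interface; both are placeholders for the real workloads.

```python
# Hypothetical sketch of splitting one pipeline between a dedicated NN
# accelerator and a general-purpose CPU. Both functions are placeholders,
# not a real aiWare or aiDrive interface.
from typing import List


def run_nn_on_accelerator(frame: List[float]) -> List[str]:
    """Stands in for NN inference offloaded to dedicated hardware."""
    return ["car@12m", "pedestrian@35m"]  # placeholder detections


def heuristic_postprocess_on_cpu(detections: List[str]) -> List[str]:
    """Stands in for rule-based logic that is awkward to accelerate:
    branching, sorting, and special cases suit a capable general CPU."""
    return [d for d in detections if not d.endswith("@35m")]  # e.g. drop distant objects


frame = [0.0] * 16  # dummy input frame
print(heuristic_postprocess_on_cpu(run_nn_on_accelerator(frame)))
```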

Industry stakeholders will surely pour significant resources into the development of both centralized and distributed solutions, or create a healthy mix of the two. AImotive’s goal is to provide our partners with the best possible hardware architecture regardless of their development plans. This is why we place such emphasis on the scalability of aiWare: to ensure that we are not deciding the debate for our partners, but offering a viable solution for the choices they make. The aiWare architecture is designed to be implemented either in distributed environments, such as smart sensors, or as part of a central processing hub.

As a result, we believe that through its flexibility and scalability aiWare helps drive autonomous vehicles towards production.