Is liability bugging you?

Written by Zsolt Ujfalussy / Posted on 8/1/19

The question of personal liability in automated driving development is surprisingly common, as everyone is aware of the risks involved. At the first AImotive Meetup in Budapest, I presented the specifics of the Hungarian legal system to more than one hundred attendees. The key message was: standards are safe.

Employees who take reasonable care when developing automated driving technologies will rarely be held personally liable for an accident, at least in Hungary. As a result, developers and test drivers can carry on their work in relative safety while preserving the high value it adds.

The primary source of this safety lies in the existence of strict development guidelines and processes (for example, an internal Quality Assurance Policy or ISO 9001 compliance) and adherence to them. It is the responsibility of the chief executive officer to implement and enforce these. Of course, this is not a one-person task. The executive is supported by the Safety and Quality department. The safety team works with engineering teams to create internal policies and select relevant industry standards to follow at the system and coding level (such as ISO 26262, A-SPICE, MISRA, and AUTOSAR C++14). An engineer also takes reasonable care when adhering to industry best practices and other standards in their work.
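
To make the coding-level point concrete, here is a minimal sketch, not taken from our code base, of the defensive style such standards demand: named constants instead of magic numbers, explicit range checks, and failures that cannot be silently ignored. All names are illustrative.

    #include <cstdint>

    namespace example {

    // Policy limit expressed as a named constant rather than a magic number.
    constexpr std::int32_t kMaxRoadTestSpeedKph = 130;

    // A speed value that cannot be constructed out of range.
    class SpeedKph {
    public:
        // Returns false instead of silently clamping an invalid value,
        // forcing the caller to handle the failure explicitly.
        static bool Create(std::int32_t value, SpeedKph& out) {
            if ((value < 0) || (value > kMaxRoadTestSpeedKph)) {
                return false;
            }
            out.value_ = value;
            return true;
        }

        std::int32_t Value() const { return value_; }

    private:
        std::int32_t value_ = 0;
    };

    }  // namespace example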

Risk assessment is a vital element of public road tests, and risk management strategies must be laid down before automated systems hit the road. To ensure safety, it is best to test all functionalities first in simulation and then on closed courses. Furthermore, safety drivers should undergo rigorous training, while the company must have safety, risk management, and process management strategies in place.
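
As a hedged illustration of such staged gating, consider the sketch below: a functionality may only be exercised one step beyond the stage it has already passed, so nothing reaches a public road without simulation and closed-course validation first. The types and names are hypothetical, not a description of any real system.

    // Stages in the order a functionality must pass through them.
    enum class TestStage { None, Simulation, ClosedCourse, PublicRoad };

    struct Functionality {
        const char* name;
        TestStage passed;  // highest stage this functionality has passed
    };

    // A functionality may be tested one stage beyond the last one it passed.
    bool MayTest(const Functionality& f, TestStage environment) {
        return static_cast<int>(environment) <= static_cast<int>(f.passed) + 1;
    }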

If all internal and external processes, standards, and policies are in place and adhered to, developers have minimal liability. It is their employing company, as a legal entity, that will be liable for any accident or injury.

Naturally, the possibility of criminal liability also arises. In the vast majority of cases, however, the criminal liability of a single employee is out of the question. This is because, first, all development is carried out by teams that continuously review, and build on, each other's work. Second, the existence of, and adherence to, the aforementioned policies, standards, and best practices lays out a methodology that supports and ensures reasonable care in the development process.

Nevertheless, a few cases may form the basis of criminal proceedings against a developer or engineer. There should, however, always be processes in place to safeguard against these instances as well:

  1. Negligence or carelessness, in cases where the employee has no foresight of the consequences of their behavior because they fail to adhere to the guidelines of reasonable care.
  2. Recklessness or willful ignorance, when an employee has limited or complete foresight regarding the risks of their actions but continues the course of action regardless, hoping that the possible dangerous outcomes will not transpire.
  3. Deliberate actions or deliberate failure to act.

To put these slightly abstract cases into perspective, imagine the following. An engineer adds an unauthorized section of code to the system that induces sudden braking at 140 km/h. The developer does this as a non-malicious fault injection to stress-test the system in simulation, knowing that road tests are conducted at a maximum of 130 km/h, and plans to remove the code before road tests begin. However, the build containing the code is used in a high-speed closed-course test involving speeds over 140 km/h, and the developer is unable to remove the code beforehand. An accident occurs. The developer did not cause the crash intentionally but was negligent in failing to adhere to the proper development process. At AImotive, automated code review and peer code review are set up to identify such unauthorized changes before release. Furthermore, extensive simulation testing is conducted before any code is implemented in a real vehicle.
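
One technical safeguard against exactly this scenario is to make fault-injection code impossible to ship. The sketch below, with entirely hypothetical names and build flags, refuses to compile outside a simulation build and refuses to run outside a simulation environment.

    // fault_injection.cpp (hypothetical): only ever added to simulation
    // build targets, and refuses to compile anywhere else.
    #ifndef SIMULATION_BUILD
    #error "Fault injection must not be compiled into vehicle builds."
    #endif

    #include <stdexcept>

    enum class Environment { Simulation, ClosedCourse, PublicRoad };

    // Stub for the sketch; a real system would query the deployment target.
    Environment CurrentEnvironment() { return Environment::Simulation; }

    void InjectSuddenBrakeFault() {
        // Belt and braces: refuse to run even if the build flag is misused.
        if (CurrentEnvironment() != Environment::Simulation) {
            throw std::logic_error("fault injection invoked outside simulation");
        }
        // ... perturb the simulated vehicle model with a sudden brake command ...
    }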

Second, imagine a highway driving test is underway. The safety driver registers that the automated system is above the target speed and approaching the vehicle ahead. However, instead of taking control, the safety driver hopes the automated system will avoid the accident. By the time he or she does intervene, it is too late, and an accident occurs. The safety driver was reckless to assume the system would avoid the collision despite indications to the contrary. To increase the safety of our road tests, we always have both a safety driver and a test operator in our vehicles, while strict policies govern where, when, and how much they can test certain functionalities.
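
The takeover decision itself can also be supported by software rather than left to hope. The following sketch, with illustrative thresholds and field names, warns the driver as soon as the vehicle exceeds its target speed while closing on a lead vehicle.

    struct EgoState {
        double speed_mps;         // current ego speed
        double target_speed_mps;  // speed the test plan allows
        double gap_m;             // distance to the vehicle ahead
        double closing_mps;       // positive when approaching the lead vehicle
    };

    bool TakeoverWarningNeeded(const EgoState& s) {
        const double kMinTimeToCollisionS = 4.0;  // illustrative threshold
        if (s.speed_mps <= s.target_speed_mps) {
            return false;  // within the test plan; no warning
        }
        if (s.closing_mps <= 0.0) {
            return false;  // not closing on the lead vehicle
        }
        return (s.gap_m / s.closing_mps) < kMinTimeToCollisionS;
    }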

Finally, consider a developer who knows of a life-threatening error in the code base but chooses not to inform others of its existence. As a result, they allow the code to be implemented on test vehicles, knowing that the error will cause an accident during testing. Extensive simulation testing enables us to identify the vast majority of errors in our code before hitting the open road. There are also independent safety limits in our drive-by-wire systems that block erratic driving commands from the automated driving solution.
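
Such an independent limit might look like the sketch below: a low-level clamp, enforced outside the automated driving stack, so that no upstream software error can command an erratic maneuver. The envelope values and names are hypothetical.

    #include <algorithm>

    struct DriveCommand {
        double steering_rad;  // requested steering angle
        double accel_mps2;    // requested acceleration (negative = braking)
    };

    // Hard physical envelope enforced independently of the AD stack.
    DriveCommand LimitCommand(DriveCommand cmd) {
        const double kMaxSteeringRad = 0.5;
        const double kMaxAccelMps2 = 3.0;
        const double kMaxDecelMps2 = -6.0;
        cmd.steering_rad = std::clamp(cmd.steering_rad, -kMaxSteeringRad, kMaxSteeringRad);
        cmd.accel_mps2 = std::clamp(cmd.accel_mps2, kMaxDecelMps2, kMaxAccelMps2);
        return cmd;
    }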

As you can see, the liability of a developer, engineer, or safety driver is always the result of personal decisions and a failure to adhere to policies, standards, and best practices. Naturally, it is the company's responsibility to ensure that employees are appropriately trained in these and to enforce them. Properly implemented, industry and internal standards, policies, and processes mitigate the risk of an accident. The whole automated driving industry is aware of the risks of our development and is working to ensure the highest degree of safety. After all, above all else, our goal is to save lives.

This blog is the first in a series based on presentations given at the first AImotive Meetup in Budapest. To be notified of future events, subscribe to our newsletter or follow us on LinkedIn.

Zsolt Ujfalussy is legal counsel and a technology expert at AImotive. Zsolt joined our team after gaining experience at various law firms. As an expert in the regulation of automated vehicles, he is also a member of the Legislation and Standardization Work Team of the Mobility Platform, which works with Hungarian legislators to regulate electrified, connected, and automated vehicles. He holds law degrees from Eötvös Loránd University, Pázmány Péter Catholic University, and the University of Cambridge.