Simplifying Legal Liability for AI in the EU

Written by Gábor Farkas / Posted at 6/27/23

Artificial intelligence (AI) is transforming the automotive industry through automated and autonomous driving capabilities. As the European Union (EU) introduces new legislation in this field, it is crucial to understand its implications. In September 2022, the European Commission presented a proposal for a new directive, the AI Liability Directive (AILD), which aims to adapt non-contractual civil liability rules to AI. From an automotive perspective, it must be noted that the AILD proposal builds on the EU's proposed AI Act, released in 2021, which does not cover safety-related AI systems in vehicles; most vehicle-related components are instead governed by Regulation (EU) 2018/858, the Type-Approval Regulation.

The AILD is intended to ensure that individuals harmed by AI systems receive the same level of protection as those harmed by other technologies in the EU. It does not supersede existing rules, such as the Product Liability Directive (PLD) or national liability regimes, but introduces a new fault-based, non-contractual liability framework specifically for damage caused by AI systems.

Key Features of the AILD – definitions and rebuttable presumptions:

Under the AILD, a "claim for damages" refers to a non-contractual, fault-based civil law claim for compensation for damage caused by an output of an AI system, or by the failure of such a system to produce an output where one should have been produced. Unlike under the PLD, such a claim can be brought against any entity involved, not just the manufacturer. To support claimants in the complex AI environment, the AILD establishes two important presumptions: the presumption of causality and the presumption of non-compliance. Both are rebuttable, meaning the defendant can challenge them.

Presumption of causality: if a claimant can demonstrate that the defendant failed to comply with a duty of care relevant to the harm, and that a causal link between that failure and the AI system's output (or failure to produce an output) is reasonably likely, the court may presume that the non-compliance caused the damage.

Presumption of non-compliance (regarding evidence disclosure): if a defendant fails to comply with a court order to disclose or preserve evidence, the court may presume that the defendant breached the duty of care that the requested evidence was intended to prove.

Adapting to the New Liability Regime:

Companies responsible for high-risk AI systems must meet specific documentation, information, and logging requirements in order to rebut these presumptions. While the AI Act does not directly apply to safety-related AI in the automotive industry, legislators should consider its principles when amending the Type-Approval Regulation. AI-related parties, including manufacturers, developers, and service providers, must comply with principles such as risk management, data governance, transparency, human oversight, accuracy, and security to avoid liability for damage caused by AI.
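To illustrate what such logging obligations could look like in practice, below is a minimal, hypothetical sketch of how an AI system's outputs might be recorded so that they can later be disclosed or preserved as evidence. The field names, the file format, and the hash-based integrity check are assumptions made for illustration only; neither the AI Act nor the AILD prescribes a specific log schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(system_id, input_summary, output, model_version,
                    log_path="ai_decision_log.jsonl"):
    """Append one record of an AI system output to an append-only log.

    Illustrative only: the schema and hashing scheme are not mandated by
    the AI Act or the AILD; they merely show how outputs could be kept
    traceable for later evidence disclosure or preservation.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_summary": input_summary,  # reference to the input, not raw sensor data
        "output": output,                # the decision made, or a note that none was produced
    }
    # Hash of the serialized record supports later integrity checks of the log entry
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a lane-change decision made by a hypothetical driving function
log_ai_decision(
    system_id="highway-pilot-demo",
    input_summary="camera+radar frame ref 2023-06-27T10:15:04Z/1142",
    output={"decision": "lane_change_left", "confidence": 0.93},
    model_version="0.9.1",
)
```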

The AILD proposal, while wide-ranging, suggests that a harmonized and predictable liability regime could increase the value of the EU AI market by roughly EUR 500 million to EUR 1.1 billion in 2025. However, market players have expressed concerns that extensive liability might hinder innovation. Balancing legal liability and fostering innovation is therefore essential to providing a solid legal framework.

Conclusion:

As AI advances in the automotive industry and other sectors, harmonized legal liability measures are necessary to ensure responsible innovation. The AILD proposal aims to provide adequate protection for individuals affected by AI systems while also promoting the growth of the AI market. By navigating the complexities of liability and maintaining a supportive legal environment, the EU can foster innovation and safeguard the interests of all stakeholders involved. At aiMotive, we closely follow both the AI liability legislation and the evolution of the Type-Approval Regulation, and will implement the new regimes as soon as practicable.