Modified Liability for AI: EU Review of AI Liability Rules


7 minute read | May 25, 2023

The EU AI Act, which is progressing towards adoption by the end of 2023, will be complemented by an update to the European civil liability rules applicable to AI-related claims. Towards the end of 2022, the EU Commission adopted a proposal for an AI Liability Directive (“AI Liability Directive”) and a proposal for a Directive on Liability for Defective Products that will update product liability rules dating from 1985 (“Revised PLD”). The objective is to prevent a fragmented approach across EU Member States and to help both victims and manufacturers/users of AI systems better assess the liability arising from the use of such systems.

There is no overlap between the two proposed directives, which are meant to complement one another: the Revised PLD sets out updated rules on a producer’s strict liability for defective products (liability without the need to prove fault), while the AI Liability Directive addresses fault-based liability claims to compensate victims for damage arising from the use of AI systems, with specific rules for High-Risk AI Systems.

Since the proposal for an AI Act has recently been amended to account for the surge in the use of generative AI systems, the proposed civil liability rules will likely also need to be reviewed and amended in parallel.

What You Need to Know

1. The AI Liability Directive

1.1. Purpose and scope

The proposal complements the draft AI Act and uses the same defined concepts as that regulation (e.g., “high-risk AI systems”). It provides rules for a non-contractual, fault-based liability regime for damage caused by AI Systems. It applies to all categories of AI systems available on, or operating within, the EU market, regardless of whether the relevant economic operator commercializing them is established in the EU.

1.2. Who is concerned

The rules will apply to providers and/or users of AI Systems as defined in the AI Act proposal (users are referred to as “deployers” in the most recent draft of the AI Act). The text provides a regime favourable to the claimant for all types of AI Systems, and even more so for High-Risk AI Systems.

1.3. Evidence rules relating to High-Risk AI Systems

In order to help claimants substantiate a non-contractual fault-based claim for damages, the proposal gives national courts the power to order providers or users to disclose evidence about a High-Risk AI System. The disclosure is limited to what is necessary and proportionate to support a potential claim for damages, taking into account the legitimate interests of all parties, and the claimant only needs to present facts and evidence sufficient to support the plausibility of the claim. Where the disclosure concerns a trade secret or alleged trade secret that the court has identified as confidential within the meaning of EU law, national courts are empowered to take the measures necessary to preserve its confidentiality. This is a significant change for civil law jurisdictions where, typically, the claimant must substantiate and prove all aspects necessary to establish a claim, without any right to request evidence from the defendant.

1.4. Rules regarding burden of proof

Subject to specific conditions, the text provides a rebuttable presumption of causality in favour of claimants. The presumption establishes a causal link between the defendant’s non-compliance with a duty of care under applicable law (i.e., fault) and the output produced by the AI system used by the defendant, or the failure of such AI system to produce an output, that gave rise to the relevant damage.

The rebuttable presumption regime varies depending on whether the defendant is a provider or a user of an AI System, and on whether the AI system concerned is a High-Risk AI System. In the case of a High-Risk AI System, the presumption of causality will not apply if the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link. For other AI systems, the presumption of causality only applies if the court determines that it would be excessively difficult for the claimant to prove the causal link.

2. The Revised PLD

2.1. Purpose and scope of the Revised PLD

The Revised PLD will allow individuals to claim compensation for harm caused by a defective product. Covered harm includes personal injury, property damage and data loss. The text updates the existing regime to cover all categories of products, including software, AI Systems and AI-enabled products, and applies to products placed on the EU market or put into service in the EU, regardless of whether the relevant economic operator commercializing them is established in the EU. Although the Revised PLD is not presented as a complement to the proposal for an AI Act, it uses the same defined concepts and refers to its provisions.

2.2. Who is concerned

Depending on the facts, the manufacturer, importer, fulfilment service provider or online platform can be held liable. Liability can also arise for modifications made to AI Systems already placed on the market or put into service. (This complements the approach adopted in the AI Act: users that modify an AI system can be subject to additional obligations under that regulation.)

2.3. Defectiveness assessment based on the AI Act proposal

The Revised PLD states that mandatory safety requirements must be taken into account when a court assesses whether a product is defective. If an AI system or AI-enabled product is defective and causes death, personal injury, property damage or data loss, the injured person can claim compensation under the Revised PLD. The safety requirements set out in the AI Act proposal are, in particular, to be taken into account when assessing defectiveness.

2.4. Burden of proof

Under the Revised PLD, the claimant is required to prove the defectiveness of the product, the damage suffered and the causal link between the defect and the damage. However, in specific circumstances, the defect and/or the causal link will be presumed.

The text also proposes to alleviate the burden of proof where the claimant faces excessive difficulty in proving the existence of a defect and/or the causal link between the defect and the damage due to the technical complexity of the product.

Although the text of the proposal does not specifically mention AI systems and AI-enabled goods in this respect, the EU Commission has expressly identified these products as being covered by this presumption.

What's Next?

The Revised PLD and AI Liability Directive proposals are recent and are still being debated by the European institutions, so the texts will evolve before adoption. However, their core principles and their interaction with the proposed AI Act should remain. Moreover, as these two proposals are directives, they will have to be transposed by EU Member States into their respective national legal systems, with more flexibility afforded to Member States in implementing the AI Liability Directive than the Revised PLD.

What You Need to Do Now

All actors in the AI supply chain are affected. Although the texts have not yet been adopted, providers, importers, distributors and users of AI Systems in the EU should start to consider how their terms of business may be impacted by the application of both the AI Act and the new liability regime. The new regime should also be kept in mind when negotiating long-term contracts for the sale or supply of AI Systems.