Key takeaways
The AI Liability Directive is the EU’s proposed new law to make it easier to prove liability in cases where AI systems cause harm.
The proposal was published by the European Commission on 28 September 2022.
This legislation, which reinforces the EU AI Act, updates national civil liability rules across Europe, making it easier for victims of AI-induced harm to prove who is liable and to receive compensation for damages.
The new law has wide-ranging implications for enterprises which develop and/or deploy AI systems in the EU.
Enterprises should be aware that they may be held liable for harm caused by the outputs of their AI systems. In addition to paying damages, enterprises may be required by courts to disclose sensitive information about their AI systems.
It is therefore vital for enterprises to establish robust AI risk management processes and to prepare for compliance with the EU AI Act, to minimise legal, reputational and commercial risks.
The AI Liability Directive is designed to ensure that victims who suffer harm or damage caused by AI systems enjoy equivalent levels of protection, under European civil liability rules, to victims who suffer harm caused by traditional technologies or products.
The European Commission’s position is that existing product liability rules are inadequate for addressing AI-related harm, given the difficulties in proving a causal link between the harmful output of an AI system and the fault or negligence of an individual or organisation.
By updating civil liability rules to reflect the unique properties of AI systems (i.e., opacity, ‘black box’ decision-making and autonomy), the Directive aims to boost trust in the use of AI and promote the safe adoption of AI technologies.
The AI Act and the AI Liability Directive are two sides of the same coin.
The AI Act is designed to prevent harm caused by AI, whereas the AI Liability Directive is designed to ensure that victims are fairly compensated if harm occurs.
The AI Act proposes a series of mandatory requirements for the ‘providers’ and ‘users’ of high-risk AI systems, such as those used in HR or banking contexts.
The mandatory requirements include establishing risk management frameworks, conducting quality assurance testing, and maintaining technical documentation and record logs about the system’s functioning.
By exposing enterprises to the possibility of being held liable for harm caused by their AI systems, and by directly linking non-compliance with the AI Act to liability for AI-induced harm, the Directive incentivises compliance with the AI Act.
The European Commission is proposing two core measures, both of which ease the burden of proof for victims attempting to prove who is responsible for the harm an AI system has caused them: a right to the disclosure of evidence about high-risk AI systems, and a rebuttable ‘presumption of causality’. Both measures are explained below.
Five years after the Directive has been implemented by EU member states, the European Commission will conduct a review to establish whether these reforms are sufficient to protect victims of AI-induced harm. The review will consider whether more robust liability measures should be introduced, such as mandatory insurance for certain high-risk AI systems and strict liability provisions (i.e., where an individual or entity is liable without the claimant having to prove fault or negligence).
Existing liability rules require victims to prove a wrongful act or negligence by the person or organisation that caused the damage, placing the burden of proof firmly on the claimant. Given the opacity and autonomy of AI systems, meeting this burden is often difficult, and the Directive is designed to address this problem in the AI context.
The Directive empowers courts across Europe to order enterprises to disclose relevant information about their AI systems in legal proceedings.
This information will assist claimants in proving that defendants are liable for the harm.
Disclosure of evidence will be required where the high-risk AI system in question is suspected of causing damage and the claimant has made “all proportionate attempts” to gather the relevant evidence from the defendant.
Courts may only order the disclosure and preservation of evidence that is “necessary and proportionate” to support the claim for damages. Courts must also consider whether trade secrets would be revealed, and take steps to maintain the confidentiality of that information.
Under these proposals, enterprises may be obliged to disclose information relating to their high-risk AI systems, such as the technical documentation and record logs described above.
To lessen the burden of proof on victims of AI-induced harm, the Directive introduces a ‘presumption of a causal link’ between non-compliance with relevant laws and the damage caused by the AI system.
This means that where the AI ‘provider’ or ‘user’ did not comply with a law intended to prevent the harm or damage that occurred, such as certain provisions of the AI Act, courts will presume that the non-compliance caused the harm, unless the defendant can prove otherwise.
For the ‘presumption of causality’ to apply, the following conditions must all be met: the claimant has demonstrated the defendant’s non-compliance with a duty of care intended to prevent the damage, such as a requirement of the AI Act; it is reasonably likely that this non-compliance influenced the output produced by the AI system (or its failure to produce one); and the claimant has demonstrated that this output, or the absence of an output, gave rise to the damage.
In these situations, the burden of proof falls on the defendant to demonstrate to the court that their non-compliance did not cause the harm.
Claimants may be the injured person themselves, or an individual or entity that has assumed the injured party’s legal rights to claim damages, such as an insurance company or the heirs of a deceased person.
The AI Liability Directive will likely become law within the next two years, while the EU plans to adopt the AI Act within the next year.
These proposals oblige enterprises to establish comprehensive AI risk management frameworks for the development and deployment of high-risk AI. They also make it easier for enterprises to be held liable and pay damages for AI-induced harm, especially where they have not fully complied with the AI Act’s provisions.
Given the spotlight being shone on AI risk by European legislators, and the vast GDPR-style fines or damages enterprises may have to pay if they fall foul of EU rules, forward-thinking organisations should act now to establish robust AI risk management systems, ensuring that their AI risks are detected, minimised, monitored and prevented.
DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, and it is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any specific situation.
Schedule a call with one of our experts