Organisations are increasingly investing in AI tools and systems to enhance their processes and products and to maximise value. AI's integration into business is expanding globally, with recent estimates suggesting that about 42% of companies are using AI in some way. However, the risks associated with AI systems can cause major harms, especially if appropriate business practices and safeguards are not put in place.
Many AI systems fall within the scope of existing laws, and the cost of non-compliance can be very high if an organisation is sanctioned. This cost is likely to rise as regulators around the world reiterate their authority and their resolve to take action against non-compliant AI systems and applications.
Beyond the financial risk of penalties, non-compliance can also damage reputation and reduce trust in systems or organisations, which in turn can have further financial implications. As such, compliance with existing laws, as well as AI-specific laws, is essential for those developing and deploying AI. This blog post explores some of the penalties that have been issued against AI systems under existing laws.
The majority of the penalties and fines issued for AI thus far have been in the EU, as authorities have cracked down on the processing of data by AI systems under the GDPR. With the upcoming EU AI Act also introducing heavy penalties (up to 7% of annual worldwide turnover), it is crucial that organisations take note of how to remain compliant. Here, we survey some of the heftiest fines issued for AI systems in the EU.
In 2021, the Luxembourg Data Protection Authority CNPD (Commission Nationale pour la Protection des Données) fined Amazon a whopping €746 million for GDPR non-compliance. Amazon appealed the fine in January 2024, and the appeal is currently before the court.
According to the CNPD, Amazon's targeted advertising system processed personal data and conducted behavioural advertising without proper consent. However, under local law, details of the case have not yet been made public while the appeal is underway.
This penalty marks the second-largest fine imposed under the GDPR since it was enacted, the largest being a historic €1.2 billion fine against Meta for violating privacy rules by transferring the personal data of people in the EU to the US.
In December 2022, the Irish Data Protection Commission (DPC) fined Meta Ireland €180 million for breaches of the GDPR relating to Instagram and €210 million for breaches relating to Facebook. Both platforms were found to be using profiling, behavioural advertising, and algorithms without adequately informing their users.
This occurred because Meta Ireland had changed its Terms of Service, moving away from consent as the means of legitimising personal data processing and relying instead on a 'contract' legal basis: users had to accept the new terms or lose access to the services.
The DPC found that Meta's communications to users about algorithmic processing and consent were not transparent, violating Articles 12 and 13(1)(c), which require transparent information and a specified legal basis for processing personal data. Meta was also found to have contravened Article 5(1)(a), which enshrines the principle that personal data must be processed lawfully, fairly, and transparently. The DPC nevertheless took the view that Meta Ireland could rely on the 'contract' legal basis for data processing.
However, resolving the violation was not straightforward. Under GDPR procedure, DPC draft decisions must be communicated to peer regulators in the EU, known as Concerned Supervisory Authorities (CSAs). While the CSAs agreed with parts of the DPC's draft decisions, they disagreed with the DPC permitting the 'contract' legal basis argument. As the DPC and the CSAs could not reach an agreement, the case was referred to the European Data Protection Board (EDPB), which found that Meta was not entitled to rely on the 'contract' legal basis for personal data processing and the delivery of behavioural advertising. The DPC stressed the necessity of clear communications about algorithmic processing and the importance of obtaining valid consent for profiling activities, and ordered Meta Ireland to comply within three months.
Similar to Luxembourg's case against Amazon, the French Data Protection Authority CNIL (Commission nationale de l'informatique et des libertés) fined Google a more moderate €50 million in 2019 under the GDPR for lacking a valid legal basis for the processing of personal data, particularly with regard to ad personalisation.
Specifically, Google was found to have breached the transparency and information obligations under Articles 12 and 13. The CNIL held that the consent Google claimed to obtain for processing data for ad personalisation was invalid, and observed that the violations were continuous breaches of the GDPR rather than a one-time infringement.
Moreover, the CNIL found that the option to consent to ad personalisation was pre-ticked, meaning individuals had to opt out rather than opt in. Users also had to agree to the entirety of Google's Terms of Service and Privacy Policy before creating an account, de facto agreeing to all processing operations carried out by the company beyond just ad personalisation. This violated the GDPR's requirement that consent be specific and given distinctly for each purpose.
In Italy, the Italian Data Protection Authority (Garante per la protezione dei dati personali) fined Clearview AI €20 million in 2022 under the GDPR for breaching provisions on transparency, information obligations, data subject access rights, and the requirement to designate a representative in the EU (Articles 5(1)(a), (b), and (e), 6, 9, 12-15, and 27).
An investigation into the company, which has also faced legal action in various jurisdictions including France and the UK, found that Clearview had scraped images from the internet without consent to build databases of facial images for law enforcement. As part of the action, Clearview was banned from monitoring, storing, and processing the biometric information of individuals in Italy, and was ordered to delete all existing data on people in Italy and to designate a representative in the EU.
With a less hefty fine of €300,000, the Berlin Data Protection Authority BlnBDI (Berliner Beauftragte für Datenschutz und Informationsfreiheit) penalised a Berlin-based bank in 2023 for failing to transparently inform an applicant of the reasons behind the automated rejection of their online credit card application. Without this specific information, the applicant could not meaningfully challenge the decision.
As such, the BlnBDI found the bank to have violated Article 22(3), Article 5(1)(a), and Article 15(1)(h) of the GDPR, which cover automated individual decision-making, the lawful and transparent processing of personal data, and the data subject's right of access, respectively.
With one of the lowest penalties for AI non-compliance in the EU, the Spanish Data Protection Authority AEPD (Agencia Española de Protección de Datos) fined GSMA, organiser of the annual Mobile World Congress trade show, €200,000 in 2023 for requiring attendees to undergo facial recognition for identity verification.
In contrast to the Clearview case, where consent for biometric processing was never given, attendees did opt into the processing of their biometric information for identity verification. However, the AEPD found that this consent lacked a valid legal basis, as it was not freely given for a specific purpose. Attendees were asked to upload identity documents to the facial recognition system BREEZ in order to attend the show, risking denial of entry if they did not, even if attending virtually.
Additionally, it was found that GSMA had failed to conduct an adequate Data Protection Impact Assessment (DPIA) prior to using this facial recognition system, violating Article 35 of the GDPR.
As in the EU, the UK's Information Commissioner's Office (ICO), the supervisory authority for data protection, has imposed large sanctions on companies for non-compliance with data protection and communications laws.
The ICO's first AI-related penalty came in 2020, when it fined Scottish company CRDNN Ltd a record £500,000 for breaching Regulations 19 and 24 of the Privacy and Electronic Communications Regulations (PECR), the maximum amount available for a breach of the PECR. The ICO also issued CRDNN an enforcement notice requiring it to comply with the law.
The fine came after CRDNN made more than 193 million unsolicited automated direct marketing calls in the span of just four months and refused to honour requests from those who had opted out of future calls, showing that there was no proper consent. The sheer volume of automated calls and the refusal to cooperate contributed to the size of the penalty.
Two years later, in 2022, the ICO imposed another AI-related fine for an automated marketing tool, this time against Royal Mail. The tool, Eloqua, had been sending emails to customers who had opted out of receiving them, in violation of Regulation 22 of the PECR, which requires valid consent for marketing materials.
Though a relatively small fine compared to other penalties, there were mitigating factors in Royal Mail's favour. For example, the company indicated a willingness to reform its marketing practices by undertaking a full data protection audit, and it self-reported the incident to the ICO despite there being no statutory requirement to do so. Under the PECR, companies must implement rigorous checks within automated systems to ensure compliance and prevent non-consented communications.
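By way of illustration only, the sketch below shows the kind of consent check such an automated system might apply before dispatching marketing messages: recipients are dropped unless an affirmative opt-in is on record and their address is absent from an opt-out (suppression) list. This is a minimal, hypothetical sketch, not the implementation of any real marketing tool, and all names in it (Recipient, filter_consented) are our own.

```python
# Hypothetical sketch of a PECR-style consent check for an automated
# marketing system. The types and names here are illustrative only.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Recipient:
    email: str
    has_valid_consent: bool  # affirmative opt-in recorded for marketing

def filter_consented(recipients: Iterable[Recipient],
                     suppression_list: Iterable[str]) -> List[Recipient]:
    """Keep only recipients who opted in and have not since opted out."""
    suppressed = {email.lower() for email in suppression_list}
    return [
        r for r in recipients
        if r.has_valid_consent and r.email.lower() not in suppressed
    ]

if __name__ == "__main__":
    recipients = [
        Recipient("a@example.com", True),
        Recipient("b@example.com", True),   # opted in, but later opted out
        Recipient("c@example.com", False),  # never gave valid consent
    ]
    opt_outs = ["b@example.com"]
    for r in filter_consented(recipients, opt_outs):
        print(f"OK to contact: {r.email}")  # prints only a@example.com
```

In practice, such checks would run against audited consent records at the moment of dispatch, but the principle is the same: no valid, current consent, no message.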
More recently, in one of the biggest sanctions from the ICO, TikTok was fined £12.7 million in 2023 for processing the personal data of children under 13 without proper consent. The social media platform was found to be using AI-driven profiling based on user interactions and demographics without providing clear and transparent information, in violation of the UK GDPR. Moreover, TikTok allowed over a million children under 13 to access its platform despite its own age rules, and was not transparent about how it processed personal data for both its core services and targeted advertising, leading to this heavy fine.
While the United States does not have a federal equivalent of the GDPR or the EU AI Act, it has still taken a number of actions against unlawful AI tools under existing laws, with multiple regulators cracking down on illegal AI use.
One of the earliest examples of this crackdown came from the Consumer Financial Protection Bureau (CFPB), which in 2022 fined Hello Digit, a fintech company that promotes automated savings, $2.7 million for using a faulty algorithm that caused overdrafts and penalties for its users.
The company was found to have violated the Consumer Financial Protection Act by engaging in deceptive acts and practices. Hello Digit misrepresented its tool by guaranteeing no overdrafts, promising reimbursements if overdrafts did occur, and pocketing earned interest after saying it would not. Along with the fine, Hello Digit was ordered to pay out all the overdraft reimbursement requests it had denied.
This momentum continued into 2023 when, in August, the Equal Employment Opportunity Commission (EEOC) settled a lawsuit with iTutorGroup for $365,000 over AI-driven age discrimination, marking the first US settlement involving AI-powered recruitment tools. The action was taken against iTutorGroup for violating the Age Discrimination in Employment Act, which protects those aged 40 and over from age discrimination, after the company used an algorithm that automatically rejected over 200 qualified applicants solely because of their age, with women over 55 and men over 60 automatically disqualified from consideration by the system.
As part of the EEOC's action, iTutorGroup is prohibited from automatically rejecting tutors over 40, or anyone on the basis of sex, and is expected to comply with all relevant anti-discrimination laws and to cooperate with the EEOC in adopting appropriate policies and practices to prevent unlawful discrimination in the future.
2023 also saw the first penalties for AI misuse in the legal field, when two lawyers from the law firm Levidow, Levidow & Oberman, P.C., Steven Schwartz and Peter LoDuca, were fined a combined $5,000 for citing fake cases generated by ChatGPT. Schwartz had used ChatGPT to produce the non-existent cases, which he then cited as precedent in a lawsuit he was arguing. Schwartz claimed he had no intent to deceive the court or his clients, and that he had not been aware that ChatGPT was a generative AI tool rather than a search engine.
While they did not break an AI-specific law, the lawyers were still fined for misusing an AI tool, and cases like this could set a precedent for penalising AI misuse even where no AI-specific legislation applies.
Outside of the West, China has also started to crack down on AI misuse. Although the country has not yet issued a large number of penalties for illegal AI tools, this is likely to change given the recent enactment of multiple laws relating to AI.
One penalty the country has issued, though, was against Didi, a ride-hailing company, in 2022. The Cyberspace Administration of China (CAC) fined the company 8.026 billion yuan (approximately USD 1.2 billion) for cybersecurity and data violations under the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law. The offences included the illegal collection of data from users' mobile phone albums, the excessive collection of facial recognition information, unclear communications to users about personal information processing, and the unauthorised analysis of passenger travel information. The CAC treated the case seriously, having also ordered the app's removal from app stores during its investigation.
The CAC also stressed accountability with this penalty, with Didi's Chairman and CEO and its President each subjected to an additional fine of 1 million yuan (approximately USD 148,000). These personal data laws have also made it difficult for Chinese companies to remain listed on US stock exchanges, and Didi announced plans to withdraw from the US market and relist in Hong Kong in December 2021.
With regulators cracking down on the illegal use of AI under current laws, there is a clear need for ongoing risk management and compliance. While compliance will undoubtedly have financial implications for organisations, the cost of non-compliance is also rising, with several significant sanctions imposed on large organisations in various countries. As such, it will be far more costly not to comply than to comply.
With a wave of AI-specific regulation on the horizon, it is more important than ever to ensure compliance with both new and existing laws to avoid legal action and heavy penalties.
Schedule a call with a member of our specialist team to find out more about how Holistic AI's Governance platform can help you maximise compliance and empower trust in AI.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.