Artificial intelligence (AI) is being integrated into businesses across the globe, with over a third of companies using AI and a further 42% exploring how it can support their business practices. Among large companies, such as those in the Fortune 500, adoption reaches 99%.
While this can bring considerable benefits to both businesses and consumers by removing the burden of tedious and repetitive tasks, streamlining processes, and allowing greater personalization, the widespread use of AI comes with risks, particularly if appropriate safeguards are not implemented.
As such, AI is increasingly being targeted by lawmakers around the world, including in the US at the state, federal, and local levels. The same is true in the EU, where the EU AI Act is set to become the global gold standard for AI regulation through its risk-based approach. Laws have also been proposed to regulate AI and automation on a sectoral basis, targeting industries such as HR Tech and insurance.
Regardless of new AI-specific laws, it is important to recognize that AI systems are still within the scope of existing laws – automation does not create a loophole for compliance.
This has been repeatedly reiterated by regulators, including the Equal Employment Opportunity Commission (EEOC), the Financial Conduct Authority (FCA), the Federal Trade Commission (FTC), and the Consumer Financial Protection Bureau (CFPB). Many lawsuits have already been brought against companies whose use of AI without appropriate safeguards or considerations led them to violate existing laws.
In this blog post, we explore how some of these existing laws have been enforced against the misuse of AI.
One of the most widely acknowledged risks of AI is bias. Bias can be introduced from multiple sources, including system design and training data, and often mirrors existing societal prejudices, which AI systems then reproduce and perpetuate.
Unlike human biases, which are notoriously difficult to alleviate, bias in AI models can potentially be mitigated through both social and technical approaches. Nevertheless, algorithmic discrimination has already occurred across sectors, resulting in several high-profile harms and lawsuits.
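To illustrate what a technical bias check can look like in practice, the minimal sketch below computes a disparate impact ratio (the "four-fifths rule" often cited in US employment guidance) on hypothetical screening outcomes. The data, group definitions, and 0.8 threshold are illustrative assumptions, not details drawn from any of the cases discussed here.

```python
# Illustrative sketch: measuring disparate impact on a hypothetical automated
# screening tool's outputs. All data below is made up for demonstration only.

def selection_rate(outcomes):
    """Fraction of applicants who received a positive outcome (e.g., advanced to interview)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of the protected group's selection rate to the reference group's rate.
    A value below 0.8 is commonly treated as a red flag warranting further review."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# Hypothetical binary outcomes (1 = selected, 0 = rejected) from an automated screen.
over_40_outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # protected group
under_40_outcomes = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # reference group

ratio = disparate_impact_ratio(over_40_outcomes, under_40_outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # ~0.43 here, well below the 0.8 guideline
```

Checks like this are only one part of a technical mitigation strategy, but they show how disparities in automated outcomes can be surfaced before they become legal exposure.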
AI application types most at risk of bias claims:
Let’s take a deeper dive into use cases and industries that have already been met with lawsuits around anti-discrimination based on existing laws.
In the HR Tech sector, a lawsuit was brought against ATS provider Workday for alleged age, disability, and racial discrimination, in violation of Title VII of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the Age Discrimination in Employment Act of 1967, and the ADA Amendments Act of 2008.
Plaintiff Derek Mobley, a Black man over 40 with a disability, claims to have applied to as many as 100 jobs at companies he believes use Workday without obtaining a single position, despite holding a bachelor's degree in finance and an associate degree in network systems administration.
As such, Mobley filed the lawsuit 4:23-cv-00770 on behalf of himself and others in similar situations, namely African American applicants, candidates over 40, and disabled applicants. The lawsuit addresses the alleged discriminatory screening process that, from 3 June 2019 until present, has prevented these individuals from being referred or permanently hired for employment.
The case is still open and could set a precedent for AI-based discrimination once resolved. Meanwhile, the EEOC recently settled an age discrimination lawsuit with iTutorGroup for $365,000 over the company's automatic rejection of applicants based on age.
Outside of employment decisions, a discrimination lawsuit has also been brought against insurance provider State Farm on the basis that the company discriminates against Black policyholders, allegedly violating the Fair Housing Act, 42 U.S. Code § 3604(a)-(b) and 42 U.S. Code § 3605.
The class action 1:22-cv-07014 was filed by State Farm policyholder Jacqueline Huskey and is supported by a study from the Center on Race, Inequality, and the Law at the NYU School of Law. The study surveyed 800 Black and white homeowners and found disparities in how claims from white and Black policyholders were handled.
Black policyholders experienced prolonged delays in communicating with State Farm agents and had to exchange more correspondence than other policyholders to resolve their claims. Their claims were also met with greater suspicion than those of their white counterparts.
The lawsuit alleges that this disparate treatment is the result of the algorithms and tools State Farm deploys from third-party vendors to automate its claims processing. In particular, the lawsuit identifies Duck Creek Technologies, a provider of claims management and fraud-detection tools, as a potential source of the alleged discrimination. The use of natural language processing is alleged to have resulted in negative biases in voice analytics for Black versus white policyholders.
Like the Workday lawsuit, the State Farm lawsuit is still open, but these cases highlight the fact that existing non-discrimination laws can and will be applied to AI and automated decision systems.
It is not only the outcomes of AI systems that fall under existing laws; the data used to train these models is also subject to them. Consequently, multiple lawsuits have been initiated against companies for the allegedly unlawful use of biometric data in connection with their AI systems.
Highest-risk AI application types regarding biometric and data protection laws:
• Facial Recognition Systems
• Fingerprint and Iris Scanning
• Voice Recognition and Analysis
• Health Data Analysis
• Emotion Recognition
• Gait Analysis
• AI-Powered Personal Assistants
• Behavioral Prediction
Let’s take a deeper dive into use cases and industries that have already been met with lawsuits around biometric and data protection laws and regulations.
One company that has been subject to legal action in multiple countries is Clearview AI, which scrapes images from the internet and social media to build a database of facial images that it then provides to law enforcement.
Since the company neither informed individuals that their facial images were being collected nor specified any storage period, it violated data protection laws in multiple countries. For example, Italy's data protection authority (Garante per la Protezione dei Dati Personali) fined the company €20 million under the GDPR, banned it from monitoring, storing, and processing the biometric information of individuals in Italy, and ordered it to delete all existing data belonging to Italians. Similar action was brought against the company in Illinois by the American Civil Liberties Union for violating Illinois' Biometric Information Privacy Act (BIPA).
Also in Illinois, a case was brought against Prisma Labs Inc. by Jack Flora, Nathan Matson, Courtney Owens, and D.J. for failing to disclose the collection and storage of biometric data on facial geometry.
Prisma Labs develops mobile apps for editing and stylizing digital images and videos, and its Lensa app is designed for retouching facial images. To train the algorithms used by the app, Prisma collects the facial geometry of uploaded images. The plaintiffs claim that Prisma has not informed users in writing that this biometric data is collected and stored by Lensa, and that the language used in its privacy policy is too vague to clearly disclose the collection and storage of data. The lawsuit 3:23-cv-00680 therefore asserts that Prisma's lack of disclosure violates sections 15(a) through 15(d) of BIPA, with damages of up to $5 million being sought.
In the insurance sector, a lawsuit has been brought against Lemonade Inc. for the allegedly unlawful collection of data points from policyholders, particularly in relation to facial recognition. Lemonade uses AI chatbots for many of its insurance processes, extracting 1,600 data points from 13 questions.
Although Lemonade's Privacy Pledge states that the company does not collect, require, or share policyholders' biometric information, a now-deleted tweet from the company claimed that its AI technology can extract non-verbal cues from videos submitted as claims evidence, implying reliance on facial recognition for fraud detection. Claimant Mark Pruden consequently brought a case against Lemonade for violation of New York's Deceptive Trade Practices Act, with the lawsuit 1:21-cv-07070 being settled in 2022 for $4 million in damages.
Finally, the proliferation of generative AI over the past year has resulted in many lawsuits against the developers of these tools, which rely on vast amounts of data to train complex models.
• Content Generation and Repurposing
• Automated News Aggregators
• Deepfakes and Synthetic Media
• AI in Music Composition
• AI-driven Art and Graphic Design
• Machine Learning Models Trained on Copyrighted Data
• Automated Video or Audio Editing Software
• Text and Data Mining Tools
Let’s take a deeper dive into use cases and industries that have already been met with lawsuits around copyright protection laws and regulations.
For example, ChatGPT developer OpenAI has been involved in several lawsuits over claims of copyright infringement in the training of its models. Most recently, the Authors Guild filed a lawsuit claiming that OpenAI used its members' works of fiction to train its AI models without permission or compensation. A similar lawsuit was brought against OpenAI earlier in 2023 by Paul Tremblay and Mona Awad, who also assert that OpenAI used their books to train ChatGPT without permission, in violation of copyright law.
OpenAI is not the only provider of generative AI models to be targeted by legal action. Stability AI, the developer of AI image generator Stable Diffusion, has also been subject to copyright lawsuits. For example, Getty Images has filed a complaint against Stability AI for using more than 12 million copyrighted Getty photos without permission or compensation. Similarly, California resident Sarah Andersen, author of a webcomic, has, alongside other artists, sued Stability AI over its use of copyrighted images to train generative models.
Conversely, a DC court has ruled that outputs generated by AI systems cannot be granted copyright protection, reserving this protection solely for works produced by humans. Accordingly, the Copyright Office rejected a copyright application from computer scientist Stephen Thaler relating to his Device for the Autonomous Bootstrapping of Unified Sentience (DABUS) system. Likewise, applications for copyrights on AI-generated artworks have also been rejected in the US.
Courts and governmental agencies are increasingly cracking down on the illegal use of AI under current laws, emphasizing the need for ongoing risk management and compliance when using the technology.
With the wave of upcoming AI regulation, it is more important than ever to ensure compliance with both new and existing laws to avoid legal action and heavy penalties – up to 7% of annual worldwide turnover in the case of the EU AI Act, for example.
Don't wait until it's too late—take proactive steps to ensure your AI strategies align with evolving regulations.
Schedule a demo to find out how Holistic AI's Governance Platform can help you prepare for upcoming AI legislation.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
Schedule a call with one of our experts