Do Existing Laws Apply to AI? The AI Applications Most at Risk

January 12, 2024
Authored by Airlie Hilliard, Senior Researcher at Holistic AI

The integration of artificial intelligence (AI) into business is expanding across the globe, with over a third of companies using AI and a further 42% exploring how it can support business practices. Among large companies, such as those in the Fortune 500, adoption reaches 99%.

While this can bring considerable benefits to both businesses and consumers by removing the burden of tedious and repetitive tasks, streamlining processes, and allowing greater personalization, the widespread use of AI comes with risks, particularly if appropriate safeguards are not implemented.

As such, AI is increasingly being targeted by lawmakers around the world, including in the US at the state, federal, and local levels. The same is true in the EU, where the EU AI Act, with its risk-based approach, is set to become the global gold standard for AI regulation. Laws have also been proposed to regulate AI and automation on a sectoral basis, with industries like HR tech and insurance targeted.

Regardless of new AI-specific laws, it is important to recognize that AI systems still fall within the scope of existing laws; automation does not create a compliance loophole.

This has been repeatedly reiterated by regulators, including the Equal Employment Opportunity Commission (EEOC), Financial Conduct Authority (FCA), Federal Trade Commission (FTC), and Consumer Financial Protection Bureau (CFPB). Many lawsuits have already been brought against companies whose use of AI without appropriate safeguards or considerations resulted in breaches of existing laws.

In this blog post, we explore how some of these existing laws have been enforced against the misuse of AI.

Existing non-discrimination laws enforced against AI

One of the most widely acknowledged risks of AI is bias, which can be introduced from multiple sources, including system design and training data. Many of these biases mirror existing societal prejudices, which AI systems can then reproduce and perpetuate.

Unlike human biases, which are notoriously difficult to alleviate, bias in AI models can potentially be mitigated through both social and technical approaches. Even so, there have been multiple instances of algorithmic discrimination across sectors, resulting in several high-profile harms and lawsuits.

AI application types most at risk of bias claims:

  • Employment and Hiring: AI tools used for resume screening, candidate evaluation, and job matching can inadvertently perpetuate biases based on gender, race, age, or other protected characteristics. If these tools disproportionately screen out certain groups, they can violate anti-discrimination laws such as Title VII, which the U.S. Equal Employment Opportunity Commission (EEOC) enforces (a minimal disparate-impact check is sketched after this list).
  • Credit and Lending: AI algorithms in financial services, such as those determining creditworthiness or loan eligibility, are at risk. Biases in these systems can lead to claims under laws like the Equal Credit Opportunity Act (ECOA), especially if they result in unfair treatment based on race, gender, or other factors.
  • Healthcare: AI applications used for patient diagnosis, treatment recommendations, or resource allocation can be subject to bias. If these systems provide differential treatment based on race, gender, or age, they could violate anti-discrimination laws and ethical standards in healthcare.
  • Housing: AI tools used in real estate for property valuations, rental application processing, or mortgage lending can be scrutinized under the Fair Housing Act. Biases in these applications can lead to unequal treatment of individuals based on race, nationality, or other protected categories.
  • Criminal Justice: AI systems used in predictive policing, bail setting, or sentencing risk assessments can be prone to bias. These systems, if biased, can disproportionately affect certain racial or ethnic groups, leading to claims under anti-discrimination laws.
  • Education: AI-driven tools for student admissions, grading, or educational resource allocation can also be at risk. Biases in these systems could lead to unequal educational opportunities based on protected characteristics.
  • Insurance Underwriting: AI in insurance risk assessments can lead to biased outcomes. If certain groups are systematically disadvantaged based on race, gender, or other factors, these systems could face legal challenges.
  • Advertising and Marketing: AI algorithms that target advertisements based on user profiling can inadvertently lead to discriminatory practices, such as excluding certain demographics from seeing job, housing, or credit ads.
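
To make the employment risk concrete, a widely used first-pass screen is the EEOC's "four-fifths rule", which compares selection rates across groups. The sketch below is a minimal, hypothetical Python illustration: the group names and counts are invented, and a ratio below 0.8 is a conventional red flag prompting review, not proof of a legal violation.

```python
# Hypothetical illustration of the "four-fifths rule" as a quick
# disparate-impact screen for an automated hiring funnel. Group names,
# counts, and the 0.8 threshold are illustrative assumptions; a low
# ratio flags a system for review, it is not a legal test on its own.

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to the ratio of its selection rate to the highest
    group's selection rate; outcomes maps group -> (selected, applicants)."""
    rates = {group: selected / applicants for group, (selected, applicants) in outcomes.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Invented screening outcomes: (candidates passed, candidates screened).
ratios = adverse_impact_ratios({"group_a": (48, 120), "group_b": (75, 125)})
for group, ratio in ratios.items():
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
# group_a's rate is 0.40 vs. group_b's 0.60, a ratio of 0.67: flagged.
```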

Let’s take a deeper dive into use cases and industries that have already faced lawsuits under existing anti-discrimination laws.

In the HR tech sector, a lawsuit was brought against ATS provider Workday for alleged age, disability, and racial discrimination in violation of Title VII of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the Age Discrimination in Employment Act of 1967, and the ADA Amendments Act of 2008.

Plaintiff Derek Mobley, a Black man over 40 with a disability, claims to have applied, without success, for up to 100 jobs at companies he believes use Workday, despite holding a bachelor's degree in finance and an associate degree in network systems administration.

As such, Mobley filed lawsuit 4:23-cv-00770 on behalf of himself and others in similar situations, namely African American applicants, candidates over 40, and applicants with disabilities. The lawsuit targets the allegedly discriminatory screening process that, from June 3, 2019 to the present, has prevented these individuals from being referred or permanently hired for employment.

The case is still open and, once resolved, could set a precedent for AI-based discrimination. Meanwhile, the EEOC recently settled an age discrimination lawsuit with iTutorGroup for $365,000 over software that automatically rejected applicants based on age.

Outside of employment decisions, a discrimination lawsuit has also been brought against insurance provider State Farm alleging that the company discriminates against Black policyholders, in violation of the Fair Housing Act (42 U.S.C. § 3604(a)-(b) and 42 U.S.C. § 3605).

The class action, 1:22-cv-07014, was filed by State Farm policyholder Jacqueline Huskey and is supported by a study from the Center on Race, Inequality, and the Law at the NYU School of Law. The study surveyed 800 Black and white homeowners and found disparities in how claims from white and Black policyholders are handled.

Black policyholders experienced prolonged delays and had to exchange more correspondence with State Farm agents than other policyholders, and their claims were met with greater suspicion than those of their white counterparts.

The lawsuit alleges that this disparate treatment results from the algorithms and tools State Farm deploys from third-party vendors to automate its claims processing. In particular, the lawsuit identifies Duck Creek Technologies, a provider of claims management and fraud-detection tools, as a potential source of the alleged discrimination, with the use of natural language processing alleged to have produced negative biases in voice analytics for Black versus white policyholders.

Like the Workday lawsuit, the State Farm lawsuit is still open, but these cases highlight the fact that existing non-discrimination laws can and will be applied to AI and automated decision systems.

Lawsuits brought against AI under existing biometric and data protection laws

It is not only the outcomes of AI systems that fall under existing law; the data used to train and operate these models is also regulated. Consequently, multiple lawsuits have been initiated against companies that unlawfully use biometric data in connection with their AI systems.

Highest-risk AI application types under biometric and data protection laws (a minimal data-handling sketch follows the list):

• Facial Recognition Systems:

  • Risk Factors: Use in surveillance, security, and identity verification.
  • Regulatory Concerns: Potential violation of privacy rights and consent requirements under laws like the General Data Protection Regulation (GDPR) in the EU, Biometric Information Privacy Act (BIPA) in Illinois, USA, and other similar regulations globally.

• Fingerprint and Iris Scanning:

  • Risk Factors: Used in access control systems, time and attendance tracking, and law enforcement.
  • Regulatory Concerns: Issues with consent, data minimization, and storage limitations under data protection laws.

• Voice Recognition and Analysis:

  • Risk Factors: Deployment in customer service, security systems, and virtual assistants.
  • Regulatory Concerns: Concerns around consent, data retention, and purpose limitation, especially under GDPR and similar privacy laws.

• Health Data Analysis:

  • Risk Factors: Use in predictive healthcare, personalized medicine, and patient monitoring.
  • Regulatory Concerns: Compliance with the Health Insurance Portability and Accountability Act (HIPAA) in the USA, GDPR in the EU for handling sensitive health data, and other health data protection regulations.

• Emotion Recognition:

  • Risk Factors: Applications in marketing research, user experience design, and surveillance.
  • Regulatory Concerns: Privacy issues, especially related to consent and legitimate interest under data protection laws.

• Gait Analysis:

  • Risk Factors: Use in surveillance, sports analytics, and healthcare.
  • Regulatory Concerns: Potential privacy infringements and the need for explicit consent under various data protection laws.

• AI-Powered Personal Assistants:

  • Risk Factors: Collection and processing of personal data for personalized interactions.
  • Regulatory Concerns: Compliance with user consent, data minimization, and transparency requirements under laws like GDPR.

• Behavioral Prediction:

  • Risk Factors: Use in advertising, e-commerce, and predictive policing.
  • Regulatory Concerns: Issues related to profiling, consent, and the right to explanation under data protection regulations.
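
To illustrate what consent, purpose limitation, data minimization, and retention limits from the list above can look like in practice, here is a deliberately simplified, hypothetical Python sketch. The record fields, one-year retention window, and hash-based "template" are assumptions made for illustration, not a compliance recipe under BIPA or the GDPR.

```python
# Hypothetical sketch of BIPA/GDPR-style checks before biometric processing.
# All names, fields, and the retention window are illustrative assumptions.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # assumed policy-defined storage limit

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str            # purpose limitation: what the data may be used for
    written_consent: bool   # BIPA, for example, requires a written release
    collected_at: datetime

def may_process(consent: ConsentRecord, purpose: str, now: datetime) -> bool:
    """Allow processing only with written consent, for the stated purpose,
    and within the retention window."""
    return (
        consent.written_consent
        and consent.purpose == purpose
        and now - consent.collected_at <= RETENTION
    )

def minimize(raw_image: bytes) -> str:
    """Data-minimization stand-in: retain a derived template (here, a hash)
    rather than the raw image once the needed features are extracted."""
    return hashlib.sha256(raw_image).hexdigest()

now = datetime.now(timezone.utc)
consent = ConsentRecord("user-42", "identity_verification", True, now)
if may_process(consent, "identity_verification", now):
    template = minimize(b"<raw image bytes>")
    print("stored template", template[:16], "...; raw image discarded")
else:
    print("processing blocked: consent missing, purpose mismatch, or retention expired")
```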

Let’s take a deeper dive into use cases and industries that have already faced lawsuits under existing biometric and data protection laws and regulations.

One company that has been subject to legal action in multiple countries is Clearview AI, which uses images scraped from the internet and social media to build a database of facial images that it then provides to law enforcement.

Since the company did not inform individuals that it was collecting facial images or specify any storage period, it violated data protection laws in multiple countries. For example, Italy's data protection authority (Garante per la Protezione dei Dati Personali) fined the company €20 million under the GDPR, banned it from monitoring, storing, and processing the biometric information of individuals in Italy, and ordered it to delete all existing data belonging to Italians. Similar action was brought against the company in Illinois by the American Civil Liberties Union for violating Illinois’ Biometric Information Privacy Act (BIPA).

Also in Illinois, a case was brought against Prisma Labs Inc. by Jack Flora, Nathan Matson, Courtney Owens, and D.J. for allegedly failing to disclose the collection and storage of facial-geometry biometric data.

Prisma Labs develops mobile apps for editing and stylizing digital images and videos, and its Lensa app is designed for retouching facial images. To train the algorithms used by the app, Prisma collects the facial geometry of uploaded images. The plaintiffs claim that Prisma has not informed users in writing that this biometric data is collected and stored by Lensa, and that the language in its privacy policy is too vague to clearly disclose the collection and storage of the data. Lawsuit 3:23-cv-00680 therefore asserts that Prisma’s lack of disclosure violates BIPA (sections 15(a) through 15(d)) and seeks damages of up to $5 million.

In the insurance sector, Lemonade Inc. has faced a lawsuit over the unlawful collection of data points from policyholders, particularly in relation to facial recognition. Lemonade uses AI chatbots for many of its insurance processes, extracting 1,600 data points from just 13 questions.

Although Lemonade’s Privacy Pledge claims that the company does not collect, require, or share policyholders’ biometric information, a now-deleted tweet from the company claimed that its AI technology could extract non-verbal cues from videos submitted as claims evidence, implying that the company relied on facial recognition for fraud detection. Claimant Mark Pruden therefore brought a case against Lemonade for violation of New York’s Deceptive Trade Practices Act, and the lawsuit, 1:21-cv-07070, was settled in 2022 for $4 million in damages.

Lawsuits brought against AI for copyright infringement

Finally, the proliferation of generative AI in the past year has resulted in many lawsuits against the developers of these tools, which use vast amounts of data to train complex models.

AI application types most at risk of copyright claims:

• Content Generation and Repurposing:

  • Risk Factors: AI tools that generate text, images, music, or videos, potentially replicating the style or substance of copyrighted works.
  • Regulatory Concerns: Violation of copyright laws if AI-generated content closely resembles existing copyrighted material without permission.

• Automated News Aggregators:

  • Risk Factors: Use of algorithms to compile news from various sources, potentially reproducing copyrighted articles or excerpts.
  • Regulatory Concerns: Potential infringement if content is reproduced without proper licensing or falls outside fair use exceptions.

• Deepfakes and Synthetic Media:

  • Risk Factors: Creation of realistic fake videos or audio recordings that might use copyrighted images, videos, or voice recordings.
  • Regulatory Concerns: Copyright infringement if deepfakes utilize copyrighted elements without authorization.

• AI in Music Composition:

  • Risk Factors: AI algorithms that compose music, which might inadvertently replicate existing copyrighted melodies or harmonies.
  • Regulatory Concerns: Infringement risks if AI-composed music is not sufficiently original or borrows heavily from copyrighted works.

• AI-driven Art and Graphic Design:

  • Risk Factors: AI tools that create artworks or designs, potentially mimicking the style or elements of existing copyrighted works.
  • Regulatory Concerns: Possible infringement if AI-generated art is derivative of copyrighted material.

• Machine Learning Models Trained on Copyrighted Data:

  • Risk Factors: Use of copyrighted text, images, or other data to train machine learning models without proper licensing.
  • Regulatory Concerns: Infringement if the training process involves making unauthorized copies of copyrighted material (a minimal license-filtering sketch follows this list).

• Automated Video or Audio Editing Software:

  • Risk Factors: Software that edits or modifies existing video or audio content, potentially using copyrighted material.
  • Regulatory Concerns: Risks of infringement if the editing process creates derivative works from copyrighted content without permission.

• Text and Data Mining Tools:

  • Risk Factors: Tools that analyze large datasets, which may include copyrighted texts or publications.
  • Regulatory Concerns: Potential infringement if the mining process involves reproducing copyrighted material.
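
For the training-data risk flagged above, one mitigation-side illustration is filtering a corpus by recorded license metadata before training. Everything in the sketch below (the license allowlist, the record fields, the documents) is hypothetical; a real pipeline would also need provenance verification and legal review, since a metadata string match alone proves nothing.

```python
# Hypothetical sketch: excluding documents without a cleared license from
# a training corpus. The allowlist and all records are invented examples.
from dataclasses import dataclass

# Assumed allowlist of licenses a (hypothetical) team has cleared for training.
ALLOWED_LICENSES = {"cc0", "cc-by-4.0", "mit", "public-domain"}

@dataclass
class Document:
    doc_id: str
    text: str
    license: str  # e.g., taken from dataset metadata or a provenance store

def training_eligible(doc: Document) -> bool:
    """Keep only documents whose recorded license is on the allowlist."""
    return doc.license.lower() in ALLOWED_LICENSES

corpus = [
    Document("d1", "An openly licensed article...", "CC-BY-4.0"),
    Document("d2", "Text of a contemporary novel...", "all-rights-reserved"),
    Document("d3", "A public-domain classic...", "public-domain"),
]

train_set = [d.doc_id for d in corpus if training_eligible(d)]
excluded = [d.doc_id for d in corpus if not training_eligible(d)]
print("training on:", train_set)                      # ['d1', 'd3']
print("excluded pending licensing review:", excluded) # ['d2']
```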

Let’s take a deeper dive into use cases and industries that have already faced lawsuits under existing copyright laws.

For example, OpenAI, the developer of ChatGPT, has been involved in several lawsuits over claims of copyright infringement in the training of its models. Most recently, the Authors Guild filed a lawsuit claiming that OpenAI used its members’ works of fiction to train its AI models without permission or compensation. A similar lawsuit was brought against OpenAI earlier in 2023 by authors Paul Tremblay and Mona Awad, who likewise assert that OpenAI used their books to train ChatGPT without their permission, thereby violating copyright laws.

OpenAI is not the only provider of generative AI models to be targeted by legal action. Stability AI, the developer of the AI image generator Stable Diffusion, has also been subject to copyright lawsuits. For example, Getty Images has filed a complaint against Stability AI for using more than 12 million copyrighted Getty photos without permission or compensation. Similarly, California resident Sarah Andersen, author of a webcomic, has sued Stability AI alongside fellow artists over the use of copyrighted images to train its generative models.

Conversely, a DC court has ruled that outputs generated by artificial intelligence (AI) systems cannot be granted copyright protection, reserving this protection solely for works produced by humans. Accordingly, the Copyright Office rejected an application from computer scientist Stephen Thaler to register a work generated autonomously by his AI system. Likewise, other applications for copyrights on AI-generated artworks have been rejected in the US.

Prioritize compliance

Courts and governmental agencies are increasingly cracking down on the illegal use of AI under current laws, emphasizing the need for ongoing risk management and compliance when using the technology.

With the wave of upcoming AI regulation, it is more important than ever to ensure compliance with both new and existing laws to avoid legal action and heavy penalties – up to 7% of annual worldwide turnover in the case of the EU AI Act, for example.

To find out how Holistic AI can help you with AI governance, risk, and compliance, get in touch at we@holisticai.com.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
