
How is the FTC Regulating AI?

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Sep 22, 2023

Across the US, there is a clear impetus to regulate AI, with initiatives being proposed at the federal, state, and local levels. While most of the action is coming from policymakers, with several laws being proposed to regulate the use of AI across the US, regulators are also signaling that the regulation of AI is becoming an increasing priority.

One of these regulators is the Federal Trade Commission (FTC), which is responsible for enforcing civil antitrust law and protecting consumers. The FTC is becoming increasingly vocal about the risks of AI to consumers and competition, and the organisation has reiterated that existing laws also apply to AI. In this blog post, we provide an overview of the key signals that the FTC has given on its plans to regulate the technology.

Warning against the use of biased AI

In April 2021, the FTC published a blog warning against the use of biased or discriminatory AI practices following a report on big data and machine learning, a two-day hearing on algorithms and AI, and the issuing of guidance on AI and algorithms.

In particular, the blog post draws attention to Section 5 of the FTC Act, which prohibits unfair or deceptive practices, including the use of biased algorithms. Zeroing in on the use of AI in credit specifically, the blog also highlights the applicability of the Fair Credit Reporting Act, which may apply when algorithms are used to deny people employment, housing, credit, insurance, or other benefits, and the Equal Credit Opportunity Act, which prohibits discrimination based on race, colour, religion, national origin, sex, marital status, age, or the receipt of public assistance.

To prevent unfair outcomes, the blog post urges companies using AI to ensure that the training data for their algorithms is representative and high-quality, and to actively identify and mitigate biased outcomes from the use of algorithms. Additionally, the blog post calls for transparency about the use of algorithms, the use of independent standards, and assurance that claims made about the capabilities of AI systems are not exaggerated. The FTC also highlights the importance of disclosing how data is used to train AI models and of clear accountability for algorithm performance and outputs.

Report on online harms

In June 2022, the FTC issued a report to Congress titled Combatting Online Harms Through Innovation under its obligations in the 2021 Appropriations Act, which directed the FTC to study and report on how AI could be used to identify or mitigate a variety of online harms, with a particular focus on issues such as scams, deepfakes, hate crimes, and harassment. The report highlighted that many of these issues on social media are caused by the technology underpinning the platforms, including AI, meaning that AI may be deployed to mitigate problems that AI itself helped create. As such, the report recommends that human input is still needed alongside AI tools in the monitoring and removal of harmful content, and that these human efforts should be supported by adequate training and resources.

Among several other recommendations, the report also emphasised the need for AI transparency, including ensuring that AI used for content moderation is explainable and contestable. Connected to this is the need for accountability in the use of AI to identify and remove harmful content, particularly regarding data practices and the outcomes of a tool's use, with the developers of these tools taking responsibility for their inputs and outputs.

Warning regarding claims about AI in advertising

Noting that artificial intelligence can be an ambiguous term with no universally accepted definition, a blog post published by the FTC in February 2023 reminds businesses not to unrealistically exaggerate the capabilities and applications of AI systems in their advertisements or make claims unsubstantiated by evidence.

Interestingly, the blog post also encourages businesses to clarify whether they are actually using AI, reiterating that a tool developed using AI is not equivalent to the tool itself having AI. Finally, the blog post emphasises that a lack of transparency about how a tool works, or a lack of understanding of the tool, is not a valid reason for failing to reduce and prevent foreseeable risks, highlighting the importance of implementing a risk management framework and maximising transparency.

Joint statement on AI and automated systems

On 25 April 2023, the Equal Employment Opportunity Commission (EEOC), the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division (DOJ), and the Federal Trade Commission (FTC) issued a joint statement on artificial intelligence (AI) and automated systems. Expressing a commitment from the four bodies to ensure that AI does not violate the core principles of fairness, equality, and justice embedded in US federal laws, the statement converges with previous FTC publications, reiterating that existing laws apply to AI and will be duly enforced.

The key role of regulators

Following this, in May 2023, Lina Khan, chair of the FTC, published an opinion essay in the New York Times urging action to regulate AI, particularly in light of the widespread adoption of generative AI since the launch of ChatGPT.

Khan reiterated the FTC’s commitment to promoting fair competition and protecting consumers from unfair or deceptive practices by enforcing existing rules, echoing the message of the earlier warning against biased AI and strengthening it with an explicit warning against the use of biased training data.

The essay also highlights the role of regulators in preventing a small number of firms from dominating the market through their access to resources such as computing power and data, thereby hindering competition, a message that the UK’s House of Commons Committee on Science, Innovation and Technology has also echoed.

Addressing generative AI and competition concerns

Competition concerns, along with bias, are emerging as a key area of focus for the FTC, with the FTC’s Bureau of Competition and Office of Technology publishing a joint blog post in June 2023 highlighting how generative AI is raising competition concerns.

In particular, access to the large datasets required to train these models as well as the necessary technical expertise could see a handful of companies monopolise the market, which could dampen competition and therefore impede innovation. By contrast, healthy competition would help the maximum benefit of these technologies to be realised.

FTC investigation into generative AI

After raising concerns about the impact of generative AI on the market, the FTC announced in July 2023 that it was launching an investigation into OpenAI, the maker of ChatGPT, over consumer protection concerns, sending the company a demand for records on how it manages the risks of its AI models and for details of complaints about the chatbot making false claims about individuals.

In particular, the investigation seeks to establish whether OpenAI engaged in unfair or deceptive practices that in turn resulted in reputational harm to consumers, as well as whether data was leaked in a March 2023 incident in which the chatbot displayed other users’ conversation history and payment information.

While the investigation is still ongoing, it serves as an important reminder that the FTC, and other regulatory bodies, can and will enforce existing regulations for the misuse of AI.

Ongoing compliance is key

With the use of AI proliferating around the world and increasingly being used in high-risk applications such as credit scoring, insurance, and employment decisions, ongoing compliance and risk management are essential to protect against preventable harms and ensure that AI is an asset, not a liability.

Schedule a call with our experts to find out how Holistic AI can help you with AI Governance, Risk, and Compliance.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
