On 5 June 2023, U.S. Representative Ritchie Torres of New York's 15th Congressional District introduced the AI Disclosure Act of 2023, a federal bill that seeks to create greater transparency around the use of generative AI.
The bill contains a single substantive requirement: any output generated by artificial intelligence must be accompanied by the disclaimer "Disclaimer: this output has been generated by artificial intelligence."
While the text of the Act defines neither generative artificial intelligence nor artificial intelligence in general, the statement from Torres's office indicates that the law would apply to any videos, photos, text, audio, or other material generated by AI, including text generated by large language models such as ChatGPT.
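To make the requirement concrete, below is a minimal sketch of what compliance could look like for text generation. It is illustrative only: `generate_text` is a hypothetical stand-in for any generative model call, and the bill does not prescribe how the disclaimer is attached, only that it accompanies the output.

```python
# Illustrative sketch only: a hypothetical wrapper that attaches the exact
# disclaimer text required by the AI Disclosure Act of 2023 to AI-generated
# output. `generate_text` stands in for a real generative model call.

REQUIRED_DISCLAIMER = (
    "Disclaimer: this output has been generated by artificial intelligence."
)


def generate_text(prompt: str) -> str:
    # Placeholder for an actual generative model call (e.g. an LLM API).
    return f"Model response to: {prompt}"


def generate_with_disclosure(prompt: str) -> str:
    """Return AI-generated text with the required disclaimer appended."""
    output = generate_text(prompt)
    return f"{output}\n\n{REQUIRED_DISCLAIMER}"


print(generate_with_disclosure("Summarise the AI Disclosure Act of 2023."))
```

For non-text outputs such as images, audio, or video, the disclaimer would presumably need to accompany the material in an analogous way, for instance as an adjacent label or caption.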
Under the bill, using generative AI without this disclaimer would be treated as a violation of a rule defining an unfair or deceptive act or practice under section 18(a)(1)(B) of the Federal Trade Commission Act (15 U.S.C. 57a(a)(1)(B)). Accordingly, the Federal Trade Commission would enforce the Act, with entities suspected of non-compliance subject to the same penalties, and entitled to the same privileges and immunities, as under the Federal Trade Commission Act.
Although the scope of the AI Disclosure Act is not explicitly outlined in the text, Section 5 of the Federal Trade Commission Act applies to most entities engaged in or affecting commerce, with carve-outs for banks, common carriers, and certain other entities, meaning the Act likely has the same scope.
Although brief, the AI Disclosure Act represents an important step towards algorithmic transparency: those interacting with AI systems and AI-generated outputs would be better informed about what they are engaging with, allowing them to make decisions accordingly.
However, this Act is not the first initiative attempting to increase algorithmic transparency. The Illinois Artificial Intelligence Video Interview Act, New York City Local Law 144, and Maryland's HB1202 restriction on the use of facial recognition services in job interviews, all in the HR Tech space, also require disclosure of, or consent to, the use of AI and automated systems in employment decisions.
The EU AI Act also imposes transparency requirements for AI systems used in the EU. Taking a risk-based approach to regulation, the EU AI Act categorises systems as posing minimal risk, limited risk, high risk, or an unacceptable level of risk, and imposes transparency requirements on both the limited-risk and high-risk categories. In the latest version of the text, the high-risk category includes biometric and biometrics-based systems; systems used in the management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to and enjoyment of essential private services and public services and benefits; migration, asylum, and border control management; the administration of justice and democratic processes; and systems used by law enforcement. Transparency requirements will also apply to systems that interact with humans, emotion recognition or biometric categorisation systems, and systems that produce generated or manipulated content.
There is a wave of regulation coming that will have significant implications for AI systems, including those that generate content. Preparing early by establishing appropriate notification and disclosure procedures is the best way to ensure compliance with transparency requirements. To find out how Holistic AI can help you get compliant, get in touch at we@holisticai.com.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.