Policymakers around the world are increasingly recognizing the importance of regulation and legislation to promote safety, fairness, and ethics in the use of AI tools.
While the US has made the most significant progress with vertical legislation that targets specific use cases – such as the Illinois Artificial Intelligence Video Interview Act, New York City Local Law 144 (aka the bias audit law), and Colorado's insurtech law SB169 – Europe has made the most meaningful steps towards horizontal legislation that targets multiple use cases at once.
Indeed, the EU AI Act, with its risk-based approach, seeks to become the global gold standard for AI regulation. However, the European Commission is not the only authority making progress towards horizontal regulation – Canada, DC, California, and the US (at the federal level) have all proposed horizontal regulations targeting what lawmakers in each jurisdiction have identified as the most critical applications of AI.
Compared to their EU counterpart, these proposals have been less successful. The US federal Algorithmic Accountability Act, for example, has now been introduced three times: first in 2019 by Representative Yvette Clarke, then in 2022 by Senator Ron Wyden, and again in 2023 by Representative Clarke.
With this said, lawmakers at the local, state, and federal levels are clearly determined to impose more conditions on the use of algorithms and AI, meaning enterprises will soon need to navigate an influx of rules.
In this blog post, we’ll look at the history and potential future of the Algorithmic Accountability Act of 2023. Using the latest version of the bill as well as learnings from horizontal legislation from elsewhere in the world, we can extrapolate what horizontal legislation might look like in the US, and how organizations can begin to prepare.
Skip to:
Would the Algorithmic Accountability Act apply to my systems?
Would the Algorithmic Accountability Act apply to my organization?
What are the potential requirements of the Algorithmic Accountability Act?
De-risking horizontal AI regulation in the US in advance
What is the likelihood of horizontal AI regulation in the US?
Would the Algorithmic Accountability Act apply to my systems?

The Algorithmic Accountability Act targets systems used to make critical decisions as part of what is known as an augmented critical decision process.
In short, this is any process in which a system helps you make a decision or judgment that has a legal, material, or otherwise significant impact on a consumer's, stakeholder's, or coworker's life.
If you're using a system to dictate access to, availability of, or cost of critical areas of life – such as education, employment, essential utilities, family planning, financial services, healthcare, housing, or legal services – your system would likely be regulated by the proposed Act.
To quote the proposed legislation, an automated decision system (ADS) is defined as:
“any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.”
It’s worth noting that passive computing infrastructure supporting regulated systems is intermediary technology to which the Act would not apply. Such supporting systems include web hosting tools, data storage, and cybersecurity.
Would the Algorithmic Accountability Act apply to my organization?

If the Federal Trade Commission (FTC) has jurisdiction over your organization, the Algorithmic Accountability Act would apply.
Don't know whether the FTC has jurisdiction over your organization? Under Section 5(a)(2) of the Federal Trade Commission Act, its jurisdiction is established as:
To account for inflation, the amounts specified above will be increased each fiscal year by the percentage increase in the consumer price index.
Importantly, this will apply to entities across all 50 states, the District of Columbia, and any US territory.
What are the potential requirements of the Algorithmic Accountability Act?

At this juncture, there are two categories of requirements that organizations potentially regulated by the Act should know about: algorithmic impact assessments and annual summary reporting.
An algorithmic impact assessment is an ongoing evaluation of an automated decision system (ADS) or augmented critical decision process that studies its impact on consumers. The Act requires you to conduct impact assessments for ADSs that are used in augmented critical decision processes. These assessments should be carried out not only for the ADSs themselves but also for the overall augmented decision processes. Importantly, impact assessments need to occur both before and after the deployment of these systems.
In terms of conducting the impact assessment, there are 11 key requirements:
Secondly, the Act would require covered entities to submit an annual summary report of the impact assessments to the FTC.
Summary reports will need to include the following:
Documentation related to the above must be maintained for at least 3 years after the tool is retired.
Interestingly, any of these requirements that are not feasible to comply with must be documented as such, suggesting some leniency.
The proposed Act notes that certain documentation of assessments may only be possible at particular stages of development and deployment, allowing some flexibility in the process.
Finally, the Act would also impose some requirements on the FTC itself.
If the Act does pass this time around, the FTC would be required to publish a public report summarizing the information provided in the summary reports it receives. The bill would also require the FTC to establish a repository containing a subset of the information about each ADS and augmented critical decision process for which summaries were received, updated quarterly. The stated purpose of this is to give consumers greater insight into how decisions are made about them, as well as to allow researchers to study the use of these systems.
To support compliance, the FTC would also be required to publish guidelines on how the requirements of the impact assessment could be met, including resources developed by the National Institute of Standards and Technology (NIST).
Further, the FTC would provide training materials to support the determination of whether entities are covered by the law and update such guidance and training materials in line with feedback or common questions.
De-risking horizontal AI regulation in the US in advance

Organizations monitored by the FTC should be aware that, whether the Act passes or not, the FTC has signaled a more active role in enforcing the safe and ethical use of AI through a joint statement indicating that existing laws may also be used to regulate AI use.
While the specific filings required by the Act may or may not come to pass, AI users and vendors should keep in mind that systems meeting the criteria of supporting critical decisions are rife with regulatory risk under existing laws enforced by a variety of agencies.
With this said, the process-centered auditing required by the Act is likely best practice whether the Act passes or not. In particular:
What is the likelihood of horizontal AI regulation in the US?

The Algorithmic Accountability Act of 2019 and the Algorithmic Accountability Act of 2022 failed to make it out of the 116th and 117th Congresses, respectively, signaling that Congress was reluctant to pass its own algorithm law.
With the EU AI Act expected to dominate global discourse once finalized, it could be the case that the US will adopt the EU rules and introduce its own equivalent, eventually replacing proposals for an Algorithmic Accountability Act.
Indeed, the Algorithmic Accountability Act is much less mature and comprehensive than the EU AI Act, so it may not be adequate on its own to make algorithms safer and fairer – particularly without considering how it would interact with other existing laws targeting automated systems, such as New York City Local Law 144.
Only time will tell whether this third attempt will be successful, but it is clear that the US – at the local, state, and federal levels – is determined to impose more conditions on the use of algorithms and AI, which will soon see enterprises needing to navigate an influx of rules.
Preparing early is the best way to ensure compliance. Future-proof your organization with Holistic AI.
Schedule a call with a member of our specialist governance, risk management, and compliance team to find out more.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.