
Requirements for ‘High-Risk’ AI Applications: Comparing AI Regulations in the EU, US, and Canada

Authored by Siddhant Chatterjee, Public Policy Strategist at Holistic AI
Published on Jun 7, 2023

Artificial Intelligence (AI) has emerged as a transformative force, revolutionising industries and societies worldwide. However, along with its potential for positive impact, AI also poses significant risks that necessitate robust regulation. As a result, governments across the globe have actively ramped up efforts to ensure the responsible development and deployment of AI systems. In this blog, we provide an in-depth overview of AI regulations in three key regions: the European Union (EU), the United States, and Canada.

An overview of high-risk AI application requirements

We will primarily concentrate on the requirements imposed on “High-Risk Systems,” highlighting the sectors in which these systems operate and the regulatory mechanisms implemented to address potential harm, ensure transparency, and protect fundamental rights. It is important to note that AI systems are defined differently by the various regulations, and we refer to this emerging group of AI applications as ‘High-Risk’ in a broad sense.

Key takeaways:

  1. The EU AI Act establishes a risk-based regulatory framework for AI systems, classifying them into different risk categories and imposing corresponding obligations. It prohibits AI systems posing unacceptable risks and places stringent assessment, transparency, and risk management obligations on High-Risk AI Systems (HRAIS).
  2. The Algorithmic Accountability Act (AAA) in the United States would mandate companies to identify and resolve AI biases, focusing on Automated Decision Systems (ADS) used to make Critical Decisions, categorised as Automated Critical Decision Processes (ACDPs).
  3. The Stop Discrimination by Algorithms Act (SDAA) in Washington DC prohibits the use of algorithms that make decisions based on protected characteristics, and includes audit and transparency requirements.
  4. Assembly Bill 331 in California seeks to regulate automated decision tools that contribute to algorithmic discrimination, with obligations for both developers and deployers of such tools.
  5. The Artificial Intelligence and Data Act (AIDA) in Canada aims to establish a risk-based regulatory approach for AI systems, focusing on high-impact systems that may adversely affect human rights or pose risks of harm.

The AI Act, European Union

Seeking to establish global leadership on governing Artificial Intelligence, the EU AI Act lays down a risk-based regulatory framework in which AI systems are classified as posing minimal risk, limited risk, high risk, or unacceptable risk, with obligations proportional to the level of risk posed. While AI systems posing unacceptable risks are prohibited outright, the legislation places stringent obligations on High-Risk AI Systems (HRAIS), transparency requirements on systems with limited risk, and no obligations on systems with minimal risk.
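To make the tiered structure concrete, here is a minimal Python sketch that models the four risk categories and maps each to its broad obligations. The enum values and obligation labels are our own shorthand for illustration, not terms from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers the EU AI Act distinguishes."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Shorthand mapping from tier to obligations; the labels paraphrase
# the Act's broad requirements and are not statutory language.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: [
        "ex-ante conformity assessment",
        "risk management system",
        "transparency and instructions for use",
        "post-market monitoring",
    ],
    RiskTier.LIMITED: ["transparency requirements"],
    RiskTier.MINIMAL: [],  # no obligations under the Act
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```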

A system is considered a HRAIS if it is covered under Annex III of the EU AI Act and poses a significant risk of harm to an individual’s health, safety, or fundamental rights. The second condition was added in the Act’s latest compromise text, which was adopted by the leading committees in the European Parliament on 11 May 2023 [update: the text was passed by the European Parliament on 14 June 2023]. Interestingly, the Act allows providers who deem that their system does not pose significant risks to notify the supervisory authorities, which then have three months to review and object.

Eight broad use cases are considered High-Risk under the EU AI Act:

[Figure: The 8 broad use-cases classed as High-Risk applications under the EU AI Act]

Requirements for HRAIS:

Assessment:

HRAIS can be placed on the market only after:

(1) Fulfilling an ex-ante conformity assessment,
(2) Undergoing a Conformité Européenne (CE) certification scheme,
(3) Establishing a quality management system that is subject to independent audit,
(4) Conducting a Fundamental Rights Impact Assessment, and
(5) Implementing post-market monitoring plans.

Risk Management:

Mandates that a continuous and iterative risk management system be set up for HRAIS, comprising:

a. Identification and analysis of known and foreseeable risks,
b. Evaluation of risks based on post-market monitoring, and
c. Adoption of suitable risk management measures.

Transparency:

Providers of HRAIS must ensure that the operation of their systems is sufficiently transparent and accompanied by well-articulated instructions for use.

Reporting, Documentation and Notification:

Providers of HRAIS must report:

a. Technical documentation that demonstrates compliance with the legislation before the system is placed on the market,
b. Use of automated event-logging capabilities (sketched below), and
c. Serious incidents or malfunctions that may qualify as a breach of the EU AI Act.
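Of these duties, the event-logging requirement translates most directly into engineering practice. Below is a minimal sketch of an append-only event log for a high-risk system, assuming a JSON-lines file as the store; the field names and event types are illustrative, not prescribed by the Act.

```python
import json
from datetime import datetime, timezone

class EventLog:
    """Append-only JSON-lines event log, as one plausible way to satisfy
    the EU AI Act's automated event-logging duty for HRAIS."""

    def __init__(self, path: str):
        self.path = path

    def record(self, event_type: str, detail: dict) -> None:
        """Append one timestamped event to the log file."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,  # e.g. "prediction", "override", "incident"
            "detail": detail,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

# Hypothetical usage for a high-risk hiring system
log = EventLog("hrais_events.jsonl")
log.record("prediction", {"applicant_id": "abc-123", "outcome": "shortlisted"})
```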

Algorithmic Accountability Act, United States

First proposed in 2019 and touted as a key step towards greater AI transparency and accountability in the United States, the Algorithmic Accountability Act (AAA) seeks to mandate that companies identify and resolve biases in their AI systems. If enacted, the 2022 version of the legislation would be enforced by the Federal Trade Commission (FTC), empowering it to develop reporting guidelines and assessments, publish annualised aggregated trends on the data it receives, and conduct audits of AI systems developed by vendors and deployed by organisations to facilitate decision-making.

The AAA governs Automated Decision Systems (ADS) and places stricter obligations on ADS used to make Critical Decisions. Categorised as Automated Critical Decision Processes (ACDPs), these are automated processes that may have a legal, material, or similarly significant effect on an individual’s life, and cover the following categories:

[Figure: The 9 categories of Critical Decisions classed as High-Risk applications under the AAA]

Requirements for ADS/ACDPs:

Assessment:

Impact Assessments: Mandates covered entities to perform annual impact assessments of their ADS/ACDPs, which should include the following:

a. Details of the existing process that the ADS seeks to replace, with a comparison between the two
b. Stakeholder consultations
c. Privacy risk evaluations
d. Metrics and criteria for robust performance
e. Ongoing evaluation of the performance of ADS/ACDPs across different demographic groups
f. Training and education measures
g. Assessment of the need for guardrails or limitations on the use of the ADS
h. Documentation of metadata and other inputs used to develop, train, test, and maintain the ADS
i. Evaluation of the transparency mechanisms offered
j. Identification of potentially adverse effects and relevant mitigation measures
k. Documentation of processes such as ADS development, testing, and deployment
l. Tools, protocols, and standards to improve the ADS' performance, fairness, transparency, explainability, and privacy

Risk Management:

Provides ex-ante and ex-post risk management mechanisms through annual impact assessments.

Transparency, Reporting, Documentation and Notification:

(1) Summary Report: Directs covered entities to submit an annual summary report to the FTC (modelled in the sketch below), containing information on:

a. An explanation of the critical decision
b. Intended purpose
c. Stakeholders consulted
d. Performance methods and metrics
e. ADS performance across different demographic groups
f. Limitations on use of the ADS
g. Modalities to ensure ADS transparency and explainability
h. Potential harms and corresponding mitigation strategies

(2) Public Registry: Empowers the FTC to develop reporting guidelines for summary reports and directs it to build a public registry based on them. This registry is envisaged to provide consumers with key information about data sources, high-level metrics and, where available, documentation of grievance redressal mechanisms.
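Because the summary report is essentially a fixed set of fields, it can be prototyped as a plain data structure. The dataclass below mirrors items (a) to (h); the field names are our own invention, since the FTC's actual reporting format would only be fixed by its future guidelines.

```python
from dataclasses import dataclass, asdict

@dataclass
class SummaryReport:
    """Illustrative structure for an AAA annual summary report.
    Fields mirror items (a)-(h) above but are not an official schema."""
    critical_decision: str                     # a. explanation of the critical decision
    intended_purpose: str                      # b.
    stakeholders_consulted: list[str]          # c.
    performance_metrics: dict[str, float]      # d.
    demographic_performance: dict[str, float]  # e. per-group performance
    usage_limitations: list[str]               # f.
    transparency_modalities: list[str]         # g.
    harms_and_mitigations: dict[str, str]      # h. harm -> mitigation strategy

# Hypothetical example for a tenant-screening ADS
report = SummaryReport(
    critical_decision="Tenant screening for rental housing",
    intended_purpose="Rank rental applications",
    stakeholders_consulted=["tenant advocacy groups", "fair housing experts"],
    performance_metrics={"accuracy": 0.91},
    demographic_performance={"group_a": 0.90, "group_b": 0.88},
    usage_limitations=["human review required before any denial"],
    transparency_modalities=["public model factsheet"],
    harms_and_mitigations={"disparate impact": "quarterly bias audit"},
)
print(asdict(report))
```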

Stop Discrimination by Algorithms Act, Washington DC

Introduced in 2021, the Stop Discrimination by Algorithms Act (SDAA) seeks to prohibit organisations operating in Washington DC from deploying algorithms that make decisions based on protected characteristics such as race, religion, colour, sexual orientation, and income level, among others. Enforceable by DC’s Attorney General (OAG-DC), the SDAA would mandate audits and specific transparency requirements, with fines of $10,000 for individual violations. The proposed legislation was reintroduced in February 2023 and covers algorithmic processes used to provide ‘important life opportunities’ in the following domains:

[Figure: Domains of ‘important life opportunities’ classed as High-Risk applications under the SDAA]

Requirements:

Assessment:

(1) Audit Requirement: Directs covered entities to audit their algorithmic capabilities on a yearly basis to determine cases of algorithmic discrimination based on protected characteristics. Additionally, mandates entities to create and retain an audit trail (containing details of the algorithms and training data) for at least five years for each determination.

(2) Impact Assessments: Mandates entities to conduct annual impact assessments of: a. existing algorithmic determination systems, and b. new algorithmic determination systems that have not yet been deployed.

Risk Management:

Provides ex-ante and ex-post risk management mechanisms through annual impact assessments.

Transparency:

(1) Public Notice: Directs covered entities to develop a one-page public notice, in English and in any other language spoken by at least 500 individuals in DC, describing how personal data is collected, processed, and used in algorithmic decision-making. These notices must be sent to individuals before algorithms are deployed for decision-making.

(2) Annual Report: Directs covered entities to submit an annual report to the OAG-DC containing information on:

a. Data, methodologies and optimisation criteria used to develop algorithms
b. Training data
c. Performance metrics to assess system robustness
d. Frequency of Risk and Impact Assessments
e. Description of each algorithmic determination
f. User grievances on algorithmic determinations
Reporting, Documentation and Notification:

Mandates that covered entities notify individuals of adverse action and provide an in-depth explanation of the factors on which the unfavourable algorithmic determination was based.

Covered entities are also directed to retain an audit trail (containing details of the algorithms and training data) for at least five years for each determination.
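The audit-trail duty (record each determination with its algorithm and training-data details, and keep it for at least five years) maps naturally onto a retention rule. A minimal sketch with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# SDAA requires retention for at least five years per determination
RETENTION = timedelta(days=5 * 365)

@dataclass
class AuditRecord:
    """One algorithmic determination; the fields are illustrative."""
    made_at: datetime
    algorithm_version: str
    training_data_ref: str
    determination: str

def may_purge(record: AuditRecord, now: datetime | None = None) -> bool:
    """A record may only be deleted once the retention window has passed."""
    now = now or datetime.now(timezone.utc)
    return now - record.made_at >= RETENTION

# Hypothetical record for a lending decision
rec = AuditRecord(
    made_at=datetime(2021, 3, 1, tzinfo=timezone.utc),
    algorithm_version="v1.4",
    training_data_ref="datasets/2021-02",  # hypothetical dataset pointer
    determination="credit denied",
)
print(may_purge(rec))  # True only after five years have elapsed
```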

Assembly Bill 331 on Automated Decision Tools, California

Like Washington DC, California is at the forefront of state-level AI regulation, aiming to enhance safety and fairness by proposing legislation to regulate automated tools used to make consequential life decisions. Assembly Bill 331 was introduced in January 2023 and seeks to prohibit the use of Automated Decision Tools (ADTs) that contribute to algorithmic discrimination. This is defined as differential treatment or impact that disfavours people based on their actual or perceived race, colour, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law.

The Bill establishes distinct obligations for Developers (entities that code, design, or substantially modify an ADT) and Deployers (entities that use one) of ADTs used to facilitate 'Consequential Decisions' in the following domains:

[Figure: Domains of 'Consequential Decisions' classed as High-Risk applications under Assembly Bill 331]

Requirements for ADTs:

Assessment:

(1) Impact Assessment: Mandates Deployers and Developers to perform an Impact Assessment (IA) on any ADT in use, on or before 1 January 2025.

IAs by Deployers will include:

a. Statement of the ADT's intended purpose
b. Description of the ADT's outputs, and the process behind it
c. Types of data collected by the ADT when used to make consequential decisions
d. Extent to which the deployer's use of the ADT is consistent/varies with the developer's use
e. Analysis of potential adverse impacts based on protected characteristics
f. Safeguards implemented to address foreseeable risks of algorithmic discrimination
g. Description of ADT use by a person
h. Description of how the ADT will be evaluated for validity/ relevance

IAs by Developers will include points (a), (b), (c), (e), (f) and (g) of Deployer requirements.

Both entities are also required to perform additional IAs as soon as feasible after any significant update to an ADT.

Risk Management:

Governance Program: Directs Developers and Deployers of ADTs to establish a Governance Program to measure and govern foreseeable risks of algorithmic discrimination. These include:

a. Organisational details to monitor regulatory compliance
b. Safeguards to govern identified risks
c. Annual review of policies and practices to ensure compliance
d. Reasonable adjustments to technical and administrative safeguards considering changes in technology, associated risks, or the state of technical standards, among others.
Transparency, Reporting, Documentation and Notification:

(1) Artificial Intelligence Policy: Developers and Deployers are required to publish an easily accessible artificial intelligence policy that provides a summary of:

a. Types of ADTs currently in use
b. How foreseeable risks of algorithmic discrimination are being managed

(2) Opt-out: If a consequential decision is made solely using an ADT, Deployers will be required to accommodate a person's request not to be subject to it (where technically feasible).

(3) Developer-specific requirements: Developers are mandated to share the following with Deployers:

a. Statement of intended use
b. Documentation on known limitations of the ADT, including risks of algorithmic discrimination
c. Description of training data
d. An explanation of how the ADT was assessed for validity and explainability before sale or licensing.

(4) Disclosures and Notifications: Deployers are directed to notify all individuals who may be affected by an automated decision, before and during deployment (a minimal payload sketch follows the list below). This notification should consist of:

a. A statement of the ADT’s intended purpose
b. Deployer’s contact information
c. An easy-to-understand description of the ADT's human and automated components, including how they contribute to the final decision-making process.
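In engineering terms, requirement (4) is a notification payload that must be assembled before the tool runs. The sketch below models items (a) to (c) as a simple dataclass; the field names and rendering are our own, as the Bill does not prescribe a format.

```python
from dataclasses import dataclass

@dataclass
class ADTNotification:
    """Pre-deployment notice to an affected individual under AB 331.
    Fields mirror items (a)-(c) above; names are illustrative, not statutory."""
    intended_purpose: str        # a. statement of the ADT's intended purpose
    deployer_contact: str        # b. deployer's contact information
    components_description: str  # c. human and automated components and their roles

    def render(self) -> str:
        """Produce an easy-to-understand plain-text notice."""
        return (
            f"Purpose: {self.intended_purpose}\n"
            f"Contact: {self.deployer_contact}\n"
            f"How the decision is made: {self.components_description}"
        )

# Hypothetical notice for a loan-screening tool
notice = ADTNotification(
    intended_purpose="Screen loan applications",
    deployer_contact="compliance@example.com",
    components_description="A model ranks applications; a loan officer makes the final call.",
)
print(notice.render())
```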

Artificial Intelligence and Data Act (AIDA), Canada

Following in the footsteps of the EU AI Act, the Artificial Intelligence and Data Act (AIDA) envisages a risk-based regulatory approach to enable safety, fairness, and transparency in AI systems developed and used in Canada. The proposed act establishes ‘High-Impact Systems’ to include AI systems that may adversely affect human rights or pose risks of harm. While the specific criteria for identifying systems as High-Impact are yet to be delineated in future regulations, the current text targets AI systems that may cause biased outputs and serious harm, that are created using unlawfully obtained personal data, or that are used to intentionally defraud the Canadian public. When enforced, the legislation will be administered by an Artificial Intelligence and Data Commissioner and will penalise those found to have contravened the AIDA’s provisions.

Requirements for High-Impact Systems:

Assessment:
(1) Qualification: Persons responsible for an AI system must assess whether it qualifies as High-Impact, in accordance with future regulations.

(2) Audits: In cases of potential harm and biased outputs, the Minister of Industry is empowered to order an independent audit.

(3) Cessation: In cases where there is a risk of imminent harm, the Minister is empowered to order the cessation of an AI system, and publicly disclose violations of the Act.
Risk Management:

In line with future regulations, persons responsible for High-Impact systems will be required to:

a. Develop and implement measures to identify, assess and mitigate harms or biased outputs, and
b. Develop mechanisms to ensure compliance
Transparency:

Public Notice: Persons responsible for High-Impact Systems will be required to publish a plain-language description of the system that covers:

a. How the system is intended to be used
b. Types of decisions, predictions, recommendations, and content it intends to generate
c. Mitigation measures
d. Any other information, as prescribed in future regulations
Reporting, Documentation and Notification:

Persons responsible for such systems are required to maintain records of the measures taken on risk assessment, management, and mitigation.

Where applicable, these records should also document the reasons supporting whether or not the AI system qualifies as High-Impact.
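Since AIDA expects both an outcome (High-Impact or not) and the reasons behind it to be kept on record, the assessment can be prototyped as a small record type. This is a sketch under the assumption that future regulations will supply the concrete criteria; all names here are placeholders.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HighImpactAssessment:
    """Record of whether a system qualifies as High-Impact under AIDA.
    The fields are placeholders pending the regulations AIDA anticipates."""
    system_name: str
    is_high_impact: bool
    reasons: list[str] = field(default_factory=list)  # AIDA requires recorded reasons
    assessed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical assessment of a hiring tool
assessment = HighImpactAssessment(
    system_name="resume-screener-v2",
    is_high_impact=True,
    reasons=["automated hiring decisions can adversely affect human rights"],
)
print(assessment)
```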

What’s next

There is a pressing need to develop trustworthy AI systems that embed ethical principles of fairness and harm mitigation from the get-go. With regulatory efforts on AI gaining momentum globally, businesses of all sizes will need to act early and proactively to be compliant.

At Holistic AI, we have pioneered the fields of AI ethics and AI risk management and have carried out over 1,000 risk mitigations. Using an interdisciplinary methodology that combines expertise from computer science, law, policy, philosophy, ethics, and social science, we take a comprehensive approach to AI governance, risk, and compliance, ensuring that we understand both the technology and the context in which it is used.

To find out more about how Holistic AI can help you get compliant with upcoming AI regulations, schedule a demo with us.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
