Regulating AI: A Review of the US Algorithmic Accountability Act

Written by Holistic AI
June 6th, 2022

Introduction

Democratic US lawmakers have proposed legislation intended to control, minimise, and mitigate the risks of automated decision-making systems: processes that use algorithms to assist or replace human intellectual labour, or what we would colloquially call ‘artificial intelligence’ (Russell & Norvig, 2002).

Here, we provide a briefing on the Algorithmic Accountability Act (Clarke et al., 2022). Our key takeaways are as follows:

  • Impact Assessments. The bill requires companies to submit impact assessments of artificial intelligence (AI) systems to the Federal Trade Commission (FTC).
    • We argue that impact assessments are a useful and adaptive tool for regulation.
    • We suggest that the FTC set minimum thresholds for unacceptable risk.
    • We suggest that the FTC formulate standardised formats for impact reports to make assessment scalable.
  • Consultation. The bill requires stakeholder consultation as part of the impact assessment process.
    • We argue that consultation is an important tool for scrutiny, and brings together a diversity of experiences and perspectives for maximal epistemic effectiveness.
    • We suggest that, for consultation to work, the legislation should include Explainability as a requirement.
  • Scope. The legislation covers applications of AI across sectors, but differentiates between service providers based on their financial revenue and impact.
    • We argue that ‘targeting’ larger companies strikes a useful balance between innovation and responsibility.
    • We argue that cross-sector regulation ensures greater consistency and protection against harm.

The intended audience of this briefing is members of the tech industry and policymakers interested in technology. Its purpose is to provide a summary overview of the bill’s key legislative mechanisms, and to analyse which features require improvement and which could be adopted elsewhere.

Background: Concerns About AI Impact

The introduction of artificial intelligence has raised concerns, both existential (Bostrom, 2014) and more immediate. Several high-profile scandals have illustrated the economic, social, and political risks of AI:

  • Financial risk. In 2012, Knight Capital suffered a $440 million loss due to a faulty algorithm in its trading system. The error precipitated the collapse and eventual sale of the one-time leader in the US equities market.
  • Unfairness risk. Amazon and HireVue have both had to shelve expensive AI-based recruitment tools amid accusations that the algorithms were biased against women and members of minority groups.
  • Political risk. The Cambridge Analytica scandal raised the concern that the algorithms structuring social media sites were susceptible to manipulation and created ‘echo chambers’ that radicalized individuals’ political views.

The inherent risks of AI systems have inspired legislators in numerous jurisdictions to propose regulation of the burgeoning technology: the European Union is currently considering its Artificial Intelligence Act, which would prohibit applications of AI deemed to pose unacceptable risk; the United Kingdom has issued its National AI Strategy, outlining plans to create an ecosystem of trust in AI; and New York City has passed a local law requiring bias audits of automated tools used for recruitment and employment decisions.

It is in this context that we must consider the proposed Algorithmic Accountability Act: not as a US replication of the aforementioned legislation, but as an intervention with its own unique approach to a global phenomenon.

Overview: Algorithmic Accountability Act

The key intervention in the legislation is to require technology service providers within its scope (see ‘Scope of Legislation’ below) to produce algorithmic impact assessment (AIA) reports detailing the risks and benefits of their automated systems. The legislation will mandate the Federal Trade Commission (FTC) to issue regulations requiring providers to:

  • Submit their AIA reports to the FTC.
  • Comply with the reporting format stipulated by the FTC.
  • Meaningfully consult with stakeholders.
  • Implement strategies to mitigate and reduce harms occasioned by the system.
  • Disclose to any commercial partners that contribute to the system that the provider is covered by the legislation.
  • Train staff concerning the potential risks occasioned by the system.

Scope of Legislation

The legislation limits the scope of the regulations to so-called ‘covered’ providers (a minimal sketch of this coverage logic follows the list below):

  • Minimum requirement. A company can only be covered if it deploys augmented critical decision processes (ACDPs): any process that includes automation and relates to ‘education and vocational training, employment, essential utilities, family planning, financial services, healthcare, housing or lodging, legal services’, or any other service that the FTC deems to be critical.
  • High threshold. Any provider with a gross revenue above $50 million that employs any ACDP is covered by the legislation.
  • Low threshold. Any provider with a gross revenue above $5 million that employs an ACDP that uses an automated decision system (ADS), i.e., a decision-making system that uses computation to inform decisions, is also covered.
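
To make this concrete, the sketch below paraphrases the thresholds above as a simple decision rule. This is a minimal illustration of our own summary, not the statutory test; the function name, parameters, and figures are assumptions mirroring the list above rather than the bill’s full definitions.

```python
# Illustrative paraphrase of the coverage thresholds summarised above.
# Not the statutory test; names and figures mirror our summary of the bill.

HIGH_THRESHOLD = 50_000_000  # gross revenue above which any ACDP triggers coverage
LOW_THRESHOLD = 5_000_000    # gross revenue above which an ACDP using an ADS triggers coverage


def is_covered(gross_revenue: float, deploys_acdp: bool, acdp_uses_ads: bool) -> bool:
    """Return True if a provider would fall within the scope of the legislation."""
    if not deploys_acdp:  # minimum requirement: the provider must deploy an ACDP
        return False
    if gross_revenue > HIGH_THRESHOLD:  # high threshold: any ACDP suffices
        return True
    if gross_revenue > LOW_THRESHOLD and acdp_uses_ads:  # low threshold: ACDP with an ADS
        return True
    return False


# Example: a $10M provider whose ACDP relies on an ADS would be covered.
print(is_covered(10_000_000, deploys_acdp=True, acdp_uses_ads=True))  # True
```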

Assessment Requirements

The core of the bill is its requirement to submit AIA reports. In essence, the report is a risk-cost-benefit analysis of the role of automated systems in decision-making. The Act will require two reports: an internal report and an external report submitted to the FTC.

The external report must contain the details of the following:

  • The critical decision for which the ACDP is used;
  • The purpose and need for the ACDP (benchmarked against the previous system);
  • A description of the data inputs and their sources;
  • The performance of the ACDP;
  • The consultations undertaken by the provider concerning the ACDP;
  • The degree of explainability and transparency of the ACDP;
  • An assessment of the risks of harm caused by the ACDP;
  • An explanation of any failures to comply with the reporting requirements.

The internal report, by contrast, will require more extensive documentation, including:

  • Detailed reporting of stakeholder engagement;
  • Tests of the privacy risks in the system;
  • Detailed reporting of the data used for development, tests, maintenance, and updates;
  • Evaluation of the rights of consumers in relation to the developer, including the right to contest decisions.

The internal report is more detailed than the summary report to the FTC, presumably in part as a means of allowing companies to protect certain trade secrets.
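
To illustrate how such requirements could be standardised, the sketch below captures the external report’s required contents as a machine-readable structure. This schema is hypothetical and of our own devising; neither the bill nor the FTC prescribes any such format, and all field names are assumptions paraphrasing the list above.

```python
from dataclasses import dataclass, field

# Hypothetical machine-readable structure for the external AIA report.
# Field names paraphrase the reporting requirements listed above; the FTC
# has not prescribed any such format.


@dataclass
class ExternalAIAReport:
    critical_decision: str    # the critical decision the ACDP is used for
    purpose_and_need: str     # benchmarked against the previous system
    data_inputs: list[str]    # data inputs and their sources
    performance: str          # performance of the ACDP
    consultations: list[str]  # stakeholder consultations undertaken
    explainability: str       # degree of explainability and transparency
    risk_assessment: str      # assessment of the risks of harm
    compliance_gaps: list[str] = field(default_factory=list)  # failures to comply, if any
```

A standardised structure of this kind would also make large volumes of submissions amenable to automated triage, a point we return to below.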

Reporting and Enforcement Mechanisms

The legislation introduces three key regulatory mechanisms:

  • Registry and Reporting. The legislation requires the FTC to develop a registry of ACDP and ADS reports, and to issue high-level reports identifying key problems, metrics, and lessons learned.
  • Penalties. The legislation stipulates that violations of its requirements should be treated as unfair and deceptive practices, and empowers states to bring civil actions against companies.
  • Bureau of Technology. The legislation establishes a subsidiary advisory bureau to guide the FTC’s tech policy.

‘Targeting’ Big Tech

The first noteworthy feature of the legislation is that it limits the scope of regulation according to providers’ financial revenue. The effect of this is to limit much of the regulatory burden to larger companies and to companies deploying more sensitive automated systems. In other words, Big Tech (the dominant and most prestigious technology providers) will fall within the scope of the legislation, whereas many or most Small and Medium Enterprises (SMEs) are likely to be excluded.

This is a useful regulatory mechanism for a number of reasons:

  • It allows regulators to focus their limited resources on those companies with the highest impact, and therefore the highest risk.
  • By leaving SMEs outside of its scope, the legislation places no regulatory burden on them, thus limiting the barriers to entry for new companies that can drive innovation and competition.

Recommendations: we think this is a useful regulatory mechanism for balancing responsibility to users with the encouragement of disruptive innovation.

The Value of Impact Assessments

The focus of the legislation is on the submission of algorithmic impact assessments (AIAs) by service providers to the FTC. As we suggest elsewhere (Trengove et al., 2022), this approach can be contrasted with categorising the risk levels of AI applications according to their use-type (i.e., categorically prohibiting the application of AI towards predefined purposes). The AIA mechanism has the following advantages:

  • Innovation. AIAs give service providers the scope to innovate and expand the application of AI, because they do not restrict entire categories of use.
  • Adaptive Mitigation. The AIA, as framed in the bill, also offers service providers the opportunity to choose among different mitigation techniques, allowing them to adapt to their needs.
  • High-Level Analysis. The AIA registry will also give the FTC the data to perform high-level analyses of trends in AI risk.

However, the approach has the following disadvantages:

  • Thresholds. The AIAs are intended to minimise risks, but they do not set thresholds for the amount of risk that is unacceptable. In this sense, the AIA mechanism is less likely to inhibit risky applications of AI, and there is therefore a risk that the Act will be ineffective.
  • Regulatory Capacity. Large service providers may operate hundreds or thousands of algorithms. If they are to perform AIAs for each one, it is difficult to see how the FTC will have the capacity to scrutinise every report thoroughly.

Recommendations: we recommend that the FTC guidelines (a) provide some indication of unacceptable risks, and (b) focus on standardising the reporting format in a way that is conducive to scalable inspection.
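
As a minimal illustration of both recommendations, the sketch below triages standardised submissions against a regulator-set risk threshold. The ‘risk_score’ field and the 0.7 threshold are assumptions of ours; the bill specifies neither, which is precisely the gap recommendation (a) addresses.

```python
# Hypothetical automated triage of standardised AIA submissions.
# The 'risk_score' field and the 0.7 threshold are illustrative assumptions;
# the bill currently sets no such threshold.

UNACCEPTABLE_RISK = 0.7  # assumed regulator-defined threshold


def flag_unacceptable(reports: list[dict]) -> list[str]:
    """Return the providers whose reports exceed the risk threshold."""
    return [r["provider"] for r in reports if r["risk_score"] > UNACCEPTABLE_RISK]


submissions = [
    {"provider": "ProviderA", "risk_score": 0.4},
    {"provider": "ProviderB", "risk_score": 0.9},
]
print(flag_unacceptable(submissions))  # ['ProviderB']
```

With a standardised format, thousands of reports could be screened automatically in this way before human review, easing the capacity problem identified above.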

Consultation and Explainability

One of the promising features of the legislation is that it will require consultation with external and internal stakeholders as part of the AIA. This has the potential to reduce risk by considering a wide range of perspectives, and by opening up AI systems to wider scrutiny.

However, if consultation is to be an effective tool for risk management and mitigation, it is important that those affected by the risks of AI adequately understand how the AI systems work. This speaks to the more general problem of Explainability in AI: automated decision-making is often too opaque for those affected to understand, challenge, or appeal the decisions made about them.

Recommendations: although the bill already includes explainability in its list of considerations for the AIA, we suggest that explainability be included in the assessment of the consultation process to ensure that consultation is productive and fair.


Cross-Sector Rules

The bill intervenes at the federal level, mandating a set of rules for applications of AI across all sectors, as opposed to devolving this authority to industry regulators.

This strategy has a number of advantages:

  • Guarantees. This approach guarantees that all applications of AI will be subject to scrutiny, ensuring that service providers cannot ‘shop around’ for lighter regulation. This also provides users with a certain level of assurance, encouraging trust in AI.
  • Comprehensive But Adaptive. The risk with broad cross-sector regulation is often that it will not be suitably flexible to meet the needs of different industries. However, AIAs are sufficiently open-ended that they can be adapted to meet different circumstances.
  • National Data. By mandating all service providers to submit AIAs, the FTC will be able to collect and analyse a much broader (and more useful) range of data.

Recommendations: we think this is a useful regulatory strategy, which can ideally be supplemented by industry-specific regulations.

Concluding Remarks

The bill is yet to pass through Congress and may be subject to amendment, but it is important to assess the key regulatory mechanisms it deploys, as we have done here.

In broad terms, we think that the main regulatory mechanisms in the bill are adaptive and strike a good balance between managing risk and encouraging innovation. However, we argue that it is important to make some amendments to the mechanisms to ensure their effectiveness. Our analysis makes the following recommendations:

  • Thresholds. The FTC should provide some indication of unacceptable risks, setting a minimum threshold that AIAs must meet.
  • Standardised reports. The FTC should standardise the reporting format in a way that is conducive to scalable inspection.
  • Explainable consultation. Explainability should be included in the assessment of the consultation process to ensure that consultations are productive and fair.

Moreover, we think the legislation has useful lessons for other legislators:

  • ‘Targeting’. Distinguishing between covered and non-covered companies according to revenue is a useful mechanism for balancing innovation and responsibility.
  • Consultation. Including consultation in the impact assessment process opens the AI to broader scrutiny and brings a wider range of risks into the assessment.
  • Cross-Sector Regulation. The bill’s approach of mandating rules for all applications of AI, supplemented with industry-specific rules, provides an effective guarantee of minimum safety.


Reference List
