US Federal Agencies Release a Joint Statement on Automated Systems

Authored by
Nikitha Anand
Policy Analyst at Holistic AI
Published on
Apr 8, 2024

On 3 April 2024, several US federal agencies announced a joint statement on the Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems. Signed by leaders from the EEOC, the Consumer Financial Protection Bureau, the Department of Justice, the Federal Trade Commission, the Department of Education, the Department of Health and Human Services, the Department of Homeland Security, the Department of Housing and Urban Development, and the Department of Labor, the joint statement reiterates the intention of federal agencies to enforce legal protections that could be violated by the use of automated systems. Here, automated systems are defined as ‘software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.’

In the statement, the agencies express their commitment to monitoring the evolution of automated tools while simultaneously promoting responsible innovation. They also reinforce the applicability of existing laws to automated systems and their responsibility for ensuring that the development of such systems happens in accordance with these laws.

This statement is not the first attempt by the EEOC to target discrimination and bias in automated systems. The agency released a similar statement last year in conjunction with the Consumer Financial Protection Bureau, the Department of Justice’s Civil Rights Division, and the Federal Trade Commission. The larger number of agencies signing this year’s statement reflects the federal government’s growing attention to regulating automated systems, as well as its commitment to enforcing the existing laws that apply to such technologies.

Federal Agency Enforcement Actions Against Automated Systems

All of the agencies involved enforce civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections under legislation such as the Civil Rights Act, the Fair Housing Act, the Americans with Disabilities Act, the Fair Credit Reporting Act, and other federal laws, all of which apply to AI. Indeed, the agencies have already shown how their enforcement authority applies to AI and automated systems:

  • The EEOC’s lawsuit against iTutorGroup, which resulted in a $365,000 settlement over algorithm-driven age discrimination, highlights its crackdown on automated tools. The agency has also issued two technical documents: one explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees, and the other explaining how the use of such tools may lead to disparate impact under Title VII of the Civil Rights Act of 1964.
  • The Consumer Financial Protection Bureau (CFPB) is prioritizing protecting consumers from black-box credit models, publishing two circulars that confirm that federal consumer financial laws, including adverse action requirements, apply regardless of the technology being used. The complexity or opaqueness of the system used is not a defence for violating these laws, and creditors must explain their decisions by providing accurate and specific reasons for the adverse action (a simplified sketch of how such reasons might be derived follows this list).
  • The Department of Justice’s (DOJ) Civil Rights Division filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services. The Division’s Consumer Protection Branch is also leading efforts to investigate and prosecute crimes involving the use of generative AI, complementing existing initiatives targeting elder fraud, romance scams, unlawful activities by payment processors, and unlawful robocalls.
  • The Federal Trade Commission (FTC) has banned the company Rite Aid from using AI-based facial recognition technology for surveillance purposes after the company improperly deployed such technology. It has also required firms to destroy algorithms or other work product that were trained on data that should not have been collected.
  • The Department of Education’s Office for Civil Rights enforces several federal civil rights laws and investigates allegations concerning the discriminatory use of automated systems in educational technologies.
  • The Department of Health and Human Services is conducting several pieces of AI-related regulatory work. It recently finalized a rule interpreting Section 1557 of the Affordable Care Act to prevent AI-powered algorithms from contributing to bias and discrimination in healthcare, published the final HTI-1 rule on algorithmic transparency in electronic health records, and, through the FDA, issued 2022 guidance recommending that some AI-powered clinical decision support tools be regulated as medical devices.
  • The Department of Homeland Security (DHS) released its Policy Statement 139-06, which establishes the foundation for the agency’s responsible use of AI with a clear set of principles.
  • The Department of Housing and Urban Development (HUD) is working to prevent automated technologies from being used to disproportionately deny access to housing. The agency has released guidance advising housing providers to avoid third-party screening companies that utilize algorithms that may contain racial or other prohibited bias in their design, have not been shown to reliably predict risk, may produce inaccurate information about the applicant, or make the decision for the housing provider.
  • The Department of Labor’s (DOL) Office of Federal Contract Compliance Programs will analyse federal contractors’ use of AI-based selection procedures and has updated its compliance review process to require documentation that better identifies discrimination related to AI and automated systems in recruitment, screening, and hiring by federal contractors.
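
To make the CFPB’s adverse action requirement concrete, below is a minimal sketch of how a creditor using a simple, interpretable scoring model might derive specific, human-readable reasons for a denial. The feature names, weights, threshold, and reason wording are all invented for illustration; real adverse action methodologies are considerably more involved and must reflect the model actually in use.

```python
# Hypothetical illustration only: deriving specific adverse-action reasons
# from a simple, interpretable credit model. Every feature, weight, threshold,
# and reason string below is invented and does not reflect any real system.

FEATURE_WEIGHTS = {
    "payment_history_score": 0.45,
    "credit_utilization": -0.30,   # higher utilization lowers the score
    "account_age_years": 0.15,
    "recent_inquiries": -0.10,     # more inquiries lower the score
}
APPROVAL_THRESHOLD = 0.5

REASON_TEXT = {
    "payment_history_score": "History of late or missed payments",
    "credit_utilization": "Proportion of balances to credit limits is too high",
    "account_age_years": "Length of credit history is too short",
    "recent_inquiries": "Too many recent credit inquiries",
}

def score(applicant: dict) -> float:
    """Weighted sum of normalized applicant features (each in [0, 1])."""
    return sum(FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS)

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Map the features that contributed most negatively to the score to
    human-readable reasons. For simplicity this ranks raw contributions;
    production methods typically compare against a reference profile."""
    contributions = {f: FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS}
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [REASON_TEXT[f] for f in worst]

applicant = {
    "payment_history_score": 0.4,
    "credit_utilization": 0.9,
    "account_age_years": 0.2,
    "recent_inquiries": 0.6,
}

if score(applicant) < APPROVAL_THRESHOLD:
    print("Denied. Principal reasons:")
    for reason in adverse_action_reasons(applicant):
        print(f" - {reason}")
```

The key point, echoed in the CFPB’s circulars, is that the stated reasons must accurately reflect the model that produced the decision; the complexity of that model is not a defence for vague or boilerplate notices.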

How Can AI and Automated Systems Violate Non-Discrimination Laws?

The main aim of the statement is to reiterate that AI and automated systems are covered under existing laws, and that a technology being a ‘black box’ does not create a loophole for avoiding compliance. The statement identifies several sources of unlawful discrimination, or bias, arising from the use of automated systems:

  • Training data – automated systems trained on unrepresentative, imbalanced, biased, or erroneous data can skew outcomes and lead to discriminatory results, particularly if features act as proxies for protected classes (a common statistical screen for such disparate impact is sketched after this list)
  • Lack of transparency – automated systems are often black boxes, making it difficult for developers and other entities to assess whether a system is fair, since its inner workings are unknown
  • Design and use – developers may fail to consider the social context in which a technical system will be used, meaning that systems can be designed and developed on the basis of flawed assumptions about users and societal impact
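
As an illustration of how skewed outcomes might be surfaced in practice, below is a minimal sketch of the ‘four-fifths rule’, a conventional screen for disparate impact under Title VII guidance, in which a group’s selection rate falling below 80% of the highest group’s rate is treated as a red flag. The group labels and selection counts are invented for this example.

```python
# Minimal sketch of the "four-fifths rule", a conventional screen for
# disparate impact in selection outcomes. All counts below are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(group_rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.
    Ratios below 0.8 are a conventional red flag for disparate impact."""
    top = max(group_rates.values())
    return {group: rate / top for group, rate in group_rates.items()}

# Hypothetical outcomes of an automated screening tool
rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}

for group, ratio in impact_ratios(rates).items():
    flag = "potential disparate impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

Note that the four-fifths rule is only an initial screen: a ratio below 0.8 does not itself establish unlawful discrimination, and regulators also weigh statistical significance and practical context.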

Prioritize compliance

The joint statement highlights the importance of compliance with both existing laws and AI-specific laws when using automated systems. Compliance is vital to uphold trust and to innovate with AI safely. To find out how Holistic AI can help you make your algorithms legally compliant, get in touch at we@holisticai.com.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
