
EEOC’s Guidance on AI Systems under Title VII of the Civil Rights Act of 1964

Published on
May 25, 2023

On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC), the federal agency charged with administering federal civil rights laws (including Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA)), released a "technical assistance document" titled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964."

This publication is the latest of the EEOC’s initiatives on AI: the agency has previously held a public hearing on the use of automated systems, including artificial intelligence, in employment decisions; released a joint statement with the Consumer Financial Protection Bureau (CFPB), the Department of Justice’s Civil Rights Division (DOJ), and the Federal Trade Commission (FTC) reiterating how existing laws apply to AI; and emphasized AI as an area of focus in its Draft Strategic Enforcement Plan for 2023-2027. In this post, we provide a summary of the technical assistance document.

What is Title VII of the 1964 Civil Rights Act?

Title VII prohibits employment discrimination based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin, unless the practice in question is justified by job-relatedness and business necessity. In 1978, the EEOC adopted the Uniform Guidelines on Employee Selection Procedures under Title VII to help employers determine whether their tests and selection procedures are lawful for the purposes of Title VII disparate impact analysis.

Key Definitions

The Guidance sets out some key definitions including:

  • Software - information technology programs or procedures that provide instructions to a computer on how to perform a given task or function.
  • Application software (also known as an “application” or “app”) - a type of software designed to perform or to help the user perform a specific task or tasks.
  • Algorithm - a set of instructions that can be followed by a computer to accomplish some end, such as to process data to evaluate, rate, and make other decisions about job applicants and employees.
  • Artificial Intelligence - a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments" (National Artificial Intelligence Initiative Act of 2020, section 5002(3)).
  • AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.

The implications for employers under Title VII

The technical assistance document includes several key takeaways for employers using selection tools that incorporate or are driven by AI under Title VII:

  • Liability: Employers are accountable for disproportionate impacts caused by AI selection tools, regardless of whether an external vendor developed them. The EEOC advises employers to evaluate a vendor's measures to identify potential adverse effects before outsourcing the administration of such tools. The guidance emphasizes that an employer may still be held liable even if a vendor's assessment is incorrect (such as wrongly indicating that the tool does not have an adverse impact when it actually does).
  • Self-Auditing: Employers are advised to regularly self-audit selection tools to assess any adverse impact on protected groups. If adverse impact is found, the EEOC encourages employers to modify the tool to minimize such effects. The Guidance also notes that an employer’s failure to adopt a less discriminatory algorithm that was available during the development process may give rise to liability.
  • The four-fifths rule is not a safe harbor: The four-fifths rule is a rule of thumb for determining adverse impact, or differential hiring rates, introduced by the EEOC and outlined in the Uniform Guidelines on Employee Selection Procedures. Under the rule, the selection rate of one group should be no less than four-fifths (80%) of the selection rate of the group with the highest rate. However, satisfying this rule does not guarantee the absence of disparate impact. Smaller differences in selection rates may still indicate an adverse impact where the tool is used for a substantial number of selections or where the employer discourages certain applicants from applying. Even if a tool meets the four-fifths test, it can still be deemed to have an unlawful adverse impact if it leads to a statistically significant difference in selection rates.
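To make the rule of thumb concrete, here is a minimal sketch of the four-fifths calculation using hypothetical applicant counts. The numbers and function names are illustrative only and are not drawn from the EEOC guidance; this is not legal advice.

```python
# Illustrative sketch of the four-fifths (80%) rule of thumb.
# All counts below are hypothetical.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return hired / applicants

def four_fifths_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# Hypothetical counts: Group A, 48 of 80 applicants hired; Group B, 12 of 40.
rate_a = selection_rate(48, 80)             # 0.60
rate_b = selection_rate(12, 40)             # 0.30
ratio = four_fifths_ratio(rate_a, rate_b)   # 0.30 / 0.60 = 0.50

# Under the rule of thumb, a ratio below 0.8 flags potential adverse impact.
if ratio < 0.8:
    print(f"Ratio {ratio:.2f} is below 0.8: potential adverse impact")
```

As the Guidance stresses, passing this check is not a safe harbor: a tool that clears the 80% threshold can still produce a statistically significant difference in selection rates and therefore an unlawful adverse impact.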

Compliance and responsibility: EEOC's stance on AI and existing legislation

While the EEOC's technical assistance document offers non-binding guidance, it is still a clear sign that companies should be mindful of their responsibility. This messaging from the EEOC is not new. Instead, it reaffirms the agency's position that AI systems and tools must adhere to laws and regulations concerning equal employment opportunity. The EEOC, and other bodies, are increasingly stressing that AI-driven and automated decisions are not exempt from current rules and regulations. Although a wave of legislation specifically targeting AI is on the way, compliance with existing laws remains essential too.

To find out how Holistic AI can help you stay compliant, get in touch at we@holisticai.com.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
