Recent years have seen multiple harms result from the mismanagement of artificial intelligence (AI) systems in human resources (HR) practices. There are several examples of these systems going wrong, from Amazon’s scrapped resume-screening tool that was biased against female applicants to Workday being sued for alleged discrimination based on race, age, and disability.
In light of these risks, several policymakers in the US have proposed legislation targeting automated decisions, with some of the latest action coming from Massachusetts through House Docket 3051, an “Act Preventing a Dystopian Work Environment” (HD 3051). With text almost identical to California’s Workplace Technology Accountability Act, HD 3051 seeks to limit workplace monitoring to only essential job functions, give workers more transparency about how their data is used, and reduce actual and potential harm by requiring data protection and algorithmic impact assessments.
In this blog post, we outline what you need to know about this proposed law, focusing on the requirements for automated decision systems and the rise of transparency in the HR sector.
Key definitions
The Act seeks to regulate several distinct systems and practices, including worker information systems (WIS), the electronic monitoring of workers, and automated decision systems (ADS).
The Bill applies to workplaces in Massachusetts, defined as a “location within Massachusetts at which or from which a worker performs work for an employer”. Employers that operate from a workplace in Massachusetts and collect data about their workers, use electronic monitoring, or use ADS tools to make employment-related decisions about workers are within the scope of the legislation, along with vendors acting on their behalf. Under the proposed law, an employer is any person who directly or indirectly employs or has control over the wages, benefits, compensation, hours, working conditions, or access to work of any worker, including contractors.
HD 3051 outlines several requirements for when worker data is collected, stored, and used by an employer, the first of which is to provide notice. Employers should include in the notice the type of data being collected, the purpose for collecting it, and how it will be used to make decisions. The notice must also be “clear and conspicuous”; it cannot simply state that an ADS has been used.
Within 10 business days of notification to workers, an employer or vendor acting on behalf of an employer must provide notice to the Department of Labor & Workforce Development that an ADS has been used.
Under HD 3051, employers or vendors acting on behalf of an employer that plan to electronically monitor workers must give notice of their planned activity. As above, such notice must be clear and conspicuous, and the Bill sets out further notification requirements specific to electronic monitoring.
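To make these notice obligations concrete, below is a minimal sketch of how an employer’s compliance tooling might record a worker notice and approximate the 10-business-day deadline for notifying the agency. The record fields and the `ads_notice_deadline` helper are illustrative assumptions rather than terms from the Bill, and the deadline calculation skips weekends only, ignoring Massachusetts state holidays.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class WorkerNotice:
    """Illustrative record of a 'clear and conspicuous' notice to workers.

    Field names are assumptions for this sketch, not terms from HD 3051.
    """
    data_types_collected: list[str]  # e.g. ["keystroke logs", "location"]
    collection_purpose: str          # why the data is collected
    decision_uses: list[str]         # how the data feeds employment decisions
    notified_on: date = field(default_factory=date.today)

def ads_notice_deadline(worker_notice_date: date, business_days: int = 10) -> date:
    """Approximate the deadline for notifying the Department of Labor &
    Workforce Development: 10 business days after workers are notified.

    Skips weekends only; state holidays are not handled.
    """
    deadline = worker_notice_date
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return deadline

notice = WorkerNotice(
    data_types_collected=["productivity metrics"],
    collection_purpose="scheduling and performance review",
    decision_uses=["shift allocation"],
    notified_on=date(2023, 5, 1),
)
print(ads_notice_deadline(notice.notified_on))  # 2023-05-15
```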
Employers that develop, procure, use, or implement an ADS or WIS are required to complete an algorithmic impact assessment (AIA) or data protection impact assessment (DPIA), respectively. Under HD 3051, impact assessments must occur before the system is used, or retroactively for systems already in use when the legislation comes into effect, and should be conducted by an independent assessor with relevant experience and an understanding of the system.
AIAs aim to evaluate the potential risks posed by an ADS. These include discrimination against protected classes, violations of legal rights, direct or indirect physical or mental harms, and privacy harms to worker information. Assessors should also identify whether a system could have a chilling effect on workers exercising their legal rights or a negative economic or material impact on workers, whether the system produces errors (false positives and negatives), and whether it has the potential to infringe on the dignity and autonomy of workers.
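Since assessors must check whether a system produces false positives and false negatives, a short worked example may help. The sketch below computes error rates per group from labelled outcomes, which also supports the discrimination checks described above; the function name, record format, and sample data are assumptions for illustration.

```python
from collections import defaultdict

def error_rates(records):
    """Compute false positive and false negative rates per group.

    `records` is an iterable of (group, predicted, actual) tuples, where
    predicted/actual are booleans, e.g. whether a candidate was screened out.
    Returns {group: (false_positive_rate, false_negative_rate)}.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # missed a true case
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # flagged a non-case
    return {
        g: (c["fp"] / c["neg"] if c["neg"] else 0.0,
            c["fn"] / c["pos"] if c["pos"] else 0.0)
        for g, c in counts.items()
    }

# Hypothetical sample: large gaps in error rates between groups would be
# a red flag for the discrimination risks an AIA must evaluate.
sample = [("A", True, True), ("A", True, False), ("B", False, True), ("B", False, False)]
print(error_rates(sample))  # {'A': (1.0, 0.0), 'B': (0.0, 1.0)}
```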
DPIAs evaluate the potential risks of a WIS. As with an AIA, these include discrimination against protected classes, privacy harms such as invasive or offensive surveillance, infringement upon the dignity and autonomy of workers, and negative economic impacts. An employer is required to give a description of the methodology used to evaluate the identified risks and of the recommended mitigation measures.
In both cases, the assessment must be conducted by an independent assessor with relevant experience and must document the risks identified, the methodology used to evaluate them, and the measures recommended to mitigate them.
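As a rough illustration of what an assessment artifact might capture, this sketch models the risk categories enumerated above alongside the methodology and mitigation descriptions that HD 3051 requires. The structure and field names are assumptions for illustration, not a format prescribed by the Bill.

```python
from dataclasses import dataclass, field

# Risk categories paraphrased from the Bill's AIA/DPIA provisions.
RISK_CATEGORIES = [
    "discrimination against protected classes",
    "violation of legal rights",
    "physical or mental harm",
    "privacy harm (e.g. invasive surveillance)",
    "chilling effect on exercising legal rights",
    "negative economic or material impact",
    "infringement of worker dignity and autonomy",
    "system errors (false positives and negatives)",
]

@dataclass
class ImpactAssessment:
    """Illustrative record of an AIA or DPIA; field names are assumptions."""
    system_name: str
    assessor: str                 # independent assessor with relevant experience
    methodology: str              # how the identified risks were evaluated
    findings: dict[str, str] = field(default_factory=dict)     # category -> finding
    mitigations: dict[str, str] = field(default_factory=dict)  # category -> measure

    def unaddressed_risks(self) -> list[str]:
        """Categories with a documented finding but no mitigation measure."""
        return [c for c in self.findings if c not in self.mitigations]
```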
Ensuring that workers can access the data held about them, and can request that it be updated, is important for upholding transparency. Employers that collect, store, analyze, interpret, or use worker data should provide information to workers, upon request and at no additional cost, in an accessible format. Under the proposed legislation, workers can request information about the types of data an employer has about them, the source of the data, whether it is used as an input or output in an ADS, and how it relates to the job function in question. This information should be accurate and kept up to date.
Workers also have the right to correct any inaccurate worker data that an employer maintains.
HD 3051 would require employers to provide, upon receipt of a verifiable request, the categories and pieces of worker data retained, the purpose and sources of data collection, whether the data is related to the worker’s essential job functions or employment decisions, whether the data is involved in an automated decision system, and the names of any third parties from whom the data is obtained or to whom it is disclosed.
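To illustrate how an employer might assemble these disclosures, here is a minimal sketch of a response to a verifiable request. The in-memory store, field names, and function are hypothetical; a real system would query the employer’s HR records.

```python
def build_access_response(worker_id: str, store: dict) -> dict:
    """Assemble the disclosures HD 3051 would require on a verifiable request.

    `store` is an illustrative mapping of worker IDs to an employer's
    records; keys and layout are assumptions for this sketch.
    """
    record = store[worker_id]
    return {
        "data_categories": record["categories"],        # categories and pieces retained
        "collection_purposes": record["purposes"],      # why the data was collected
        "sources": record["sources"],                   # where the data came from
        "job_function_related": record["job_related"],  # tie to essential job functions
        "ads_involvement": record["ads_use"],           # input/output of any ADS
        "third_parties": record["third_parties"],       # recipients and providers
    }

store = {
    "w-123": {
        "categories": ["contact details", "productivity metrics"],
        "purposes": ["payroll", "performance review"],
        "sources": ["HRIS", "monitoring software"],
        "job_related": True,
        "ads_use": "input to scheduling ADS",
        "third_parties": ["payroll vendor"],
    }
}
print(build_access_response("w-123", store))
```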
The use of AI in hiring and firing decisions is becoming more prevalent. A recent survey of 300 HR leaders at US companies found that 98% of respondents planned to use algorithms to make layoff decisions in 2023, a trend that raises concerns about the potential for opaque metrics to inadvertently harm minority groups. As AI becomes increasingly pervasive in the workplace, the need for transparency has become more pronounced. Without transparency, there is a risk of creating dystopian environments where AI systems make decisions that do not account for individual circumstances. As such, there is a growing push for greater transparency in AI to ensure that it is used ethically, aligned with organizational goals, and does not cause unintended harm to individuals or groups.
AI regulation is ramping up. To reduce the risk of liability, businesses must manage AI risks and implement safeguards such as impact assessments. Early action is key to complying with legal requirements and ensuring the responsible use of algorithms. To find out more about how Holistic AI can help you with this, get in touch at we@holisticai.com.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.