On 27 November 2023, the California Privacy Protection Agency (CPPA) published draft regulations on the use of automated decision-making technologies (ADTs). Once adopted, these rules will join the CCPA’s growing body of privacy regulations aimed at strengthening accountability, explainability, and transparency in automated decision-making.
While the Agency has not yet initiated the formal rulemaking process for ADTs, the current draft has been released for public consultation and remains subject to change. The rules will be presented to the CPPA’s Board for discussion and deliberation on 8 December, after which the formal rulemaking process will commence.
Under these rules, an ADT has been defined as any “system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision-making.”
The definition also extends to algorithms with individual profiling capabilities. Here, profiling refers to the processing of personal information to evaluate an individual, including their performance at work, economic situation, health, preferences, interests, reliability, behavior, and location and movements.
The draft rules centre on three overarching elements: consumers’ right to pre-use notice of an ADT, their right to opt out of its use, and their right to access information about how a business has used it.
This development strongly signals California’s intention to pioneer and lead the state-level regulatory discourse on artificial intelligence, with multiple initiatives proposed over the course of this year alone. These include draft bills such as Assembly Bill 331, which seeks to prohibit the use of automated decision tools that result in algorithmic discrimination; Assembly Bill 302, which seeks to establish dedicated regulatory oversight of ADTs; and Senate Bill 313, which seeks to regulate the use of AI by state agencies; as well as Governor Newsom’s recent Executive Order laying out a strategic plan for how California will approach the progress and proliferation of generative AI.
California is by no means the only US state taking decisive action to make AI safer and fairer. While much of this activity has targeted HR Tech, other areas such as insurance, online safety, and generative AI are also receiving significant attention, and those developing and deploying AI systems will soon face stringent requirements. Acting early is the best way to prepare and remain compliant. Schedule a call with our experts to find out how Holistic AI can help you navigate both existing and upcoming regulations.
DISCLAIMER: This article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, nor is it a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice regarding any particular situation or employer.