If 2023 wasn’t busy enough for AI regulation, don’t worry. Within the first 10 days of 2024, many of the largest states in the United States have already pushed forward a range of legislation.
These bills build on the December agreement on the EU AI Act, now established as a global benchmark, and on the enforcement of groundbreaking laws like New York City's Local Law 144 and Colorado’s SB-169, which have underscored the urgency of robust AI governance and risk tracking.
Though these bills have only just been set in motion, they would constitute significant changes to the way AI is regulated in many of the largest economies within the US. There is, of course, a significant chance these bills will change as they progress through their respective legislatures. Even so, they provide a solid signal of how many of the US’s largest states plan to regulate AI and spell out some steps that organizations can take today to be prepared.
Let’s look at how US regulations are proceeding at the start of 2024, as well as some of the specifics put before legislatures in New York, California, and Florida.
The US Approach to AI Regulation: A State-Level Push with a Vertical Focus
In the US, the drive for AI regulation is increasingly evident at the state level, signaling a significant shift in how AI is governed:
• New York's Legislative Efforts: The state is advancing bills such as the AEDT bill A00567 and Senate versions of Assembly bills A08158 and A08098, showcasing a proactive stance on AI governance.
• California's AI Bills: California has introduced five new AI bills, focusing on:
- Public contracts: artificial intelligence services: safety, privacy, and non-discrimination standards (SB892) – introduced on January 3, 2024
- California Artificial Intelligence Research Hub (SB893) – introduced on January 3, 2024
- Artificial Intelligence Accountability Act (SB896) – introduced on January 4, 2024
- Autonomous vehicles (AB1777) – introduced on January 3, 2024
- Artificial intelligence: technical open standards and content credentials (AB1791) – introduced on January 3, 2024
• Florida's Transparency Focus: Florida's proposed laws emphasize transparency in AI, with five initiatives covering:
- Autonomous Vehicles (S1580) – Filed January 5, 2024 with an effective date of July 1, 2024
- AI Transparency in Government Technology (S1680) – Filed January 5, 2024 with an effective date of July 1, 2024
- Transparency in Social Media Act (S1448) – Filed January 5, 2024 with an effective date of July 1, 2024
- Computer science education (S1344) – Filed January 4, 2024 with an effective date of July 1, 2024
- Public Records/Artificial Intelligence Transparency Violations (S1682) – Filed January 5, 2024
Combined, these state-level actions move the US much closer to alignment with the EU's comprehensive approach to AI regulation. In many ways, however, the diversity of the state-level regulatory landscape makes it harder for large organizations to stay on top of all requirements. The key distinction we’re seeing is between vertical and horizontal AI regulation.
Vertical vs. Horizontal Legislation in AI Regulation
While the EU has adopted a horizontal approach to AI legislation, addressing multiple use cases under a unified framework, the US has predominantly pursued a vertical strategy:
• Vertical Legislation in the US: This approach targets specific AI applications and sectors. Examples include:
- New York City's Local Law 144 for automated employment decision tools
- Colorado’s SB-169 for life insurance underwriting
- Bills in California and Florida addressing specific facets or applications of AI
- Federal agencies such as the Consumer Financial Protection Bureau and the Federal Trade Commission, which have begun aggressively pursuing AI uses that violate existing laws among the organizations they are tasked with monitoring.
• Horizontal Legislation in the EU: The EU AI Act is a prime example of horizontal legislation, setting broad standards applicable across various AI applications and industries. Additional horizontal legislation active or pending in the EU includes:
- The Digital Markets Act is a comprehensive law regulating the work of large online platforms across the EU.
- The Digital Services Act regulates intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores, and other service platforms, including their potential for AI misuse.
- GDPR applies widely wherever consumer data is used in the EU, including in AI systems.
Implications for Businesses and Policymakers
This divergence in approaches between the US and EU presents unique challenges and opportunities for businesses and policymakers:
- Adapting to Diverse Regulatory Landscapes: Companies operating internationally must navigate a patchwork of regulations, requiring adaptable and flexible AI governance strategies.
- Opportunity for Tailored Solutions: The vertical approach in the US allows for more specialized regulatory responses, catering to the unique needs of different AI applications.
California, Florida, and New York: Leading States in 2024 US AI Regulation
As we delve into the specifics of state-level AI regulation, California, New York, and Florida emerge as frontrunners, each introducing groundbreaking legislation that sets a precedent for other states to follow.
California, home to more AI companies than anywhere else and itself the world's fifth-largest economy, is particularly noteworthy in its potential regulation. While past AI bills in the state have failed, the continued push to vote on regulation suggests future regulation is likely.
The state's legislative efforts are not just about regulating AI but also about fostering an environment where innovation thrives alongside ethical considerations.
California's Legislative Focus
The primary focus areas of California’s proposed 2024 bills include generative AI and the public sector.
Even so, California's approach to AI regulation is multifaceted, balancing the need for innovation with ethical oversight.
- Artificial Intelligence Accountability Act (SB896) – introduced on January 4, 2024 and referred to the Committee on Rules, with action expected on or after February 3, 2024. Requires the Government Operations Agency, the Department of Technology, and the Office of Data and Innovation to produce a State of California Benefits and Risks of Generative Artificial Intelligence Report, as well as a joint risk analysis by the Director of Emergency Services, the California Cybersecurity Integration Center, and the State Threat Assessment Center on potential threats posed by generative AI to California’s critical energy infrastructure. State agencies using generative AI in their communications would also be required to disclose this, and the risks of such technology would have to be evaluated before it is adopted.
- Autonomous vehicles (AB1777) – introduced on January 3, 2024 and tentatively scheduled to be heard in committee on February 3, 2024. Seeks to express the intention of the legislature to enact legislation concerning autonomous vehicles, including requiring them to comply with all traffic laws. To hold permit holders accountable, autonomous vehicles violating traffic laws would be assessed fines and points in the same way as human drivers, and the permittee would be required to pay all fines.
- Public contracts: artificial intelligence services: safety, privacy, and non-discrimination standards (SB892) – introduced on January 3, 2024 and tentatively scheduled to be heard in committee on February 3, 2024. Seeks to require the Department of Technology to establish safety, privacy, and non-discrimination standards relating to artificial intelligence services and, from August 1, 2025, would prohibit the state from entering into a contract for AI services with a provider that does not meet those standards.
- California Artificial Intelligence Research Hub (SB893) – introduced on January 3, 2024 with action expected on or after February 3, 2024. Seeks to require the Government Operations Agency, the Governor’s Office of Business and Economic Development, and the Department of Technology to jointly establish the California Artificial Intelligence Research Hub within the Government Operations Agency, serving as a central entity for collaboration between government agencies, academic institutions, and the private sector. A key role of the Hub would be safeguarding privacy, advancing security, and addressing risks and potential harms to society.
- Artificial intelligence: technical open standards and content credentials (AB1791) – introduced on January 3, 2024 and tentatively scheduled to be heard in committee on February 3, 2024. The bill declares the legislature’s intent to amend it with provisions requiring California-based companies in the business of generative AI to incorporate the Coalition for Content Provenance and Authenticity’s technical open standard and content credentials into their tools and platforms.
The Research Hub in particular represents a collaborative effort, aiming to bring together government entities, academic institutions, and the private sector to develop AI that upholds privacy and security while addressing societal risks and potential harms.
Florida's Emphasis on AI Transparency
Florida's legislative proposals center on transparency across a wide range of AI applications, covering AI in education, public-sector AI use, AI in social media, the creation of a government technology council, and autonomous vehicles.
- Autonomous Vehicles (S1580) – Filed January 5, 2024 with an effective date of July 1, 2024 if passed. Amends section 316.85 of the Florida Statutes such that a licensed human operator must be physically present in a fully autonomous vehicle that has a gross vehicle weight of 10,001 pounds or more while the vehicle is operating on a public road in Florida. From July 1, 2024, if such a vehicle is involved in a collision that results in property damage, bodily injury to, or death of a person, the manufacturer must report the collision to the Department of Highway Safety and Motor Vehicles within 10 days if the collision occurred while the automated driving system was engaged.
- Artificial Intelligence Transparency (S1680) – Filed January 5, 2024 with an effective date of July 1, 2024 if passed. Creates the Florida Government Technology Modernization Council within the Department of Management Services to study and monitor the development and deployment of artificial intelligence systems and provide reports on such systems to the Governor and the Legislature. The council must submit an annual report that includes data, trends, analysis, findings, and recommendations for state and local action, particularly concerning privacy protection and the prevention of AI-based discrimination.
- Transparency in Social Media Act (S1448) – Filed January 5, 2024 with an effective date of July 1, 2024 if passed. Requires foreign-adversary-owned entities operating a social media platform in Florida to publicly disclose the core elements of the platform’s content curation and algorithms. This includes factors that influence content ranking and visibility, measures taken to address misinformation and harmful content, and the process of content personalization and targeting. Entities must also make publicly available the source code of their algorithms through an open-source license and implement a user verification system for those purchasing advertisements concerning social or political issues.
- Computer science education (S1344) – Filed January 4, 2024 with an effective date of July 1, 2024 if passed. Amends s. 1003 of the Florida Statutes. Establishes an AI in education task force to evaluate the applications of AI in education and develop policy recommendations for its responsible and effective use by examining the ethical, legal, and data privacy implications of AI usage in education.
- Public Records/Artificial Intelligence Transparency Violations (S1682) – Filed January 5, 2024 and exempts certain AI transparency investigation records from public disclosure to protect active investigations, personal privacy, and business interests. Does not set out new requirements for AI transparency.
New York Following Colorado’s Lead
Clearly inspired by the Colorado insurance law, New York introduced A08369 on December 13, 2023 with text almost identical to the Colorado law, swapping out terms such as “commissioner of insurance” for “superintendent [of financial services]”.
The bill restricts insurers’ use of external consumer data correlated with protected attributes such as ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. External data governed by the legislation would include:
- Credit scores
- Social media habits
- Homeownership
- Educational attainment
- Occupation
- Licensures
- Civil judgments
- Court records
The actual requirements of the law are still being developed; it would fall to the superintendent to outline specifics for many areas of the law as they apply to different types of insurance.
Additionally, it should be noted that the regulation would not apply to title insurance, bonds executed by qualified surety, or insurers that issue commercial policies unless they issue business owners’ policies or commercial general liability policies that have annual premiums of less than $10,000.
Implications for Businesses and Policymakers
For businesses, the importance of Florida, California, and New York can’t be overstated. The emergence of potential (and differing) bills in all three creates a scenario in which large organizations will be required to comply with each state’s stipulations.
Rather than siloing initiatives to respond to every applicable regulation, organizations should look for similarities in requirements and start laying the groundwork for standardization from the design through the maintenance of AI systems.
Due to the application-specific nature of this vertical legislation, organizations will also need a clear picture of which applications they’re using. Robust inventorying and registration of AI systems and their dependencies help to map risk vectors, as sketched below.
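As a minimal illustration of what such an inventory might capture, the hypothetical Python sketch below registers an AI system along with its use case, jurisdictions, and external data sources, then flags candidate rules to review. The field names, mapping logic, and example values are assumptions for illustration only, not a prescribed schema or a legal determination.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for a deployed AI system."""
    name: str
    owner: str                     # accountable team or business unit
    use_case: str                  # e.g. "employment screening", "life insurance underwriting"
    jurisdictions: list[str]       # states/regions where the system operates
    external_data_sources: list[str] = field(default_factory=list)  # e.g. credit scores, court records
    applicable_rules: list[str] = field(default_factory=list)       # laws/bills flagged for review

def flag_applicable_rules(record: AISystemRecord) -> list[str]:
    """Highly simplified, illustrative mapping from use case and jurisdiction to rules to review."""
    rules = []
    if "NY" in record.jurisdictions and record.use_case == "employment screening":
        rules.append("NYC Local Law 144 (AEDT)")
    if "NY" in record.jurisdictions and record.use_case == "life insurance underwriting":
        rules.append("NY A08369 (proposed)")
    if "CA" in record.jurisdictions:
        rules.append("Monitor CA 2024 AI bills (SB892, SB896, AB1791, ...)")
    return rules

# Example usage with made-up values
system = AISystemRecord(
    name="resume-screener-v2",
    owner="Talent Acquisition",
    use_case="employment screening",
    jurisdictions=["NY", "CA"],
    external_data_sources=["educational attainment", "occupation"],
)
system.applicable_rules = flag_applicable_rules(system)
print(system.applicable_rules)
```

In practice, the mapping from use case and jurisdiction to applicable rules would be maintained by legal and governance teams and kept current as bills like those above progress.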
Many of these emerging laws also emphasize cross-sector collaboration. Policymakers are seeking to encourage partnerships and create frameworks that allow AI regulation to evolve and stay current. This offers companies being monitored an opportunity to present their take on the implementation of the laws.
Key Takeaways
For Businesses:
- Adopt risk management frameworks and automations to evaluate and align your AI systems with new state regulations. Such tools and frameworks should be procured or built with an eye towards identifying risk areas in AI applications and ensuring compliance with the latest legislative changes.
- Make trust in AI an objective measure built around efficacy, bias, privacy, robustness, and explainability. Optimize and regularly measure these metrics from deployment through maintenance of AI systems (see the sketch after this list).
- Seek out the cross-sector collaboration that has already been scaffolded in much of the latest legislation. Provide data and expert perspectives to ensure alignment with the spirit of regulation and take the chance to weigh in on the final form of the legislation itself.
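As a sketch of what regularly measuring one of these trust metrics might look like, the example below computes a simple demographic parity difference (the gap in positive-outcome rates between groups) over a model's decisions. The data and the 0.2 threshold are made up for illustration; real monitoring would use the organization's own metrics, thresholds, and the standards set by the applicable laws.

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates across groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: binary model decisions and the group each applicant belongs to
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
grps  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(preds, grps)
print(f"Positive rates by group: {rates}, disparity: {gap:.2f}")

# Hypothetical internal threshold; appropriate limits depend on context and applicable rules
if gap > 0.2:
    print("Disparity exceeds internal threshold -- trigger review before release.")
```

The same pattern extends to the other trust dimensions: define a metric, measure it on every release, and gate deployment on agreed thresholds.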
Lay the groundwork for state-by-state AI regulation
Is your organization at the forefront of integrating AI technologies, especially in sectors like insurance, healthcare, or public administration? With the regulatory landscape rapidly evolving, it's crucial to stay ahead of the curve.
Lawmakers and regulators, from California to New York to Florida, are intensifying their focus on AI regulation. This trend is not confined to specific industries but is becoming a widespread imperative.
At Holistic AI, we understand the complexities and challenges of complying with these diverse and ever-changing regulations. Our Governance, Risk, and Compliance Platform, coupled with a suite of innovative solutions, is designed to guide and support organizations like yours.
Trusted by global companies, our platform ensures that your AI systems are not only compliant but also optimized for maximum efficacy, adoption, and trust. Don't let regulatory challenges hinder your AI journey.
Take the proactive step today. Schedule a call with one of our AI policy experts and empower your organization to navigate the AI regulatory landscape with confidence and foresight. Let Holistic AI be your partner in achieving compliant, efficient, and trusted AI systems.