Executive Order 14110 on Artificial Intelligence: What You Need to Know

November 13, 2023
Authored by
Airlie Hilliard
Senior Researcher at Holistic AI

Executive Order 14110 (document 75193) on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence was signed by US President Joe Biden on 30 October 2023.

The executive order seeks to promote responsible AI use to avoid outcomes such as bias and discrimination while simultaneously encouraging innovation and competition.

It also aims to promote the safety and security of AI, foster innovation and competition, support workers, advance equity and civil rights, protect consumers, patients, passengers, and students, defend privacy, advance federal government use of AI, and strengthen American leadership abroad.

How is AI defined by Executive Order 14110?

The executive order defines AI as:

“a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”

The first half of this definition in particular is aligned with the OECD definition of AI, with which the EU AI Act has also converged.

On the other hand, an AI system is defined as “any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI” while an AI model is “a component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.”

What is the role of the National Institute of Standards and Technology?

Following on from the release of the AI Risk Management Framework 1.0 (AI RMF 1.0), the executive order sets out some key responsibilities for the National Institute of Standards and Technology (NIST) to support the advancement of responsible AI in the US. These responsibilities must be fulfilled within 270 days of the order’s publication.

In particular, the Director of NIST is required to work with the Secretary of Energy, the Secretary of Homeland Security, and the heads of other relevant agencies to establish guidelines and best practices for trustworthy AI systems. These should include companion resources extending the AI RMF and the Secure Software Development Framework to generative AI, as well as benchmarks for evaluating and auditing AI capabilities, particularly with respect to cybersecurity and biosecurity.

NIST will also be required to establish guidelines for red-teaming tests to ensure that AI systems are robust and that testing environments are available to support the safe, secure, and trustworthy development of AI.

What are the requirements for dual-use foundation models?

Dual-use foundation models are defined by the executive order as AI models trained on broad data that generally use self-supervision and contain tens of billions of parameters. These models are applicable across a range of contexts and either exhibit, or could be modified to exhibit, risks to security, the economy, and health and safety by lowering the barrier for non-experts to access or design chemical, biological, radiological, or nuclear (CBRN) weapons, by identifying vulnerabilities for cyber attacks, or by evading human control or oversight. Models meet this definition even if they are provided to users with technical safeguards intended to prevent such misuse.

Within 90 days of the executive order, companies developing dual-use foundation models must provide the Federal Government with information on any ongoing or planned training, development, or production of such models, including their physical and cybersecurity protections. They must also report on the ownership and possession of model weights, the results of red-team testing based on NIST’s guidance, and any measures taken to meet safety objectives.

Where the weights for these models are publicly accessible, consultations will be held to consider the potential risks and benefits of making the weights available, as well as potential voluntary and regulatory mechanisms to manage the risks and maximize the benefits of the models.

What are the requirements for Infrastructure as a Service products?

To address the use of US Infrastructure as a Service (IaaS) products by foreign malicious cyber actors, the Secretary of Commerce must propose regulations within 90 days of the order requiring IaaS providers to submit a report when a foreign person transacts with the provider to train a large AI model that has the potential to be used for malicious cyber-enabled activity.

The regulations must also prohibit foreign resellers of US IaaS products from provisioning those products unless the reseller submits a report detailing each instance where a foreign person transacts with it to train such a model. Finally, the Secretary of Commerce must determine the technical conditions under which a large AI model would have potential capabilities that could facilitate malicious cyber-enabled activity. Until this determination is made, a model is considered to have such capabilities if training it requires a quantity of computing power greater than 10^26 integer or floating-point operations and it is trained on a computing cluster with data center networking of over 100 Gbit/s and a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second for training AI.
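
To make these interim criteria concrete, the short sketch below shows how a provider might check whether a given training run crosses them. It is a rough illustration under the thresholds quoted above; the function and variable names are assumptions made for the example rather than anything specified by the executive order.

# Minimal sketch (not an official tool): checks whether a training run meets the
# interim criteria described above. The numeric thresholds come from the executive
# order; the function and variable names are illustrative assumptions.

INTERIM_TRAINING_OPS_THRESHOLD = 1e26        # total integer or floating-point operations used in training
INTERIM_CLUSTER_NETWORK_GBITS = 100          # data center networking, Gbit/s
INTERIM_CLUSTER_CAPACITY_OPS_PER_SEC = 1e20  # theoretical maximum compute capacity for training AI

def meets_interim_criteria(training_ops: float,
                           cluster_network_gbits: float,
                           cluster_capacity_ops_per_sec: float) -> bool:
    """Return True if the model and cluster fall under the interim criteria."""
    large_training_run = training_ops > INTERIM_TRAINING_OPS_THRESHOLD
    large_cluster = (cluster_network_gbits > INTERIM_CLUSTER_NETWORK_GBITS
                     and cluster_capacity_ops_per_sec >= INTERIM_CLUSTER_CAPACITY_OPS_PER_SEC)
    return large_training_run and large_cluster

# Example: a 5e25-operation training run does not cross the 10^26 threshold.
print(meets_interim_criteria(5e25, 400, 1e20))   # False
print(meets_interim_criteria(2e26, 400, 1e20))   # True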

Additionally, within 180 days of the executive order, the Secretary of Commerce must propose regulations requiring US IaaS providers to ensure that foreign resellers verify the identity of any foreign person who obtains an IaaS account from them, including the person’s name, address, means of payment, and IP addresses.
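
As a rough illustration of the kind of record such verification could produce, the sketch below defines a minimal data structure holding the identity details listed above, together with a basic completeness check. The field names and the check itself are assumptions for the example; the proposed regulations do not prescribe a particular schema.

# Illustrative only: a simple record of the identity details the proposed regulations
# would have foreign resellers collect for foreign account holders. The field names
# and the completeness check are assumptions for this sketch, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class ForeignAccountVerification:
    name: str
    address: str
    means_of_payment: str              # e.g. the payment instrument used to buy the IaaS product
    ip_addresses: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Basic check that every required detail has been captured before provisioning."""
        return all([self.name, self.address, self.means_of_payment, self.ip_addresses])

record = ForeignAccountVerification("Example Person", "1 Example Street", "card ending 0000", ["203.0.113.7"])
print(record.is_complete())  # True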

How does the executive order address synthetic content?

To identify synthetic content (content produced using generative AI tools) created by or on behalf of the Federal Government, the Secretary of Commerce must, within 240 days of the executive order, consult with other relevant agencies and submit a report to the Director of OMB and the Assistant to the President for National Security Affairs identifying existing standards, tools, methods, and practices for disclosing generated content. This includes ways to authenticate content and track its provenance, label synthetic content, detect synthetic content, and prevent generative AI from producing child sexual abuse material or non-consensual intimate imagery of real people.

Provisions should also be outlined for testing software designed for any of these purposes as well as auditing and maintaining synthetic content, and guidance must be issued to agencies for labeling and authenticating synthetic content that they produce or publish.
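
As a simple illustration of what authenticating content and tracking provenance can involve, the sketch below attaches a label and a content hash to a piece of generated media so a downstream user can check that the file has not been altered since it was labeled. This is only an illustrative approach; it is not the mechanism the executive order or NIST prescribes, and real provenance schemes (such as C2PA) are considerably richer.

# A minimal sketch of one way to label synthetic content and track its provenance:
# the generator attaches a label containing a content hash, and anyone downstream can
# recompute the hash to check that the file has not changed since it was labeled.
# This is an illustration only, not a mechanism prescribed by the executive order.
import hashlib
import json

def label_synthetic_content(content: bytes, generator: str) -> dict:
    """Produce a simple provenance record for a piece of generated content."""
    return {
        "synthetic": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_label(content: bytes, label: dict) -> bool:
    """Check that the content still matches the hash recorded in its label."""
    return label.get("sha256") == hashlib.sha256(content).hexdigest()

record = label_synthetic_content(b"example generated image bytes", generator="example-model")
print(json.dumps(record, indent=2))
print(verify_label(b"example generated image bytes", record))  # True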

What is the National AI Research Resource?

To support public-private partnerships for promoting AI innovation, the executive order directs the Director of NSF to launch a pilot program to implement the National AI Research Resource (NAIRR) in line with previous recommendations of the NAIRR Task Force.

The purpose of the pilot program is to explore the infrastructure, governance mechanisms, and user mechanisms needed for distributed computational, data, model, and training resources to be made available to research communities.

In particular, the Director of NSF is required to identify the Federal and private sector resources needed for the pilot program, and the heads of identified agencies must submit reports identifying the agency resources that could be developed and integrated into the pilot, as well as the risks and benefits of their agency’s inclusion in the program.

How does the executive order deal with IP?

To clarify issues surrounding AI and inventorship, the Director of the United States Patent and Trademark Office (USPTO) must publish guidance for patent examiners and applicants on inventorship and AI, including generative AI, with examples of AI systems playing different roles in inventive processes. This guidance must be issued within 120 days of the executive order.

The United States Copyright Office of the Library of Congress will also publish a study on copyright issues raised by AI, after which the USPTO Director must consult with the Copyright Office to issue recommendations to the President on potential executive actions relating to copyright and AI.

Within 180 days of the executive order, the Secretary of Homeland Security, in consultation with the Attorney General, must develop a training, analysis, and evaluation program to mitigate AI-related IP risks. This program must include personnel dedicated to investigating AI-related IP theft, a policy for sharing information and coordinating with the FBI, Customs and Border Protection, and other agencies, guidance to assist private sector actors in mitigating AI-driven IP theft risks, and mechanisms for sharing information with AI developers and law enforcement to identify incidents and inform stakeholders of legal requirements.

How does the executive order advance equity and civil rights?

The executive order places importance on ensuring that AI systems do not result in unlawful discrimination and other harms that may affect an individual’s life chances.

In particular, it focuses on the use of AI in the criminal justice system, as well as for administering and evaluating eligibility for government benefits and programs. As such, federal agencies using AI systems in these contexts will be required to comply with guidance on Federal Government use of AI in order to prevent harm from government use of AI.

The executive order also calls for increased training and technical assistance to ensure that those using AI systems for critical applications have the necessary levels of competence.

The executive order also calls for streamlining visa processes for immigrants with expertise in AI to advance the US’s position as a leader in AI and support innovation. Specifically, it aims to support those who seek to come to the US to study, work on, or research AI by making more visa appointments available, making the application process easier to navigate, and considering a visa renewal program.

How does the AI executive order address healthcare?

Another key application of AI focused on by the executive order is healthcare. The order directs the development of a strategic plan with policies and frameworks on responsible AI in the health and human services sector.

This plan must target the development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing, performance monitoring of AI used in the health and human services sector, equity principles in AI-enabled technologies used in these sectors, and documentation for the appropriate and safe use of AI within these applications.

The Secretary of HHS must also work to develop an AI assurance policy to evaluate important aspects of the performance of AI-enabled healthcare tools and support pre- and post-market oversight. Furthermore, guidance must be provided on compliance with Federal nondiscrimination and privacy laws by health and human services providers that receive Federal financial assistance, to ensure that the use of AI in these critical contexts does not result in bias or discrimination or put sensitive information at risk.

Governing government use of AI

To provide the Federal Government guidance on the use of AI and ensure that each agency’s use of AI is responsible and coordinated, each government agency is required to appoint a Chief Artificial Intelligence Officer to promote innovation and manage the risks of the agency’s use of AI.

This person must be appointed within 60 days of the Director of OMB issuing guidance to the Federal Government on the use of AI. That guidance will also specify the minimum risk management practices required for Government use of AI, including how principles from the Blueprint for an AI Bill of Rights and NIST’s AI RMF can be incorporated into practices that promote safety, transparency, and fairness.

The executive order also specifically addresses the responsible use of generative AI by the federal government, asserting that agencies should limit access to specific generative AI tools as necessary and establish guidelines and limitations for generative AI tools that are not blocked from use within the agency. To support these efforts, agencies are also encouraged to provide training for staff on the safe use of such tools and on the considerations around providing them with Federal information.

A wave of rules is incoming – be prepared with Holistic AI

While the executive order doesn't have immediate consequences for AI developers and deployers in the US, it instructs various agencies to develop rules and encourages legislative proposals.

Some bills, like US S3205, which would mandate that Federal Agencies use the NIST AI RMF, and the TEST AI Act of 2023 (US S3162), which would require NIST to establish AI testbeds, have already been proposed since the executive order was issued.

The executive order is likely to have significant implications for the US in the coming months. Compliance is an ongoing journey and early preparation is key.

To find out how Holistic AI can support you, schedule a call with a member of our specialist team.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
