Artificial intelligence (AI) use has grown rapidly in the last few years, with 44% of businesses taking steps to integrate it into their current processes and applications. However, while AI can offer many business benefits, such as increased productivity, accuracy, and cost savings, using AI comes with risks. Consequently, steps must be taken to reduce these risks and promote AI's safe and trustworthy use. An effective way to do this is to introduce governance mechanisms or codify risk management requirements in the law. Accordingly, policymakers worldwide have begun to propose regulations to make AI systems safer for those using them.
While many of these efforts target AI applications by businesses, governments are also starting to use AI more widely, with almost 150 significant federal departments, agencies, and sub-agencies in the US government using AI to support their activities. As such, governmental use of AI is also coming under scrutiny, with initiatives to govern the use of AI in the public sector increasingly being proposed. In this blog post, we provide a high-level summary of some of the actions taken to regulate the use of AI in the public sector, focusing on the US, UK, and EU, first outlining the different ways governments use AI.
AI is increasingly being used by governmental departments and agencies, and other entities in the public sector, to automate a variety of tasks, from virtual assistant bots that deliver reminders about pregnancy checkups to mapping the characteristics of businesses in different areas to direct investment towards ventures that are more likely to succeed. Elsewhere, AI is being used in defence activities to enhance decision-making, increase safety, and predict supply and demand, with the US Department of Defense publishing an AI strategy to accelerate the adoption of AI in the military and the US Defense Advanced Research Projects Agency (DARPA) funding a program to develop a brain-to-machine interface.
However, highlighting the potential harms that can come from the use of AI in the public sector, the UK’s Office of Qualifications and Examinations Regulation (Ofqual) came under fire in 2020 over the algorithm it used to assign GCSE grades when students were unable to sit exams due to COVID-19 restrictions, as many students received lower grades than expected. Further, an already controversial application of AI, facial recognition, is being used by law enforcement to identify suspects and recently garnered much attention due to the wrongful arrest of a man in Georgia who was mistaken for a fugitive by Louisiana authorities’ facial recognition technology. Given that the Gender Shades project revealed that facial recognition technology is markedly less accurate for darker-skinned individuals, and that both the victim and the fugitive were Black, this case highlights the need to ensure that AI systems, particularly those used in high-risk contexts, are not biased and are accurate for all subgroups. As such, the UK’s Equality and Human Rights Commission has called for the suspension of facial recognition in policing in England and Wales, with similar action being taken in Bellingham, Washington, and in Alabama.
Given that AI is increasingly being used in high-stakes public sector applications, and that several instances of harm have already resulted, efforts to govern and regulate public sector applications of AI are emerging, many of them centred in the US.
Most recently, the US Department of State published the Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which outlines 12 best practices for states using AI and autonomy in their militaries. These include maintaining human control, using auditable methodologies and design considerations, rigorous testing and assurance across the AI life cycle, and sufficient training for the personnel approving or using military AI capabilities.
On the note of personnel training, the US has launched an initiative specifically targeting the training of federal agency personnel who acquire AI. Signed into law in October 2022, the AI Training Act (Public Law No. 117-207) requires the Director of the Office of Management and Budget to develop an AI training program for the acquisition workforce. Specifically, this program is designed for employees of an executive agency who are responsible for program management; planning, research, development, engineering, testing, and evaluation of systems; procurement and contracting; logistics; or cost estimation of AI, to ensure that such personnel know the risks and capabilities of the AI systems they are responsible for procuring. Taking a risk-management approach, the topics to be covered by the training include the science of AI and how it works; the technological features of AI systems; how AI can benefit the federal government; AI risks, including discrimination and privacy risks; methods to mitigate these risks, including ensuring that AI is safe, reliable, and trustworthy; and future trends in AI.
This effort builds on Executive Order (EO) 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, signed in December 2020. The EO sets out a series of principles by which federal agencies must be guided when considering the design, development, acquisition, and use of AI in Government, requiring that AI be:

- Lawful and respectful of the Nation’s values
- Purposeful and performance-driven
- Accurate, reliable, and effective
- Safe, secure, and resilient
- Understandable
- Responsible and traceable
- Regularly monitored
- Transparent
- Accountable
As part of this Executive Order, the National Institute of Standards and Technology (NIST) will re-evaluate and assess AI used by federal agencies to investigate compliance with these principles. In preparation, the US Department of Health and Human Services has already created its inventory of AI use cases.
At a more local level, and using different terminology, the New York City Automated Decision Systems (ADS) Task Force published a report in November 2019. Convened by Mayor Bill de Blasio in 2018 under Local Law 49, which required the Task Force to provide recommendations on six topics related to the use of ADSs by City agencies, the Task Force examined three key areas in its report.
Its recommendations included:

- Establishing an organisational structure within City government to act as a centralised resource guiding agency management of ADSs, incorporating principles such as fairness and transparency
- Providing sufficient funding and training to agencies to support the appropriate use of ADSs
- Staff education and training
- Supporting public requests for information about the City’s use of ADSs
- Establishing a framework for agency reporting of information about ADSs
- Creating a process for assessing ADS risks
Following this report, Mayor de Blasio signed Executive Order 50 to establish an Algorithms Management and Policy Officer within the Mayor’s Office of Operations, with the aim of creating a centralised resource on algorithm policy and developing guidelines and best practices to assist City agencies in using algorithms.
In Maryland, the Algorithmic Decision Systems Procurement and Discriminatory Acts bill was proposed in February 2021 to require that if a state unit purchases a product or service that includes an algorithmic decision system, it must adhere to responsible AI standards. State units must also evaluate the system's impact and potential risks, paying particular attention to potential discrimination. Further, they must ensure the system adheres to transparency commitments, including disclosing the system's capabilities, limitations, and potential problems to the state.
While the UK has not introduced any laws regulating public sector use of AI, reflecting the lack of more general AI-specific legislation in the UK, the Central Digital and Data Office and Office for Artificial Intelligence published guidance on building and using AI in the public sector on 10 June 2019. While brief, the guidance provides resources on assessing whether using AI will help achieve user needs, how AI can best be used in the public sector, and how to implement AI ethically, fairly, and safely.
Citing guidance from the Government Digital Service (GDS) and Office for Artificial Intelligence (OAI), the publication provides four resources on assessing, planning, and managing AI in the public sector. The publication then provides a resource on using AI ethically and safely, co-developed with the Alan Turing Institute, before offering a series of case studies on how AI is being applied in the public sector, from satellite images being used to estimate populations to AI being used to compare prison reports. Therefore, rather than outlining comprehensive guiding principles, as is more characteristic of the US approach, the UK guidance acts as a resource bank.
Taking a more comprehensive approach, the Guidelines for AI procurement, co-published by the Department for Business, Energy & Industrial Strategy, the Department for Digital, Culture, Media & Sport, and the Office for Artificial Intelligence in June 2020, are aimed at central government departments considering the suitability of AI technology. Specifically, the document outlines guiding principles on how government departments should buy AI technology and insights on tackling challenges that may arise during procurement.
Initiated by the World Economic Forum’s Unlocking Public Sector AI project, the guidelines were produced with insights from the World Economic Forum Centre for the Fourth Industrial Revolution, other government bodies, and industry and academic stakeholders.
The guidelines then address AI-specific considerations within the procurement process concerning preparation and planning; publication; selection, evaluation and award; and contract implementation and ongoing management.
While the European Commission’s resources are currently largely invested in the development of the EU AI Act, and the EU’s focus is more on business use of AI, individual member states are introducing their own initiatives to address government use of AI.
For example, in the Netherlands, the Dutch State Secretary for Digital Affairs announced the launch of an Algorithm Registry in 2022.
Here, the AI applications currently being used by the Dutch government are listed, with 109 entries at the time of writing. Applications can be filtered by government branch, and each entry details the type of algorithm used, whether it is currently in active use, and the policy area it serves. Information about monitoring, human intervention, risks, and performance standards is also provided, increasing the transparency of the Dutch government's use of AI.
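To make the structure of such registry entries concrete, below is a minimal sketch of how one might be represented as a data structure. This is purely illustrative: the field names and the `filter_by_branch` helper are our own assumptions based on the categories described above, not the Dutch registry's actual schema or API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmRegistryEntry:
    """Hypothetical registry entry mirroring the categories of information
    the Dutch Algorithm Registry publishes; field names are illustrative,
    not the registry's actual schema."""
    name: str                # name of the AI application
    government_branch: str   # organisation operating the system
    algorithm_type: str      # e.g. rule-based or machine learning
    policy_area: str         # policy domain the system serves
    actively_used: bool      # whether the system is currently in use
    monitoring: str          # how the system's behaviour is monitored
    human_intervention: str  # where humans remain in the loop
    risks: List[str] = field(default_factory=list)                  # identified risks
    performance_standards: List[str] = field(default_factory=list)  # applicable standards

def filter_by_branch(entries: List[AlgorithmRegistryEntry],
                     branch: str) -> List[AlgorithmRegistryEntry]:
    """Filter entries by government branch, as the registry's
    web interface allows."""
    return [e for e in entries if e.government_branch == branch]
```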
At a more local level, the cities of Amsterdam and Helsinki launched Algorithm and AI registers in September 2020. Covering the three algorithms used by the City of Amsterdam, Amsterdam's register provides an overview of each system and contact information for the responsible department, along with information on the data, data processing, non-discrimination approach, human oversight, and risk management associated with each system.
Elsewhere, in Italy, a Task Force on Artificial Intelligence was established within the Agency for Digital Italy to develop Italy’s strategy for AI. In March 2018, the Italian government published a report, edited by the Task Force, addressing various methods of adopting AI technology in public policy. This report, referred to as the White Paper, identified and discussed nine challenges to be addressed in the country’s National AI Strategy.
To address these challenges, the report makes ten recommendations.
Governments and businesses alike will soon be faced with several requirements and principles that they must follow when designing, developing, deploying, and procuring AI systems. Taking action early is the best way to ensure compliance. To find out more about how Holistic AI can help you with this, get in touch at we@holisticai.com.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.