AI Governance: What You Need to Know

Authored by Airlie Hilliard, Senior Researcher at Holistic AI, and Siddhant Chatterjee, Public Policy Strategist at Holistic AI

Published on May 30, 2024

Recent years have seen an exponential increase in the adoption of AI across business, government, and society, with the technology now becoming a competitive necessity to streamline operations, future-proof business processes, and accelerate innovation. In this transformational epoch, it is essential that the benefits of AI are harnessed, while its risks and harms are mitigated.

Indeed, we have already seen a plethora of AI incidents and harms, with bias in recruitment, insurance, and credit scoring, and companies even going bust because of their algorithms. There has also been a slew of legal actions against companies that have failed to use AI with appropriate safeguards or in a way that complies with existing laws.

With up to 74% of organizations failing to take steps to ensure their AI is safe and trustworthy, the technology can quickly become a liability that poses significant legal, reputational, and financial risks – but it doesn’t have to be this way. AI governance can help to ensure the guardrails are in place to reduce harm and unforeseen risk. In this blog post, we provide an overview of AI governance best practices and their benefits.

What is AI Governance?

AI governance is the set of technical and non-technical guardrails and tools that make AI safer, more secure, and more ethical.

Technical AI Governance

On the technical side, this includes the quantitative evaluation of an AI system to measure its performance using context-specific metrics and benchmarks. Key aspects of a system that can be measured include:

  • Bias – whether a system treats individuals or groups unfairly or produces unjustified, disproportionately different outputs for different subgroups (a brief sketch of one such check follows this list).
  • Efficacy – whether a system underperforms relative to its intended use case, particularly on unseen data.
  • Robustness – whether a system displays vulnerabilities or fails in response to changes in datasets or adversarial attacks by malicious actors.
  • Privacy – whether a system is susceptible to the leakage of personal or critical data or facilitates the reverse engineering of such data.
  • Explainability – whether the AI system is understandable to users and developers and supported by the effective communication of important information.
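
As a concrete illustration of the kind of quantitative check described above, here is a minimal sketch in Python that computes one common group-fairness measure, the disparate impact ratio (sometimes assessed against the "four-fifths rule"). The function name, the 0.8 threshold, and the toy data are illustrative assumptions rather than part of any particular platform, law, or standard.

```python
# A minimal sketch of one quantitative bias check: the disparate impact ratio.
# The function name, 0.8 threshold, and toy data below are illustrative assumptions.

def disparate_impact_ratio(outcomes, groups, positive=1):
    """Return (ratio, per-group selection rates) for binary decisions.

    outcomes: model decisions (e.g., 1 = selected, 0 = rejected)
    groups:   subgroup labels aligned with outcomes
    """
    rates = {}
    for group in set(groups):
        decisions = [o for o, g in zip(outcomes, groups) if g == group]
        rates[group] = sum(d == positive for d in decisions) / len(decisions)
    return min(rates.values()) / max(rates.values()), rates


if __name__ == "__main__":
    # Toy screening decisions for two subgroups, A and B.
    outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    ratio, rates = disparate_impact_ratio(outcomes, groups)
    print("Selection rates:", rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # common, context-dependent heuristic, not a legal test
        print("Potential adverse impact - investigate further.")
```

In practice, the appropriate metric, subgroup definitions, and threshold depend on the context and jurisdiction of the deployment, which is why technical checks like this sit alongside the non-technical measures described next.
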
Non-Technical AI Governance

On the non-technical side, AI governance refers to frameworks, policies, and procedures that reduce the risk of AI systems. This includes ongoing monitoring procedures, mechanisms for human oversight, documentation throughout the lifecycle of the AI system, procedures for updating documentation, clear lines of accountability and liability, and mechanisms for communication and transparency with appropriate stakeholders.

AI governance also encompasses compliance with relevant laws, both AI-specific laws like the EU AI Act and other general laws that apply to the specific deployment in a particular jurisdiction. Moreover, compliance should not be thought of in purely legal terms, but also in terms of compliance with risk management frameworks. These can range from internal organizational frameworks to recognized voluntary ones such as NIST's AI Risk Management Framework (AI RMF).

AI governance efforts can also be extended by meeting national and international standards for AI, such as ISO/IEC 42001:2023, which specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system.

Do all AI applications need AI governance?

AI governance is vital for applications of AI with significant social implications, such as healthcare, employment, finance, law enforcement, and immigration. These applications are often seen as high-risk and are increasingly being targeted by AI legislation and regulation due to the significant impact these decisions can have on society and an individual's life chances.

However, AI governance is important for all applications of AI, even those without a significant societal impact. Take AI-driven marketing tools, for example: they are unlikely to have the societal impact of high-risk applications like biometric systems, but without AI governance they pose a significant financial risk if they go wrong, given their direct link to commercial leads and revenue. Likewise, poor brand positioning can pose reputational risks, which could lead to further financial impact, and could even create legal risk if the tools are used for deceptive trade practices, for example.

Similarly, logistics systems and many other seemingly low-risk applications of AI can have significant business implications if they go wrong or reduce trust in the brand. In short, AI governance is essential for all AI applications to ensure the technology can be harnessed for innovation and business benefits – so that it can be an asset, not a liability.

What are the benefits of AI governance?

AI governance has a number of benefits for organizations, whether or not their AI systems are conventionally considered high-risk. These include:

  • Better visibility over AI deployments – to implement AI governance, you first need to know what AI you are using and where. By creating and maintaining an AI inventory to track AI projects in real time – no matter where they are in the lifecycle – you can avoid duplicating efforts, ensure no systems are overlooked, and better allocate resources (a minimal sketch of an inventory record follows this list).
  • Reduced risk – by complying with relevant laws, abiding by risk management frameworks, and meeting relevant standards, the risk of harm is reduced through proactive anticipation and mitigation. The likelihood of unforeseen harm is also reduced, meaning you are less likely to be taken by surprise.
  • Better performance – by continuously monitoring your AI systems, you can easily spot signs that something isn’t quite right and take action, ensuring that systems are continuously operating effectively and safely.
  • Increased trust – by taking steps to manage the risks of AI and ensure the appropriate guardrails are in place, you can increase trust in your systems, both internally and from consumers. This helps you to innovate more confidently and harness AI’s full potential with the assurance that unexpected risks won’t catch you out.
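
As a rough illustration of what a single record in such an inventory might capture, the sketch below defines one possible entry structure in Python. The field names, risk tiers, and example values are assumptions for illustration rather than a prescribed schema.

```python
# A minimal sketch of a single AI inventory record; the field names and
# risk tiers are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional


@dataclass
class AIInventoryEntry:
    system_name: str
    owner: str              # accountable team or individual
    lifecycle_stage: str    # e.g., "design", "development", "deployed", "retired"
    use_case: str
    risk_tier: str          # e.g., "high", "limited", "minimal"
    applicable_frameworks: List[str] = field(default_factory=list)
    last_reviewed: Optional[date] = None


# Example record for a hypothetical CV-screening tool.
entry = AIInventoryEntry(
    system_name="cv-screening-model",
    owner="talent-acquisition",
    lifecycle_stage="deployed",
    use_case="ranking job applicants",
    risk_tier="high",
    applicable_frameworks=["NIST AI RMF", "EU AI Act"],
    last_reviewed=date(2024, 5, 1),
)
print(entry)
```

Keeping records like this up to date, wherever they are stored, is what gives an organization the visibility described above.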

AI governance is an ongoing process

It is never too early or too late to implement AI governance – it can be incorporated throughout the lifecycle of an AI system, from design to deployment.

During the design and development phases, AI governance can help to ensure that there are built-in capabilities for monitoring and mechanisms for compliance, as well as features that increase the accessibility of the system to maximize opportunities for participation.

During deployment, AI governance can help to maximize compliance and ensure that the maximum value is derived from the system in a way that ensures user safety and clearly communicates the capabilities and limitations of the system.

During retirement, AI governance can help to ensure that related systems will not be negatively impacted and that all retirement processes are followed, including the updating of documentation.

AI governance, therefore, is not a one-time exercise – it requires ongoing monitoring and review to ensure that the system remains safe and ethical throughout its lifetime. As such, systems should be regularly evaluated, particularly after major system updates or changes in legislation to ensure the status of the system has not changed.
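
One lightweight way to operationalize this ongoing review is to compare each scheduled evaluation against the previously recorded baseline and flag material drift. The sketch below illustrates the idea; the metric names, tolerance, and record structure are assumptions for illustration, not a prescribed approach.

```python
# A minimal sketch of a recurring re-evaluation check; the metric names,
# tolerance, and record structure are illustrative assumptions.

def needs_review(baseline, current, tolerance=0.05):
    """Return the metrics whose change from baseline exceeds the tolerance."""
    flagged = []
    for metric, baseline_value in baseline.items():
        drift = abs(current.get(metric, baseline_value) - baseline_value)
        if drift > tolerance:
            flagged.append(metric)
    return flagged


# Metrics recorded at the last scheduled evaluation vs. today's run.
baseline = {"accuracy": 0.91, "disparate_impact_ratio": 0.86}
current = {"accuracy": 0.84, "disparate_impact_ratio": 0.85}

for metric in needs_review(baseline, current):
    print(f"{metric} has drifted beyond tolerance - trigger a governance review.")
```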

AI Governance with Holistic AI

Holistic AI’s Governance Platform is a world-leading, 360-degree solution for AI trust and safety. Take command of your AI ecosystem with our all-in-one solution for inventory management, risk posture reporting, and compliance. Using cutting-edge generative AI tools? Not a problem – our Governance Platform can take care of that too.

Schedule a demo with our experts to find out more about how Holistic AI can support your AI governance efforts.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
