While guidelines may not carry the same legal weight as legislation, their influence is undeniable, especially when major markets like the US and UK align on a framework that could shape future laws. In November 2023, this became a reality as the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre (NCSC) released joint Guidelines for Secure AI System Development.
This collaboration is crucial in a landscape where recent polls suggest AI plays a role in 85% of cyber attacks. As organizations evolve their AI governance, aligning with these guidelines is not only a strategic move in cybersecurity but also a proactive step in anticipation of possible regulatory requirements for all AI systems.
Despite the unpredictable nature of future legislation and the divergent paths historically taken by the US and UK in AI regulation, this joint stance on AI security signals a convergence of priorities. This guide aims to demonstrate why CTOs, CISOs, and CDOs should prioritize these guidelines to support enhanced governance, standardization, and efficacy across AI initiatives.
On November 26th, 2023, the U.S. Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre (NCSC) published the joint Guidelines for Secure AI System Development.
The Guidelines define AI as applications that use machine learning (ML) – software that learns from data on its own, without needing explicit instructions. These tools can then use what they've learned to make predictions, recommendations, or even decisions based on patterns and trends they've discovered through statistical reasoning.
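To make that definition concrete, here is a minimal sketch of the kind of system the Guidelines cover – a model that learns patterns from labeled examples and then makes predictions on data it has never seen. The scenario and data are invented purely for illustration, using the scikit-learn library:

```python
# Minimal example of a machine learning system: the model learns
# statistical patterns from data instead of following hand-written rules.
# The data here is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Historical examples: [requests_per_minute, payload_bytes] -> 1 if malicious
X_train = [[5, 120], [900, 4096], [12, 200], [1500, 8192]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # "learns" from the labeled examples

# The fitted model can now make predictions about traffic it has never seen
print(model.predict([[1200, 5000]]))  # e.g. [1] -> flagged as likely malicious
```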
In short, the Guidelines apply to everyone involved with AI, including vendors, internal tool builders, and end users. This may seem like an overreach, but the Guidelines provide justification.
Responsibility for AI safety can get tricky. Supply chains are often complex, with multiple companies involved. This can blur the lines of who's responsible for keeping the AI secure.
Additionally, the Guidelines are intentionally broad in their audience, supporting transparency and safety for a wide variety of stakeholders. In the words of CISA's announcement on the Guidelines:
“The Guidelines apply to all types of AI systems, not just frontier models. We provide suggestions and mitigations that will help data scientists, developers, managers, decision-makers, and risk owners make informed decisions about the secure design, model development, system development, deployment, and operation of their machine learning AI systems.”
The Guidelines suggest a "secure-by-design" approach. This means providers take the lead in securing the AI system, even if it's used by others. Think of it like building a safe car, even if someone else drives it. Additionally, the Guidelines note that for ongoing security and safety purposes, end users and their education play a role as well. Providers should be upfront with users about the risks and how to use the AI safely.
Like a range of other AI standards, the Guidelines recognize that not all AI is equally risky. Systems that could harm people, damage reputations, or leak sensitive information should be treated as "critical" and given greater attention to security.
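The Guidelines don't prescribe a formal scoring method, but a simple triage helper along the following lines shows how a team might operationalize risk-proportionate treatment. The criteria and tier names here are hypothetical, not drawn from the Guidelines:

```python
# Hypothetical triage helper for risk-proportionate treatment of AI systems.
# The criteria and tier names are illustrative, not taken from the Guidelines.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    handles_sensitive_data: bool  # e.g. PII, credentials, trade secrets
    affects_people: bool          # decisions impacting safety, rights, reputation
    public_facing: bool           # reachable by untrusted users

def risk_tier(profile: AISystemProfile) -> str:
    """Map a system profile to a security review tier."""
    if profile.handles_sensitive_data or profile.affects_people:
        return "critical"  # deepest review, strongest controls
    if profile.public_facing:
        return "elevated"
    return "standard"

print(risk_tier(AISystemProfile(True, False, True)))  # -> critical
```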
The Guidelines break down suggestions into four stages of the AI system lifecycle: design, development, deployment, and operation and maintenance. We'll take a deeper dive into the precise recommendations for each stage below.
Alongside the recent publication of the G7 International Guiding Principles and Code of Conduct on governing advanced AI systems, the joint CISA and NCSC publication is one of the first international initiatives on AI governance. It could pave the way for increased global cooperation and standardization in the regulation of AI. It's also one of the first pieces of nationwide guidance in the US.
Additionally, high-profile frameworks like these are often drawn upon to create standards and legally binding legislation. Most large organizations operate in the United States, the United Kingdom, or both, and could de-risk future AI program growth by aligning their processes with the Guidelines today.
The cybersecurity focus of the document creates a golden opportunity for organizations with established cybersecurity postures to leverage their existing infrastructure and accelerate AI governance. By embracing robust practices, they can not only minimize risks such as bias, vulnerabilities, and poor explainability, but also enhance outcomes and boost alignment across security and AI operations.
Below we’ll work through the recommendations presented in the Guidelines by AI product lifecycle stage.
As AI systems become increasingly complex and integrated into critical applications, the potential for cyberattacks, bias, and unintended consequences grows. To mitigate these risks, secure design principles are essential from the very beginning of the development process. At this stage, the Guidelines recommend that teams raise staff awareness of AI-specific threats and risks, model the threats to their system, design for security as well as functionality and performance, and weigh security benefits and trade-offs when selecting an AI model. A lightweight way to start on threat modeling is sketched below.
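The Guidelines don't mandate a particular threat-modeling methodology, so treat this as a hedged sketch: even a simple structured inventory of assets, threats, and mitigations can anchor the conversation. All entries below are hypothetical examples:

```python
# A minimal, hypothetical threat-model inventory for an AI system.
# Entries are illustrative; a real model should be built with security teams.
threat_model = [
    {
        "asset": "training data",
        "threat": "data poisoning via a compromised upstream source",
        "mitigation": "validate provenance; checksum and version datasets",
    },
    {
        "asset": "model weights",
        "threat": "theft or tampering of the model artifact",
        "mitigation": "access controls; signed, hash-verified artifacts",
    },
    {
        "asset": "inference API",
        "threat": "prompt injection and adversarial inputs",
        "mitigation": "input validation; output filtering; rate limiting",
    },
]

for entry in threat_model:
    print(f"{entry['asset']}: {entry['threat']} -> {entry['mitigation']}")
```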
Moving from design into development, building complex systems involves many trade-offs and decisions with long-lasting impact. To mitigate future risk (including security risk attached to early versions of the system), the joint statement urges the following in AI system development, with one practical control sketched after the list:

- Secure the supply chain for models, data, and software components
- Identify, track, and protect assets such as models, data, and prompts
- Document data, models, and prompts, including their provenance
- Manage technical debt across the system's lifecycle
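Taking the first item as an example, a common supply-chain safeguard – consistent with the Guidelines' emphasis, though not prescribed by them – is to verify the checksum of a third-party model artifact before loading it. In this sketch, the file path and expected digest are hypothetical placeholders:

```python
# Hypothetical sketch: verify a downloaded model artifact against a known
# SHA-256 digest so that a tampered file is rejected before it is loaded.
import hashlib
from pathlib import Path

# In practice, this digest would come from the vendor's signed release notes.
EXPECTED_SHA256 = "0123abc..."  # placeholder, not a real digest

def verify_artifact(path: Path, expected: str) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"Checksum mismatch for {path}: refusing to load")

# Usage (hypothetical artifact path):
# verify_artifact(Path("models/classifier-v1.bin"), EXPECTED_SHA256)
```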
While there are security implications to nearly any software deployment, AI – and particularly generative AI systems – requires the consideration of additional security risk factors. The Guidelines outline the following focus areas through AI system deployment, one of which is illustrated after the list:

- Secure the underlying infrastructure
- Protect the model continuously against tampering and extraction
- Develop incident management procedures
- Release AI responsibly, only after appropriate security evaluation
- Make it easy for users to do the right thing, with safe defaults
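On that last point, one hedged illustration of safe defaults is a thin wrapper around a generative model endpoint that enforces input limits and naive injection screening before a prompt ever reaches the model. The limits, markers, and function names here are hypothetical, not taken from the Guidelines:

```python
# Hypothetical guardrail wrapper for a generative model endpoint.
# All limits and checks are illustrative defaults, not prescribed values.
MAX_PROMPT_CHARS = 4_000
BLOCKED_MARKERS = ("ignore previous instructions",)  # naive injection check

def guarded_generate(prompt: str, model_call) -> str:
    """Validate the input, call the model, then screen the output."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("Prompt exceeds allowed length")
    if any(marker in prompt.lower() for marker in BLOCKED_MARKERS):
        raise ValueError("Prompt failed injection screening")
    output = model_call(prompt)
    # Screen output before returning it to the user (placeholder check).
    return output if output else "[empty response withheld]"

# Usage with a stand-in model function:
print(guarded_generate("Summarize our security policy.", lambda p: "Summary..."))
```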
Finally, the joint guidelines consider secure operation and maintenance through four brief considerations, the second of which is illustrated after the list:

- Monitor the system's behaviour
- Monitor the system's inputs
- Follow a secure-by-design approach to updates
- Collect and share lessons learned
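Input monitoring, for instance, can start as simply as logging each request and flagging values that fall outside an expected range for later review. The thresholds below are hypothetical:

```python
# Hypothetical input-monitoring sketch: log each request and flag inputs
# outside an expected range so they can be reviewed later.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-input-monitor")

EXPECTED_LENGTH_RANGE = (1, 4_000)  # illustrative bounds, not prescribed

def monitor_input(prompt: str) -> None:
    length = len(prompt)
    log.info("request received: length=%d", length)
    low, high = EXPECTED_LENGTH_RANGE
    if not low <= length <= high:
        log.warning("anomalous input length=%d; queued for review", length)

monitor_input("What does our data-retention policy say?")
```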
As data and tech leaders begin to lay the foundation for safety in AI design, development, deployment, and maintenance, aligning with widely accepted guidelines in your key markets is a logical starting point. But it's just that: a starting point. The true benefits of trustworthy AI go beyond avoiding legal, financial, and reputational damage, extending to more reliable AI impact through efficacy, internal trust, and reduced downtime.
AI risk management is now a competitive necessity that early evangelists are beginning to perfect. As the only provider of a 360-degree AI governance, risk, and compliance platform, Holistic is enabling many of these leading teams. Want to chat through your AI initiatives with one of our policy and ML experts? Schedule a free consultation today.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any particular situation.