
International Joint Guidance Published on Deploying AI Systems Securely

Authored by Nikitha Anand, Policy Analyst at Holistic AI
Published on Apr 16, 2024

On 15 April 2024, the US National Security Agency’s Artificial Intelligence Security Center (NSA AISC) released joint guidance on Deploying AI Systems Securely in collaboration with the Cybersecurity & Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ASD ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC-UK).

The guidance expands on previously released CISA information sheets, Guidelines for secure AI system development and Engaging with Artificial Intelligence. It also aligns with CISA’s Cross-Sector Cybersecurity Performance Goals and the National Institute of Standards and Technology's (NIST) Cybersecurity Framework (CSF).

In their guidance, the international agencies advise organizations that deploy AI systems to implement robust security measures that prevent both misuse of AI systems and theft of sensitive data, so that systems are secure by design. Specifically, the guidance on Deploying AI Systems Securely suggests best practices for deploying and using externally developed AI systems and aims to:

  • Improve the confidentiality, integrity, and availability of AI systems. 
  • Ensure there are appropriate mitigations for known vulnerabilities in AI systems.
  • Provide methodologies and controls to protect, detect, and respond to malicious activity against AI systems and related data and services.

What are the best practices recommended by the guidance on Deploying AI Systems Securely?

There are three overarching best practices recommended by the international guidance on secure AI systems:

  • Secure the deployment environment - Since organizations typically deploy AI systems within existing IT infrastructure, they should ensure that the IT environment has sound security principles such as robust governance, a well-designed architecture, and secure configurations. In particular, organizations should:
    • Work with the IT department to ensure that the deployment environment meets the organization’s standards and require a threat model from AI developers.
    • Ensure that boundary protections between the IT environment and AI system are robust and that there is a list of protected data sources to protect against data poisoning.
    • Apply security best practices, such as strong authentication mechanisms and multifactor authentication, to secure sensitive AI information (a minimal one-time-password sketch follows this list).
    • Deploy robust, tested cybersecurity systems to identify attempts to gain unauthorized access and a mechanism to block access from suspected malicious actors.
  • Continuously protect the AI system - To minimize vulnerabilities, it is crucial to implement secure practices at all points of development and deployment. Accordingly, organizations should:
    • Validate the AI system before and during use with methods such as cryptographic hashes, digital signatures, and checksums to protect sensitive information from unauthorized access during AI processes (see the checksum sketch after this list). They should also store all code safely, conduct thorough model testing, and evaluate and secure the supply chain.
    • Secure exposed APIs through authentication and authorization mechanisms, and validate input data to reduce the risk of undesired input being passed to the AI system (an illustrative API sketch follows this list).
    • Continuously monitor model behavior for unauthorized access, unexpected modifications, or illicit attempts to access data.
    • Protect model weights by hardening access interfaces, strengthening hardware protections, and isolating weight storage (a file-permission sketch follows this list).
  • Secure AI operation and maintenance - AI systems must be deployed following organization-approved processes to ensure cohesion and a unified control environment. In particular, organizations must:
    • Enforce strict access controls to prevent unauthorized access or tampering with the AI systems.
    • Ensure user awareness and training on security best practices to promote a security-aware culture and minimize the risk of human error.
    • Conduct external audits and penetration testing to identify vulnerabilities and weaknesses.
    • Implement robust logging and monitoring to detect abnormal behavior or potential security incidents, and establish alert systems that notify administrators so AI systems remain safeguarded (a minimal monitoring sketch follows this list).
    • Implement regular updates and patches prior to releasing new or updated versions of models.
    • Prepare for high availability and disaster recovery using immutable backup storage systems, so that system data, especially log data, cannot be altered.
    • Plan secure-delete capabilities for key data and model components at the end of any process in which such data and components have been exposed or accessible.
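
The sketches below illustrate a few of these recommendations in code; they are minimal illustrations under stated assumptions, not part of the agencies' guidance. First, the multifactor authentication point: a time-based one-time password (TOTP) check per RFC 6238, the scheme behind most authenticator apps. The guidance does not prescribe a mechanism; the function names and base32 secret handling are illustrative.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking digits via timing.
    return hmac.compare_digest(totp(secret_b32), submitted)
```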
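Artifact validation can be as simple as refusing to load model files whose cryptographic hash does not match a value published by the developer over a trusted channel. A minimal sketch; verify_artifact is an illustrative name, and the expected digest is assumed to come from the supplier:

```python
import hashlib
import hmac
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large weight files never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to proceed if the artifact does not match its published checksum."""
    if not hmac.compare_digest(sha256_of(path), expected_sha256.lower()):
        raise RuntimeError(f"Checksum mismatch for {path}; refusing to load")
```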
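For API hardening, one common pattern is to reject unauthenticated requests and bound input shape and size before anything reaches the model. The sketch below uses FastAPI and Pydantic as an example stack; the /predict route, header name, environment variable, and length limit are assumptions, not prescribed by the guidance:

```python
import hmac
import os

from fastapi import Depends, FastAPI, Header, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()
API_KEY = os.environ.get("INFERENCE_API_KEY", "")  # provisioned out of band

def require_api_key(x_api_key: str = Header(default="")) -> None:
    # Constant-time comparison avoids leaking key prefixes via timing.
    if not API_KEY or not hmac.compare_digest(x_api_key, API_KEY):
        raise HTTPException(status_code=401, detail="invalid or missing API key")

class PredictRequest(BaseModel):
    # Bound the input so oversized or empty payloads never reach the model.
    prompt: str = Field(min_length=1, max_length=4096)

@app.post("/predict", dependencies=[Depends(require_api_key)])
def predict(req: PredictRequest) -> dict:
    # Placeholder for the actual model call.
    return {"output": f"(model output for {len(req.prompt)} chars of input)"}
```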
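Weight-storage hardening is largely an architecture and hardware matter, but one small, codeable piece is restricting filesystem access to weight files. A stdlib-only sketch, assuming a POSIX host:

```python
import os
import stat
from pathlib import Path

def isolate_weights(weights: Path) -> None:
    """Make a weight file owner-read-only (0o400): no write access, no
    group/world access. Hardware protections and storage isolation from
    the guidance are out of scope for a code sketch."""
    os.chmod(weights, stat.S_IRUSR)
```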
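Finally, the logging-and-alerting recommendation can start as small as counting security-relevant events in a sliding window and escalating when a threshold is crossed. A stdlib-only sketch; the threshold and window are arbitrary illustrative values, and a real deployment would page an administrator rather than log:

```python
import logging
import time
from collections import deque

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("ai-deployment")

class AuthFailureMonitor:
    """Escalate when failed-auth events exceed a threshold within a time window."""

    def __init__(self, threshold: int = 5, window_seconds: int = 60):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()  # timestamps of recent failures

    def record_failure(self, source: str) -> None:
        now = time.monotonic()
        self.events.append(now)
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        logger.warning("auth failure from %s", source)
        if len(self.events) >= self.threshold:
            logger.critical("ALERT: %d auth failures within %ds",
                            len(self.events), self.window)
```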

Who does the guidance on Deploying AI Systems Securely apply to?

While the joint guidelines are voluntary, CISA encourages all institutions that deploy or use externally developed AI systems to adapt and apply them as necessary.

However, the guidance on Deploying AI Systems Securely is not applicable to organizations that do not deploy AI systems themselves and instead leverage AI systems deployed by others.

Prioritize Compliance

The joint statement highlights the growing importance that governments are placing on making AI systems safer, as well as the push for international cooperation to build trust in AI. It comes on the heels of a statement released by US federal agencies on the use of automated systems, which reinforced the applicability of existing laws to automated systems and the importance of developing such systems in accordance with those laws.

Compliance is vital to uphold trust and innovate with AI safely. To find out how Holistic AI can help you get your algorithms legally compliant, get in touch at we@holisticai.com.

DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion, nor is it a do-it-yourself guide to resolving legal issues or handling litigation. It is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
