Australia’s Interim Response on Safe and Responsible AI

January 31, 2024
Authored by
Hande Yuksel Sen
Legal Research Intern at Holistic AI

Australia has seen a range of AI-related guidance documents and lawsuits, including a $1.7 billion class action settlement paid by the Australian government. The Department of Industry, Innovation, and Science published a discussion paper on Australia’s AI Ethics Framework in 2019 and an AI Action Plan in 2020. More recently, on 17 January 2024, the Australian Government published an interim response addressing concerns and recommendations related to the regulation of artificial intelligence (AI) in the country. The response signals an intent to keep pace with the opportunities and challenges associated with AI technologies.

High-Risk AI System Focus in Australia

The interim response is a strategic move by the Australian Government to bridge regulatory gaps in the rapidly evolving landscape of AI. While recognizing the positive impact of many low-risk AI applications, the government emphasizes the need for a robust regulatory framework to address potential harms caused by high-risk AI systems (such as those detailed in the EU AI Act), which operate at unprecedented speed and scale. The response defines ‘high risk’ as ‘systemic, irreversible or perpetual’. Examples include ‘the use of AI-enabled robots for medical surgery’ and ‘the use of AI in self-driving cars to make real-time decisions’.

The response also notes that the EU’s Artificial Intelligence Act adopts a list-based approach, identifying specific AI uses that are deemed high-risk because of their effects on the safety and rights of individuals. These use cases include critical infrastructure, medical devices, systems used in education and recruitment, law enforcement tools, border control, the administration of justice, biometric identification, and emotion recognition. It further notes that in Canada’s proposed AI legislation, the definition of a 'high-impact system' can be prescribed by regulation, with accompanying documents outlining principles for assessing whether an AI system qualifies, based on its potential to harm individuals’ safety or rights.

Consultations and Stakeholder Involvement

The government consulted on its Safe and responsible AI in Australia discussion paper from 1 June to 4 August 2023, seeking input from diverse stakeholders, including the public, advocacy groups, academia, industry, legal firms, and government agencies. The consultation comprised three Ministerial roundtables with 64 participants, one virtual town hall with 345 participants, four in-person roundtables with 59 participants, four virtual roundtables with 81 participants, and 510 online submissions from the public.

The submissions expressed excitement about AI’s potential benefits in areas like healthcare, education, and productivity; however, they also raised concerns about potential harms across the AI lifecycle. Examples include breaches of intellectual property laws during data collection, biases affecting model outputs, environmental impacts during training, and competition issues affecting consumers. Model outputs may lead to individual harms such as discrimination, as well as systemic risks that undermine political and social cohesion. These potential harms have led to a consensus that regulatory guardrails are essential, especially for high-risk AI applications.

Key Principles Guiding Australia’s Interim Response

The interim response is guided by several key principles:

  1. ‘Risk-Based Approach’: The government acknowledges the need for a risk-based framework to support the safe use of AI, tailoring obligations based on the level of risk posed by AI applications.
  2. ‘Balanced and Proportionate’ Regulation: Emphasizing the avoidance of unnecessary burdens, the government aims to strike a balance between fostering innovation and protecting community interests, including privacy and security.
  3. ‘Collaborative and Transparent’ Engagement: The Australian Government commits to open engagement, collaborating with experts, industry, academia, and the public to ensure a transparent and inclusive regulatory approach.
  4. ‘Trusted International Partnership’: Australia aligns with the Bletchley Declaration, underlining its commitment to international collaboration to address AI risks, particularly in the realm of frontier AI.
  5. Community-Centric Approach: Placing people and communities at the core, the government aims to ensure that AI is developed and deployed considering the needs, abilities, and social context of all individuals.

What Measures Were Proposed By Australia’s Interim Response?

Building on the five principles above and insights from the consultation, the Australian Government set out several specific measures that it proposes to take in its interim response:

  • Mandatory Guardrails: The government is considering introducing mandatory obligations on those developing or using AI systems in high-risk settings to ensure their safety. These obligations could include testing products before and after release, labeling of AI systems in use or watermarking of AI-generated content, and training for developers and deployers.
  • Testing, Transparency, and Accountability: Proposals include internal and external testing of AI systems, sharing best practices for safety, and ongoing auditing. Transparency initiatives involve informing users when AI is used, public reporting on AI system limitations, and accountability measures.
  • AI Safety Standard: Collaboration with industry aims to develop a best-practice and up-to-date voluntary AI risk-based safety framework, providing a practical toolkit for responsible AI adoption.
  • Temporary Expert Advisory Group: The establishment of an interim expert advisory group demonstrates the government's commitment to informed decision-making regarding AI regulations.

Australia’s AI Guidance Leans into International Collaboration

The Australian Government is actively participating in global AI initiatives, including the Bletchley Declaration and collaborations with international partners. It is closely monitoring global developments, such as the EU's Artificial Intelligence Act, the US executive order on AI, and Canada's voluntary AI code of conduct.

Australia’s Focus on Maximizing Benefits From AI

By allocating $75.7 million (AUD) in funding for AI initiatives, the government aims to support the adoption and development of AI technologies. This includes:

  • $17 million for the AI Adopt program, which will establish new centers to help small and medium-sized enterprises (SMEs) make informed decisions about using AI to improve their businesses;
  • $21.6 million to expand the National AI Centre’s research and leadership role; and
  • $34.5 million in ongoing funding for the AI and Emerging Technologies Graduates programs, which aim to attract and train the next generation of job-ready AI specialists.

The response notes that private investment in AI in Australia reached $1.9 billion in 2022, bringing total investment since 2013 to $4.4 billion. The Australian Government will explore opportunities, including an AI Investment Plan, to further support AI adoption while ensuring the safeguards necessary for trust and confidence.

Preparing for Trustworthy AI in Australia

In conclusion, the Australian Government’s interim response on safe and responsible AI reflects a balanced and forward-looking approach. The emphasis on collaboration, transparency, and international cooperation underscores Australia’s commitment to addressing the challenges posed by AI while maximizing its potential benefits. The interim response serves as a crucial step towards establishing a comprehensive regulatory framework for the evolving AI landscape in Australia.

Request a demo to explore how the world’s first AI readiness and governance platform can support your initiatives and help you adopt AI at scale safely.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any specific situation.
