Ethical AI Down Under: Australia’s AI Framework and Action Plan

December 8, 2023
Authored by
Airlie Hilliard
Senior Researcher at Holistic AI

Policymakers around the world are taking markedly different approaches to promoting responsible AI. The EU has made significant progress with its trio of laws targeting AI and algorithms – the EU AI Act, Digital Services Act, and Digital Markets Act – while the US is introducing laws at the state, federal, and local levels, particularly targeting HR Tech and insurtech, and the UK has taken a comparatively light-touch approach through white papers. China has also passed multiple laws regulating AI, with a particular focus on generative AI, and Brazil has so far introduced four laws, although none of these has yet progressed out of Congress. The ecosystem in Australia, on the other hand, is less mature, with the Australian Government having published only two resources that contribute to the regulatory ecosystem.

Australia’s AI Ethics Framework Discussion Paper

Published by the Australian Government’s Department of Industry, Innovation, and Science in 2019, the discussion paper on Australia’s AI Ethics Framework marked the opening of a public consultation on the eight proposed core principles needed for responsible AI in Australia. These are:

  1. Generates net benefits – The benefits of AI systems should outweigh the costs
  2. Do no harm – Civilian AI systems must not be designed to harm and should be implemented in ways that minimise negative outcomes
  3. Regulatory and legal compliance – AI systems must comply with all relevant local, state, and federal laws, regulations, and obligations
  4. Privacy protection – AI systems should ensure that private data is protected and data breaches are prevented
  5. Fairness – AI systems must not result in unfair discrimination and training data should be free from bias
  6. Transparency and explainability – Users must be informed when an algorithm is being used and how it makes decisions
  7. Contestability – There must be an efficient process for people affected by an algorithm to challenge its use or output
  8. Accountability – Those responsible for the development and deployment of AI should be identifiable and accountable for any impacts

Posing eight questions about the proposed principles, tools required for responsible AI adoption, and the existence of best practices, the discussion paper recognises the efforts from countries around the world that have published ethical guidance on AI, as well as efforts from entities such as Google and Microsoft.

The discussion paper also emphasises the importance of having a human in the loop to increase accountability and reduce harm. With its focus on preventing societal harm and promoting innovation to harness the social benefits of AI, the paper also discusses the need for society in the loop, where the end users of the technologies are adequately considered during the design and development processes to ensure that frameworks are actionable and effective when deployed in the real world.

Indeed, the discussion paper draws on a number of scandals and harms to illustrate the importance of ethical AI frameworks, including well-known cases such as Amazon’s scrapped recruitment tool and Northpointe’s COMPAS recidivism tool. As such, the discussion paper proposes a toolkit for preventing these risks, based on nine practices:

  1. Impact assessments should be used to determine the potential impacts of AI, including negative impacts on individuals, communities, and groups, in order to inform mitigations
  2. Internal or external reviews should be carried out to ensure that AI systems comply with ethical principles and Australian policies and laws
  3. Risk assessments to classify systems by the level of risk associated with their deployment or use
  4. Best practice guidelines to guide AI developers and users across industries
  5. Industry standards to support the implementation of ethical AI, including educational guides, training programs, and potentially certification
  6. Collaboration to promote and incentivise partnerships between industry and academia to support the development of ethical AI by design and promote diversity in AI development
  7. Mechanisms for monitoring and improvement of AI systems for accuracy, fairness, and sustainability
  8. Recourse mechanisms to support appeal processes when an algorithm has a negative impact
  9. Consultation with the public and specialists to ensure stakeholder views are represented

The Australian government has yet to codify these principles and tools into regulatory or legal requirements.

Australia’s AI Action Plan

Published in June 2021 and now archived, Australia’s AI Action Plan sets out the Australian Government’s vision to position Australia as a global leader in secure, trusted, and responsible AI. In particular, the action plan proposes a combination of new and existing initiatives to achieve this, including direct AI measures, programs and incentives to drive technological growth, and foundational policies to support businesses, innovation, and the economy. The plan envisions achieving this through four key focus areas:

  1. Developing and adopting AI to transform businesses in Australia through job creation and productivity increases
  2. Creating an environment to attract AI talent to ensure businesses have access to the required expertise
  3. Using cutting-edge AI to solve national challenges and ensure that all Australians have the opportunity to benefit from AI
  4. Making Australia a global leader in responsible and inclusive AI that reflects Australian values

For each of these focus areas, the plan outlines how the three types of initiative – direct AI measures, programs and incentives, and foundational policies – support these efforts.

Legal action against AI in Australia

With the AI Action Plan now archived, it is unlikely to lead to any AI-specific regulation or legislation; instead, it will feed into Australia’s Digital Economy Strategy. However, that is not to say that Australia is not taking responsible AI seriously, or that existing laws cannot be applied to AI.

Indeed, the Australian Government itself has been held accountable for the failure of its automated debt recovery tool, Robodebt. In September 2019, Melbourne-based law firm Gordon Legal filed a class action lawsuit on behalf of clients who had government-provided payments unjustifiably reduced or taken away after the tool falsely accused them of underreporting their income between July 2015 and November 2019. The class action represents approximately 648,000 group members against the Commonwealth.

A settlement for the class action was reached in September 2022, under which the Australian Government agreed to pay $112 million in compensation, including legal costs, to around 400,000 eligible individuals. It has also repaid more than $751 million to citizens affected by debt collection initiated by the tool, and agreed to drop repayment requests for $744 million in invalid debts that had been partially repaid and $258 million in invalid debts that had not been repaid at all. Overall, over $1.7 billion has been paid out to around 430,000 group members.

Prioritise responsible AI

Although AI-specific laws and regulations have not yet been proposed in all jurisdictions, an increasing number of lawsuits, harms, and scandals highlight the importance of responsible AI to avoid harm, minimise liability, and prevent reputational damage. Schedule a demo to find out how Holistic AI’s approach to AI Governance, Risk, and Compliance can help you embrace AI with confidence.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
