
What is AI Transparency?

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Feb 6, 2023

Artificial intelligence (AI) is being adopted and integrated into our daily lives, from voice recognition tools to recommendations on streaming services, and is increasingly being applied in high-stakes contexts such as recruitment and insurance. As the use of these technologies proliferates, it is important that there is transparency around the data the systems use to generate outputs, and that the decisions made are explainable and their implications communicated to relevant stakeholders. In this blog post, we examine what is meant by AI transparency and explainable AI and outline how they can be implemented through both technical and governance approaches.

The definition of AI transparency

Artificial intelligence (AI) is a broad term that describes algorithmic systems that are programmed to achieve human-defined objectives. The outputs of these systems can include content such as images or text, predictions, recommendations, or decisions, and they can be used to support or replace human decision-making and activities. Many of these systems are known as “black box” systems, where the internal workings of the model are either not known by the user or are not interpretable by humans. In such a case, the model can be said to lack transparency.

AI transparency is an umbrella term that encompasses concepts such as explainable AI (XAI) and interpretability and is a key concern within the field of AI ethics (and other related fields such as trustworthy AI and responsible AI). Broadly, it comprises three levels:

  • Explainability of the technical components – how explainable the internal mechanics of the algorithm are
  • Governance of the system – whether there are appropriate and adequate processes and documentation of key decisions
  • Transparency of impact – whether the capabilities and purpose of the algorithms are openly and clearly communicated to relevant stakeholders

Explainability of the technical components

Explainability of the technical components of a system refers to being able to explain what is happening within an AI system, and is based on four types of explanations: model-specific, model-agnostic, global, and local.

  • Model-specific explainability – the model has explainability built into its design and development
  • Model-agnostic explainability – a mathematical technique is applied to the outputs of any algorithm to interpret the model’s decision drivers
  • Global-level explainability – understanding the algorithm’s behavior at a high, dataset-wide, or population level, something typically done by the researchers and designers of the algorithm
  • Local-level explainability – understanding the algorithm’s behavior at a low, subset, or individual level, typically relevant to those affected by the algorithm’s outputs
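
To make these distinctions concrete, here is a minimal sketch that trains a logistic regression — a model with model-specific explainability, since its coefficients are directly interpretable — and contrasts a global view of feature importance with a local explanation for a single prediction. The dataset and feature names are illustrative assumptions, not examples from the article.

```python
# Minimal sketch: global vs. local explainability for an interpretable
# (model-specific) classifier. Dataset and feature names are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "income", "tenure", "score"]  # hypothetical labels

model = LogisticRegression().fit(X, y)

# Global explanation: how each feature drives behavior across the dataset.
# Permutation importance is a model-agnostic technique, so it would work
# identically on a black-box model.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, global_imp.importances_mean):
    print(f"global importance of {name}: {imp:.3f}")

# Local explanation: why the model scored one individual the way it did.
# For a linear model, each feature's contribution to the log-odds is
# simply coefficient * feature value.
x = X[0]
contributions = model.coef_[0] * x
for name, c in zip(feature_names, contributions):
    print(f"local contribution of {name}: {c:+.3f}")
```

Note that the same permutation-importance call could be applied to an opaque model such as a deep neural network, which is what makes it model-agnostic, whereas the coefficient-based local explanation only works because the model is linear.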

Governance of the system

The second level of transparency, governance, includes establishing and implementing protocols for documenting decisions made about a system, from the early stages of development through deployment and any subsequent updates.
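
What such documentation might look like in practice is sketched below: a simple append-only decision log, where each entry records what was decided, why, and who is accountable. The record fields and storage format are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a decision log for AI governance. Field names and the
# JSON-lines storage format are illustrative assumptions.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system: str     # which AI system the decision concerns
    stage: str      # e.g. "development", "deployment", "update"
    decision: str   # what was decided
    rationale: str  # why it was decided
    owner: str      # accountable party
    timestamp: str

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the decision as one JSON line, building an auditable trail."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    system="resume-screening-v2",
    stage="update",
    decision="Retrained model on Q1 applicant data",
    rationale="Drift detected in feature distributions",
    owner="ML Platform Team",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```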

Governance can also include establishing accountability for the outputs of a system and including this within any relevant contracts or documentation. For example, contracts should specify whether liability for any harm or losses sits with the supplier or vendor of a system, the entity deploying a system, or the specific designers and developers of the system. Not only does this encourage greater due diligence if a particular party can be held liable for a system, but it can also be used for insurance purposes and to recover any losses that result from the deployment or use of the system.

Outside of documentation and accountability, governance of a system can also refer to the regulation and legislation that govern the use of the system and internal policies within organizations in terms of the creation, procurement, and use of AI systems.

Transparency of impact

The third level of transparency concerns communicating the capabilities and purpose of an AI system to relevant stakeholders, both those directly and indirectly affected. Communications should be issued in a timely manner and should be clear, accurate, and conspicuous.

To make the impact of systems more transparent, information about the types of data points an algorithm uses and the sources of that data should be communicated to those affected. Communications should also indicate to users that they are interacting with an AI system, what form the outputs of the system take, and how the outputs will be used. Particularly when a system is found to be biased against particular groups, information should also be communicated about how the system performs across demographic categories and whether particular groups might experience negative outcomes when interacting with the system.
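
One way to operationalize this is a structured transparency notice that is rendered for end users. The sketch below is a minimal illustration of that idea; all field names and values are hypothetical examples, not a standard format.

```python
# Minimal sketch of a machine-readable transparency notice rendered for
# end users. All field names and values are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class TransparencyNotice:
    system_name: str
    purpose: str
    data_points: list[str]
    data_sources: list[str]
    output_type: str
    output_use: str
    known_limitations: list[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [
            f"You are interacting with an AI system: {self.system_name}.",
            f"Purpose: {self.purpose}",
            f"Data used: {', '.join(self.data_points)}",
            f"Data sources: {', '.join(self.data_sources)}",
            f"Output: {self.output_type}, used to {self.output_use}.",
        ]
        if self.known_limitations:
            lines.append("Known limitations: " + "; ".join(self.known_limitations))
        return "\n".join(lines)

notice = TransparencyNotice(
    system_name="CreditLimitAdvisor",
    purpose="recommend an initial credit limit",
    data_points=["credit score", "income", "existing debt"],
    data_sources=["credit bureau report", "application form"],
    output_type="a recommended limit",
    output_use="support (not replace) a human underwriter's decision",
    known_limitations=["lower accuracy for applicants with thin credit files"],
)
print(notice.render())
```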

Why do we need AI transparency?

A major motivation for AI transparency and explainability is that they can build trust in AI systems, giving users and other stakeholders greater confidence that a system is being used appropriately. Knowing the decisions a system makes and how it makes them can also give individuals more agency, allowing them to give informed consent when interacting with a system.

Transparency can also have several business benefits. First, by cataloguing all of the systems being used across a business, steps can be taken to ensure that algorithms are deployed efficiently and that simple processes are not overcomplicated by using complex algorithms for minor tasks.
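
As an illustration of what such a catalogue might enable, the sketch below keeps a toy inventory of deployed systems and flags heavyweight models applied to simple tasks. The system names, risk labels, and complexity heuristic are invented for illustration.

```python
# Minimal sketch of an AI system inventory used to spot over-engineered
# deployments. Names, labels, and the heuristic are illustrative.
inventory = [
    {"name": "chatbot-faq", "task_complexity": "low", "model": "large language model"},
    {"name": "fraud-detect", "task_complexity": "high", "model": "gradient boosting"},
    {"name": "date-parser", "task_complexity": "low", "model": "deep transformer"},
]

COMPLEX_MODELS = {"large language model", "deep transformer"}

# Flag simple tasks served by heavyweight models as candidates for review.
for system in inventory:
    if system["task_complexity"] == "low" and system["model"] in COMPLEX_MODELS:
        print(f"Review {system['name']}: complex model on a simple task")
```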

Second, if legal action is brought against an organization, transparency in its AI systems facilitates a clear explanation of how a system works and why it may have come to certain decisions. This can help to absolve organizations from accusations of negligence or malicious intent arising from the negative application of an automated system, resolve the issue quickly, and ensure that appropriate action can be taken when necessary. An applied example is the action brought against Apple over the Apple Card, which reportedly gave a man a much higher credit limit than his wife despite her having a higher credit score. Goldman Sachs, the provider of the card, was able to justify why the model came to the decision it did and was cleared of wrongdoing, highlighting the importance of explainable AI.

Ultimately, the overarching goal of AI transparency is to establish an ecosystem of trust around the use of AI, particularly among citizens and users of systems, and especially in communities at the greatest risk of harm from AI systems.

Get in touch with us at we@holisticai.com to find out how we can help you increase transparency and build trust in your AI systems.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
