
Practical and Societal Dimensions of Explainable AI

Authored by Kleyton da Costa, Machine Learning Researcher at Holistic AI
Published on Mar 2, 2023

One of the challenges of using artificial intelligence (AI) models is the trade-off between a model's accuracy and its intrinsic ability to explain the results it generates (explainability). This characteristic has led such models to be described as black boxes: the way the model produces its results cannot be explained in terms that are easily understandable by humans. The figure below illustrates this trade-off: the higher the accuracy of the results, the less we know about how to explain them in human-understandable terms.

Figure 1: Trade-off between explainability and accuracy

In general terms, an explainable algorithm is one where the reasons for a decision can be questioned and explained in a way that makes sense to humans. This process is called explainability: it is generally accepted that explainability is the ability to explain the decisions of an AI model, making the results humanly understandable.

As AI becomes more prevalent across industries and social areas, it is crucial that all stakeholders are equipped to comprehend and articulate the outcomes produced by AI models. This understanding must be clear and transparent across different dimensions to ensure that the results generated are ethical, unbiased, and trustworthy. Accordingly, the field of explainable AI has gained increasing importance in recent years and is emerging as a research trend for the development of new solutions.

This article presents the practical and societal dimensions in which Explainable AI can make AI models more transparent and secure. The two dimensions intersect, but the logic presented in the figure below is that one set of actors creates AI systems, and these systems impact another set of actors. In this sense, the practical dimension of Explainable AI relates to those who create AI systems and need to make them more transparent, while the societal dimension relates to the agents who are impacted by AI systems and must act to make them transparent, for example through government regulation.

Figure 2: Relationship between practical and societal dimensions in explainable AI

Practical dimension: The importance of explainable solutions

The practical dimension refers to the development of tools (methods, metrics, models, etc.) that help different stakeholders understand what AI models take into account when generating results. The main actors in this dimension are the professionals responsible for applying the models, such as data scientists and researchers.

Engineers, researchers, and data scientists must have the expertise to implement explainable solutions in their models. These solutions enable them to understand how the AI algorithms work, what data they use, and how the models arrive at their predictions. By implementing such solutions, engineers and data scientists can also detect and rectify potential biases in the data or the algorithms. Some common explainable AI solutions are LIME, SHAP, permutation feature importance, and partial dependence plots. For example, we can use permutation feature importance or SHAP to determine which variable has the greatest impact on the pricing of a house (number of bedrooms, location, size, etc.), as sketched below.
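
To make the house-pricing example concrete, here is a minimal sketch of permutation feature importance using scikit-learn. The dataset is synthetic and the feature names (bedrooms, size, location score) and the pricing rule are illustrative assumptions, not a real dataset.

```python
# Minimal sketch: permutation feature importance for a house-pricing model.
# The synthetic data and feature names below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
bedrooms = rng.integers(1, 6, n)
size_sqm = rng.normal(120, 30, n)
location_score = rng.uniform(0, 10, n)

# Synthetic price: depends mostly on size, then location, then bedrooms.
price = (2_000 * size_sqm + 15_000 * location_score
         + 5_000 * bedrooms + rng.normal(0, 10_000, n))

X = np.column_stack([bedrooms, size_sqm, location_score])
X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in R^2;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean in zip(["bedrooms", "size_sqm", "location_score"],
                      result.importances_mean):
    print(f"{name}: {mean:.3f}")
```

Because the synthetic price depends mostly on size, the printed scores should rank size_sqm highest; on real data, a practitioner would read the output the same way to see which variables drive the model's predictions.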

End-users, who are often the recipients of AI-generated results, must be able to comprehend and analyse the outcomes to make informed decisions. For instance, individuals applying for a loan must be able to understand how the AI model arrived at their credit score, which can determine whether the loan is approved or rejected. These and other examples show that AI outcomes need to be clear and comprehensible to humans; a local, per-individual explanation, such as the one sketched below, can serve this purpose.
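
For illustration, the sketch below uses SHAP to produce a local explanation for a single synthetic loan applicant. The tree-based model, the feature names (income, debt ratio, credit history), and the approval rule are assumptions made for the example, and it assumes the `shap` package is installed.

```python
# Minimal sketch: a local (per-applicant) explanation with SHAP.
# The credit model, feature names, and approval rule below are
# illustrative assumptions, not a real scoring system.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(50_000, 15_000, n),  # annual income
    rng.uniform(0, 1, n),           # debt-to-income ratio
    rng.integers(0, 30, n),         # years of credit history
])
# Synthetic approval label, purely for illustration.
y = ((X[:, 0] > 45_000) & (X[:, 1] < 0.5)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes the model's output (in log-odds) for one
# applicant to each feature, showing which factors pushed the decision
# towards approval or rejection.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
for name, value in zip(["income", "debt_ratio", "history_years"],
                       shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

A positive value means the feature pushed this applicant's score towards approval, a negative value towards rejection; presented in plain language, this is the kind of explanation a loan applicant could actually act on.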

Societal dimension: From consumer confidence to regulatory demands

Providing explanations to all relevant stakeholders, including individuals directly affected by a decision as well as regulatory bodies, auditors, and other oversight groups, is a crucial safeguard. It helps ensure that the decisions made by AI systems are fair and ethical.

AI-generated outcomes must be justified and explainable, especially where they have a significant impact on society at large. Similarly, companies using AI must be able to explain to their customers how the AI model arrived at a particular decision, such as a product recommendation, a recruitment outcome, or a credit limit. This is important both to increase consumer and user confidence and to meet regulatory demands. For example, in a recent regulatory document, the Dutch government calls for more transparency about the AI systems deployed in the public sector.

There is a broader societal dimension to the use of explainable AI, which relates to the larger cultural implications of relying on AI systems to make decisions. As AI systems become more advanced and more integrated into our lives, like ChatGPT and other large language models, it is important to consider what role they will play in shaping our society and culture. This involves considering questions such as:

  • Who has control over the development and deployment of AI systems?
  • How will decisions made by AI systems be governed and regulated?
  • What impact will the use of AI systems have on our overall social and cultural values?

Overall, the societal dimension is a crucial aspect of the development and use of explainable AI. By considering the ethical, social, and cultural implications of AI systems, we can work to ensure that they are developed and deployed in ways that are fair, transparent, and beneficial for society.

Conclusion

As the use of AI continues to expand, it is essential that stakeholders are empowered to understand and interpret the results generated by AI models. By fostering transparency and explainability at all levels, we can ensure that AI is used in a responsible, ethical, and unbiased manner, promoting public trust and confidence in these technologies.

  • Artificial intelligence models need to be increasingly transparent.
  • Companies need to comply with regulatory issues surrounding AI.
  • Creators of AI systems need to make their models more transparent and understandable for those impacted by the results.
  • Creators of AI systems can also be impacted by these systems. For example, governments can act as creators by using AI models for facial recognition of citizens and can be impacted by misinformation generated by AI models (such as deepfakes or information manipulation).

In this article, we have presented two dimensions in which Explainable AI plays an important role. The practical dimension describes how researchers and data scientists need to pay attention to the construction of more transparent systems through the methods, metrics, and models of Explainable AI. At the same time, the use of these models also needs to be clear to end-users in society. Given that AI-based models and systems are becoming increasingly prevalent, it is crucial to consider their potential impacts on people's daily lives. To ensure that these impacts are positive, it is important to prioritize transparency and make the inner workings of these models and systems as clear and understandable as possible.

Finally, explainability is a critical aspect of AI risk management, as it enables auditors to understand how AI models arrive at their decisions and whether these decisions are ethical, transparent, and free from bias.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
