One of the challenges associated with using artificial intelligence (AI) models is the trade-off between a model's accuracy and its intrinsic ability to explain the results it generates (explainability). This characteristic has led these models to be considered black boxes, meaning that the way the model generates its results cannot be explained in a way that is easily understandable by humans. The figure below illustrates this trade-off: the higher the accuracy of the results, the less we know about how to explain those results in human-understandable terms.
In general terms, an explainable algorithm is one whose decisions can be questioned and explained in a way that makes sense to humans. This property is called explainability, and it is generally understood as the ability to explain the decisions of an AI model so that its results are humanly understandable.
As AI becomes more prevalent across industries and social areas, it is crucial that all stakeholders are equipped to comprehend and articulate the outcomes produced by AI models. This understanding must be clear and transparent across different dimensions to ensure that the results generated are ethical, unbiased, and trustworthy. Accordingly, the field of explainable AI has gained increasing importance in recent years and is emerging as a research trend for the development of new solutions.
This article presents the practical and societal dimensions in which Explainable AI can make AI models more transparent and trustworthy. The two dimensions intersect, but the logic presented in the figure below is that one set of actors creates AI systems, and these systems impact another set of actors. In this sense, the practical dimension of Explainable AI relates to those who create AI systems and need to make these systems more transparent, while the societal dimension relates to the agents who are impacted by AI systems and must act to make these systems transparent, for example through government regulation.
The practical dimension refers to the development of tools (methods, metrics, models, etc.) that help different stakeholders understand what AI models take into account when generating results. The main actors in this dimension are the professionals responsible for applying the models, such as data scientists and researchers.
Engineers, researchers, and data scientists must have the expertise to implement explainable solutions in their models. These solutions enable them to understand how the AI algorithms work, what data they use, and how the models arrive at their predictions. By implementing such solutions, engineers and data scientists can also detect and rectify potential biases in the data or the algorithms. Some common explainable AI techniques are LIME, SHAP, permutation feature importance, and partial dependence plots. For example, we can use permutation feature importance or SHAP to determine which variable has the greatest impact on the predicted price of a house (number of bedrooms, location, size, etc.), as in the sketch below.
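As a concrete illustration, the snippet below computes permutation feature importance for a house-pricing model with scikit-learn. It is a minimal sketch: the features (bedrooms, location, size) and the synthetic price formula are assumptions made up for the example, not a real dataset.

```python
# Minimal sketch: permutation feature importance for a hypothetical
# house-pricing model (synthetic data, illustrative feature names).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical features: number of bedrooms, a location score, size in m2.
bedrooms = rng.integers(1, 6, size=n)
location = rng.uniform(0, 10, size=n)
size_m2 = rng.uniform(40, 250, size=n)
X = np.column_stack([bedrooms, location, size_m2])

# Synthetic price: size dominates, location matters, bedrooms less so.
y = 2000 * size_m2 + 15000 * location + 5000 * bedrooms + rng.normal(0, 10000, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure how much the model's score drops as a result.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in zip(["bedrooms", "location", "size_m2"], result.importances_mean):
    print(f"{name}: {mean:.3f}")
```

Because importance is measured as a drop in held-out score, features whose values can be shuffled without hurting predictions receive low importance, which is exactly the kind of insight a data scientist can report back to stakeholders.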
End-users, who are often the recipients of AI-generated results, must be able to comprehend and analyse the outcomes in order to make informed decisions. For instance, individuals applying for a loan must be able to understand how the AI model arrived at their credit score, which can determine whether the loan is approved or rejected; a per-applicant explanation of this kind is sketched below. These and other examples show that AI outcomes need to be clear and comprehensible to humans.
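A local explanation method such as SHAP can attribute one individual decision to its input features. The sketch below is illustrative only: the model, the data, and the feature names (income, debt ratio, payment history) are hypothetical assumptions rather than a real credit-scoring system, and it assumes the `shap` package is installed.

```python
# Hedged sketch: a per-applicant SHAP explanation for a hypothetical
# credit model (synthetic data, illustrative feature names).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["income", "debt_ratio", "payment_history"]  # hypothetical
X = rng.uniform(0, 1, size=(300, 3))
# Synthetic label: income and payment history help, debt ratio hurts.
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 300) > 0.3).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes one applicant's score to each input feature,
# i.e. how much each value pushed the decision toward approval/rejection.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])
for name, value in zip(feature_names, np.ravel(contributions)):
    print(f"{name}: {value:+.3f}")
```

Signed contributions like these are what make it possible to tell an applicant, in plain terms, which factors helped or hurt their score.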
Providing explanations to all relevant stakeholders, including individuals directly affected by a decision as well as regulatory bodies, auditors, and other oversight groups, is a crucial safeguard. It helps ensure that the decisions of AI systems are fair and ethical.
AI-generated outcomes must be justified and explainable, especially when they have a significant impact on society at large. Similarly, companies using AI must be able to explain to their customers how the AI model arrived at a particular decision, such as a product recommendation, a recruitment outcome, or a credit limit. This is important both to increase consumer/user confidence and to meet regulatory demands. For example, in a recent regulatory document, the Dutch government calls for more transparency about the AI systems deployed in the public sector.
There is a broader societal dimension to the use of explainable AI, which relates to the larger cultural implications of relying on AI systems to make decisions. As AI systems become more advanced and more integrated into our lives, like ChatGPT and other large language models, it is important to consider what role they will play in shaping our society and culture. This involves considering questions about accountability, fairness, and the influence of these systems on cultural norms and values.
Overall, the societal dimension is a crucial aspect of the development and use of explainable AI. By considering the ethical, social, and cultural implications of AI systems, we can work to ensure that they are developed and deployed in ways that are fair, transparent, and beneficial for society.
As the use of AI continues to expand, it is essential that stakeholders are empowered to understand and interpret the results generated by AI models. By fostering transparency and explainability at all levels, we can ensure that AI is used in a responsible, ethical, and unbiased manner, promoting public trust and confidence in these technologies.
In this article, we present two dimensions in which Explainable AI plays an important role. The practical dimension describes how researchers and data scientists need to pay attention to the construction of more transparent systems through methods, metrics, and models of Explainable AI. On the other hand, the use of these models also needs to be clear to end-users in society. Given that AI-based models and systems are becoming increasingly prevalent, it is crucial to consider their potential impacts on people's daily lives. In order to ensure that these impacts are positive, it is important to prioritize transparency and make sure that the inner workings of these models and systems are as clear and understandable as possible.
Finally, explainability is a critical aspect of AI risk management, as it enables auditors to understand how AI models arrive at their decisions and whether these decisions are ethical, transparent, and free from bias.