
Enhancing Transparency in AI: Explainability Metrics for Machine Learning Predictions

Authored by
Cristian Munoz
Machine Learning Researcher at Holistic AI
Kleyton da Costa
Machine Learning Researcher at Holistic AI
Franklin Cardenoso Fernandez
Researcher at Holistic AI
Bernardo Modenesi
Data Science & AI Research Fellow at Michigan Institute for Data Science (MIDAS)
Adriano Koshiyama
Co-founder & Co-CEO at Holistic AI
Published on
Feb 21, 2024
Artificial Intelligence (AI) has made significant strides in predictive performance, yet it often operates like a “black box” in which its inner workings and the rationale behind its outputs are a mystery.

Explainable Artificial Intelligence (XAI) encompasses techniques and strategies to remedy this. XAI pushes teams to focus not only on evaluating the performance of explainability methods but also on scrutinizing the explanations themselves. Ideally, this focus leads to more reliably performant and trustworthy systems.

“Metrics are critical in achieving transparent and trustworthy outcomes for machine learning models.”

In this guide, we’ll highlight important considerations in XAI, as well as present some novel and cutting-edge metrics we’ve developed in our own research. Read on to gain first-hand knowledge of how your team can approach XAI in a systematic and scientific way.

The Essence of XAI: Transparency and Understanding

XAI aims to unravel the mysteries of AI models by making their prediction mechanisms understandable to humans. Transparency of AI models is useful for a host of reasons, including:

  • Improving decision-making processes
  • Debugging unexpected behavior
  • Refining data mining processes
  • Ensuring fairness
  • Minimizing risk
  • Presenting predictions to stakeholders

The objectives of XAI, as outlined in a comprehensive study, encompass empowering individuals to make informed decisions, enhancing decision-making processes, identifying and addressing vulnerabilities, and fostering user confidence in AI systems. These goals align with the broader mission of making AI a trustworthy and accountable technology, and they also support wider organizational goals, including increased productivity and reduced risk.

The Challenge of Metrics in XAI

Metrics are critical in achieving transparent and trustworthy outcomes for machine learning models. However, in XAI, defining metrics is a complex task due to the absence of a ground truth for explainability.

One way to build scaffolding and work your way back to ground truth is to categorize metrics into subjective, objective, and computational groups, each requiring a different degree of human intervention.

These metrics provide a structured approach to evaluating explainability methods and contribute to a more nuanced understanding of model behavior.

Novel Computational Metrics for XAI

In our recent paper, “Evaluating Explainability for Machine Learning Predictions using Model-Agnostic Metrics”, we explore a set of novel metrics useful for explaining a variety of AI model types.

Unlike traditional studies focusing on method performance, this work places a spotlight on evaluating the explanations themselves. These metrics delve into crucial dimensions, including the concentration of feature importance, surrogate interpretation feasibility, variation in feature importance across categories, and the stability of feature importance across data or feature spectra.

The figure below summarizes the rationale behind our choice of metrics. Machine learning models generate outputs based on features, and we can attribute a feature importance value (an explanation) to each feature. The explainability metrics aim to answer core questions about each of these explanations.

Figure: Novel Computational Metrics for XAI

Below, we'll briefly introduce our explainability metrics. If you want to explore them yourself, you can jump into the open-source Holistic AI Library, a toolkit for trustworthy AI in machine learning.
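
A note on the short code sketches that follow: each metric is illustrated with a small, self-contained Python snippet that operates on a normalized feature-importance vector. The setup below is purely illustrative; it uses scikit-learn's permutation importance on a public dataset as one of many possible attribution methods, and it is not the Holistic AI Library's implementation.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# One possible attribution method; any feature-importance technique would do.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
importance = np.clip(result.importances_mean, 0, None)  # keep non-negative
importance = importance / importance.sum()              # normalize to sum to 1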

Feature importance spread

The Feature Importance Spread measures how concentrated the feature importance is. A high concentration makes a group of predictions easier to interpret, because fewer features need to be prioritized. The Feature Importance Spread Ratio ranges from 0 to 1.

Example of a specific model’s outputs

  • Question: How concentrated is the feature importance for a group of predictions?
  • Value: 0.95
  • Short Interpretation: A high Feature Importance Spread Ratio of 0.95 indicates that the feature importance is spread out across many features, making the group of predictions harder to interpret because a larger set of features must be considered.
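
To make the idea concrete, here is a minimal sketch of one way a spread-style score could be computed, using the normalized Shannon entropy of the importance vector. This is an illustrative choice, not necessarily the exact formula used in our paper or in the library.

import numpy as np

def feature_importance_spread(importance):
    # Normalized Shannon entropy of the importance vector: values near 1 mean
    # importance is spread evenly across features; values near 0 mean it is
    # concentrated in a few features.
    p = importance / importance.sum()
    p = p[p > 0]  # drop zeros to avoid log(0)
    entropy = -(p * np.log(p)).sum()
    return float(entropy / np.log(len(importance)))

spread_ratio = feature_importance_spread(importance)  # importance from the setup above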

Feature importance stability

The Feature Importance Stability measures how much the importance varies across the feature space. A low stability score indicates that the importance is similar throughout the input domain; a high score indicates contrasting importance, in which case global indicators may not be representative. We use two types of stability metrics: data stability and feature stability. Both range from 0 to 1.

Example of models’ outputs

  • Question: How heterogeneous is the importance of features across the input domain?
  • Comparison between models: the next figure compares the feature importance of three models trained on the same task: logistic regression (LR), random forest (RF), and gradient boosting (GB). In this experiment, RF has the least heterogeneous feature importance spread. Comparing GB and RF, we can see that even small changes in the importance distribution are captured by this metric.
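
Here is a minimal sketch of one way a data-stability score could be computed: split the data into chunks, compute a normalized importance vector on each, and measure how much the chunks disagree. The total-variation aggregation below is an assumption made for illustration, not the paper's exact formula.

import numpy as np
from sklearn.inspection import permutation_importance

def data_stability(model, X, y, n_chunks=5, random_state=0):
    # Compute a normalized importance vector per data chunk, then measure the
    # average distance of each chunk from the mean vector: 0 = identical
    # importance everywhere, values towards 1 = highly heterogeneous.
    rng = np.random.default_rng(random_state)
    chunks = np.array_split(rng.permutation(len(X)), n_chunks)
    vectors = []
    for idx in chunks:
        r = permutation_importance(model, X[idx], y[idx], n_repeats=3,
                                   random_state=random_state)
        v = np.clip(r.importances_mean, 0, None)
        vectors.append(v / (v.sum() + 1e-12))
    vectors = np.array(vectors)
    deviation = np.abs(vectors - vectors.mean(axis=0)).sum(axis=1)
    return float(0.5 * deviation.mean())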

Predictions Groups Contrast

The Feature Importance Contrast quantifies the disparity between the features used to explain a group of predictions and the average importance attributed across the whole model. Groups with a high disparity warrant closer analysis.

Example of a specific model’s outputs

  • Question: How much do the features explaining a group of predictions differ from the overall model average?
Figure: Predictions Groups Contrast
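
Below is a minimal sketch of one way a group-contrast score could be computed, comparing the importance vector of a group of predictions against the importance vector of the whole dataset. The total-variation distance used here is an illustrative choice, not necessarily the paper's definition.

import numpy as np
from sklearn.inspection import permutation_importance

def group_contrast(model, X, y, group_mask, random_state=0):
    # 0 = the group is explained by the same features as the overall model;
    # values towards 1 = the group relies on a very different set of features.
    def normalized_importance(Xs, ys):
        r = permutation_importance(model, Xs, ys, n_repeats=3,
                                   random_state=random_state)
        v = np.clip(r.importances_mean, 0, None)
        return v / (v.sum() + 1e-12)

    overall = normalized_importance(X, y)
    group = normalized_importance(X[group_mask], y[group_mask])
    return float(0.5 * np.abs(group - overall).sum())

# Example: contrast for the group of samples the model predicts as class 1.
contrast = group_contrast(model, X, y, model.predict(X) == 1)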

Alpha-Feature Importance

The Alpha-Feature Importance metric quantifies the minimum proportion of features required to represent alpha of the total importance. In other words, it finds the smallest set of features needed to account for at least alpha × 100% of the total explanation, based on the normalized feature importance vector.

Example of a specific model’s outputs

  • Question: What is the minimum proportion of features needed to explain a certain percentage (alpha) of the total importance?
Figure: Alpha-Feature Importance

In the plot above, the features above the red line represent 80% of the total importance. The scores reveal that GB concentrates the information crucial for classifying inputs, achieving good performance with fewer features than Logistic Regression (LR) or Random Forest (RF).
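
The core computation behind this metric is easy to sketch: find the smallest proportion of features whose combined (sorted) importance reaches alpha of the total. The snippet below is illustrative; the exact normalization in our paper may differ.

import numpy as np

def alpha_feature_importance(importance, alpha=0.8):
    # Smallest fraction of features whose sorted importances add up to at
    # least alpha of the total importance.
    p = np.sort(importance)[::-1]
    p = p / p.sum()
    cum = np.cumsum(p)
    k = min(int(np.searchsorted(cum, alpha)) + 1, len(p))
    return k / len(p)

# A result of 0.2 with alpha=0.8 means 20% of the features already carry 80%
# of the total importance.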

Explainability Ease Score

The Explainability Ease Score quantifies the average complexity of the curves that describe the dependence between the prediction and each input feature. A high score indicates a highly complex dependence between the variables.

Example of a specific model’s outputs

  • Question: How complex are the curves describing the relationship between predictions and input features?
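
Here is a minimal sketch of one way such a score could be computed, using partial dependence curves from scikit-learn and the mean absolute second difference as a simple measure of curve complexity. Both the curvature measure and the averaging over features are illustrative assumptions, not the paper's exact definition.

import numpy as np
from sklearn.inspection import partial_dependence

def explainability_ease(model, X):
    # For each feature, compute its partial dependence curve and measure how
    # non-linear it is (0 for straight lines, larger for curves with many
    # bends). The final score averages over features: higher = more complex.
    curvatures = []
    for j in range(X.shape[1]):
        pd_result = partial_dependence(model, X, features=[j], grid_resolution=20)
        curve = pd_result["average"].ravel()
        if curve.size < 3:
            continue  # not enough grid points to measure curvature
        scale = np.ptp(curve) + 1e-12  # rescale so features are comparable
        curvatures.append(np.abs(np.diff(curve, n=2)).mean() / scale)
    return float(np.mean(curvatures))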

Surrogacy Efficacy Score

The Surrogacy Efficacy Score quantifies how well the following hypothesis holds: the model’s predictions can be explained by simple rules.

Example of a specific model’s outputs

  • Question: How well can the model’s predictions be explained by simple rules?
  • Value: 0.92
  • Short Interpretation: A Surrogacy Efficacy Score of 0.92 indicates that 92% of the model’s predictions can be effectively explained by simple rules, suggesting a good level of interpretability and rule-based understanding.
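
A surrogate-based score like this one is straightforward to sketch: fit a shallow decision tree on the black-box model's own predictions and measure how often the tree reproduces them. The tree depth and the agreement measure below are illustrative assumptions, not necessarily the paper's exact setup.

from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

def surrogacy_efficacy(model, X, max_depth=3, random_state=0):
    # A score of 0.92 would mean 92% of the model's decisions can be
    # reproduced by the small set of rules encoded in a depth-3 tree.
    y_model = model.predict(X)  # black-box predictions, used as surrogate targets
    surrogate = DecisionTreeClassifier(max_depth=max_depth,
                                       random_state=random_state).fit(X, y_model)
    return float(accuracy_score(y_model, surrogate.predict(X)))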

Getting started with XAI Metrics using the Holistic AI Library

This post introduced a novel set of explainability metrics, focusing on both global and local feature importance. These metrics serve as a compass, navigating the terrain of model interpretability, and providing a concise summary of a model’s ease of explanation. This contrasts with the conventional approach of analyzing many graphical observations to decipher feature importance, offering a more streamlined, efficient, and scientific method.

It is crucial to acknowledge that the effectiveness of these metrics hinges on the nature of the features from which importance is derived. Recognizing this relative nature underscores the need for nuanced interpretation tailored to the intricacies of each model.

Interested in exploring explainable AI further? Reach out for a demo of our AI governance platform, read our paper on explainability metrics for AI, or start exploring the metrics on your own data using the Holistic AI Library.

