AI explainability and interpretability - we have a long way to go
Summary: AI explainability remains an important preoccupation - enough so to earn the shiny acronym of XAI. There are notable developments in AI explainability and interpretability to assess. How much progress have we made?
For AI to reach its potential and to be widely deployed in many fields, especially those that affect people, it needs to overcome the black box problem. Today, a hot area of research, called eXplainable AI (XAI), aims to enhance AI learning models with explainability, fairness, accountability, and transparency.
The premise is that once AI is thoroughly endowed with these features, it will lead to the kind of "Responsible AI" envisioned by the industry.
Most ML models aren't very smart. They are "trained" to recognize patterns and relationships, but their results can sometimes be bizarre. As training iterates, they attempt to relate various input parameters to an objective function ("Is it a cat or a dog?").
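To make that concrete, here is a minimal sketch - assuming scikit-learn and an invented two-feature "cat vs. dog" dataset - of a model being fit to an objective function, and of how an unusual input can produce a bizarre but confident answer:

```python
# A minimal, hypothetical sketch: a model "trained" to relate input
# parameters to an objective ("is it a cat or a dog?"). The features
# (weight, ear length) and the data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Label 0 = cat, label 1 = dog; features are [weight_kg, ear_length_cm].
X = np.vstack([rng.normal([4.0, 6.0], 1.0, size=(100, 2)),     # "cats"
               rng.normal([20.0, 10.0], 3.0, size=(100, 2))])  # "dogs"
y = np.array([0] * 100 + [1] * 100)

# Training iteratively adjusts weights to minimize the objective (log loss).
model = LogisticRegression().fit(X, y)

# An unusual input -- a very heavy cat -- can yield a confidently wrong answer.
print(model.predict([[15.0, 6.0]]))        # likely [1], i.e. "dog"
print(model.predict_proba([[15.0, 6.0]]))
```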
Emerging research in XAI (eXplainable AI)
Yes, complexities and adversarial perturbations in the data can lead to wrong predictions or decisions in a rules-based system as well. But in that case, it is possible to follow a trace of how the rules fired. There are too many calculations for that to be feasible in machine learning. XAI's goal is to provide post-hoc ("posterior") analysis of a decision, along with methods to remove bias from the data, among other tasks.
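There is no rule trace to follow, but post-hoc methods can approximate one. Below is a minimal sketch assuming the third-party shap library (the dataset and model here are stand-ins); it attributes a single prediction back to the input features:

```python
# A sketch of post-hoc ("posterior") analysis of one decision, assuming
# the third-party shap library; the dataset and model are stand-ins.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Attribute one prediction back to the input features -- the closest ML
# analogue to tracing which rules fired in a rules-based system.
predict_fn = lambda X: model.predict_proba(X)[:, 1]
explainer = shap.Explainer(predict_fn, data.data[:100])  # background sample
explanation = explainer(data.data[:1])                   # explain one case

for name, value in zip(data.feature_names, explanation.values[0]):
    print(f"{name}: {value:+.4f}")  # each feature's push on the prediction
```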
In What exactly is meant by Explainability and interpretability in AI, Meet Gandhi sums it up nicely:
XAI assists in ensuring that the training dataset, as well as the trained model, is devoid of any bias while taking decisions. Besides, XAI would help in debugging of learning models as well as draw attention to the various adversarial perturbations which would lead to wrong predictions or decisions. More importantly, XAI would give an insight into the causality established by the learning model as well as the reasoning of the model. One thing that should be noted is that by making the learning models more complex, its interpretability decreases and performance increases; hence there is an inverse relationship between performance and interpretability of a learning model.
While coming up with XAI techniques, one should focus on the targeted users of the learning system to make the learning systems trustworthy to its users. Also, user's privacy should be taken into consideration. Hence to use AI in real-life applications, first, we need to make AI accountable by explaining its decisions and make it transparent forming the building blocks for Responsible or Ethical AI.
Gandhi raises a few essential issues:
- There is a hot debate in AI development over whether XAI will degrade a system's performance (a sketch after this list illustrates the trade-off).
- Causality is not a criterion I've seen in other XAI explanations, so I assume this is a personal opinion.
- Moving from trustworthiness, privacy, and accountability to Responsible/Ethical AI is aspirational; at the moment, XAI is focused on the developers and owners of the systems.
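On the first point, the trade-off is easy to see in practice. Here is a minimal sketch, assuming scikit-learn and a stand-in dataset, comparing a shallow decision tree whose entire logic can be printed against a typically more accurate but opaque ensemble:

```python
# A sketch of the performance/interpretability trade-off using scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X_train, X_test, y_train, y_test = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0)

# A shallow tree: its full decision logic fits on a screen.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print(export_text(tree))
print("tree accuracy:", tree.score(X_test, y_test))

# A boosted ensemble of 100 trees: usually more accurate, but with no
# comparably compact, human-readable trace of its reasoning.
boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("ensemble accuracy:", boost.score(X_test, y_test))
```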
What are some of the innovations?
The difference between explainability and interpretability is a little fuzzy, but there are some crisper definitions:
Explainability means that feature values can be related to a model's prediction in a way that humans can understand.
Important properties of explainability:
- Portability: the range of models to which the explanation method can be applied.
- Expressive power: the strength of the explanations a method can create.
- Translucency: how much the explanation method depends on the internals of the specific machine learning model. Low-translucency methods tend to have higher portability (see the sketch after this list).
- Algorithmic complexity: the computational complexity of the method that generates the explanations.
- Fidelity: how faithfully the explanation captures the model's actual behavior; a high-fidelity explanation reflects the model's important properties, a low-fidelity one does not.
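As a concrete illustration of a high-portability, low-translucency method, the sketch below (assuming scikit-learn; the dataset and model are stand-ins) uses permutation importance, which only queries the model's predictions and therefore works with any model:

```python
# A sketch of a high-portability, low-translucency explanation method:
# permutation importance only needs the model's predictions, not its internals.
from sklearn.datasets import load_wine
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

data = load_wine()
model = SVC().fit(data.data, data.target)  # could be swapped for any estimator

# Shuffle each feature in turn and measure how much the score degrades.
result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```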
Interpretability - the higher a machine learning model's interpretability, the easier it is to understand the reasons behind its individual decisions or predictions.
Evaluation of interpretability:
- Application-level evaluation - the explanation is put into the product and tested by the end users.
- Human-level evaluation - the experiments are carried out by laypersons, which makes them cheaper to run and testers easier to find.
XAI, bias, and fairness
In a paper, Explainable AI in Practice Falls Short of Transparency Goals, the authors, Umang Bhatt et al., propose that:
PAI's research reveals a gap between how machine learning explainability techniques are being deployed and the goal of transparency for end-users.
They begin by pointing out that:
[AI] systems could have profound implications for society and the economy, potentially improving human/AI collaboration for sensitive and high impact deployments in medicine, finance, the legal system, autonomous vehicles, or defense.
Unfortunately, they find that deployed explainability techniques are far from adequate. Echoing my thoughts, they conclude:
"PAI has found that in its current state, XAI best serves as an internal resource for engineers and developers, who use explainability to identify and reconcile errors in their model, rather than for providing explanations to end-users. As a result, there is a gap between explainability in practice and the goal of transparency, since current explanations primarily serve internal audiences, rather than external ones."
DARPA's explainable AI program
DARPA (Defense Advanced Research Projects Agency) is developing several projects for explainability, mostly defense-oriented, though historically, many DARPA projects have found their way into civilian applications, most notably the Internet. Their focus is on having machine learning systems explain their rationale, describe their strengths and weaknesses, and convey an understanding of how they will behave in the future. Part of the program is converting explainability models into understandable dialogues for the end-user.
The DARPA Explainable AI (XAI) program aims to create a suite of machine learning techniques that:
- Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
- Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
The issue of performance is high on their list.
My take
Explainability and the black box problem will only be resolved in small bits until some new technology - a third wave of AI - arrives. Most progress in the near term will be devised for the developers of AI, not end-users.