The problem of AI explainability - can we overcome it?
- Summary:
- Explainability is not just a roadblock to AI adoption - it also has implications for public health and safety. This is how the tensions between transparency, accuracy and performance are coming to a head.
Do you remember when your mother told you not to eat with your hands? You’d say, “Why?” And she’d say, “Because I said so!”
That, in a nutshell, is the explainability issue in AI. Of course, out of respect (and maybe fear) you didn’t question your mother’s authority. But is “Because I said so!” an adequate answer from AI models that make crucial decisions opaquely, without explaining their rationale - especially in areas where we do not want to delegate decisions completely?
When autopilot controls, coupled with telemetry and other kinds of sensors, cause an aircraft to fly into terrain or, in the case of the Boeing 737 MAX, to auger into the ground, questions are raised. When it comes to inscrutable algorithms whose paths to their conclusions are so opaque that they can lead to catastrophe or, more likely, to millions of small bad decisions that add up, what’s to be done? This is as much a requirement of accountability, safety and liability as it is an ethical imperative.
Machine Learning (ML) models have a fairly simple job to do; they just do it very fast and repeatedly. The most complicated thing they do is matrix algebra to support either regression or clustering. In the former case, they look at a mountain of data and attempt to find a trend, or a predictor. In clustering, they try to find things that belong together. Most often, the technique is maximizing or minimizing a “cost” function - “next best option,” for example.
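To make that concrete, here is a minimal, hypothetical sketch (NumPy, with made-up toy data rather than anything from a production system) of the two workhorse tasks: a regression fit that minimizes a squared-error cost with matrix algebra, and a single k-means-style clustering step that assigns records to their nearest center.

```python
import numpy as np

rng = np.random.default_rng(0)

# Regression: find a trend by minimizing a squared-error "cost" with matrix algebra.
X = rng.random((100, 2))                              # 100 records, 2 features
y = X @ np.array([3.0, -1.5]) + 0.1 * rng.standard_normal(100)
X1 = np.hstack([X, np.ones((100, 1))])                # add an intercept column
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)      # minimizes ||X1 w - y||^2

# Clustering: group records that belong together by minimizing each point's
# distance to its nearest cluster center (a single k-means-style step).
centers = X[rng.choice(100, size=3, replace=False)]
labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1).argmin(axis=1)
centers = np.array([X[labels == k].mean(axis=0) for k in range(3)])
```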
But there is more to AI than straightforward ML. Neural networks and their extension, Deep Learning, are particularly difficult to understand, and they power applications that affect people in dramatic ways, such as facial recognition and Natural Language Processing (Alexa listening to you). On the other hand, Bayes nets are far more explainable, but not as frequently deployed, because they only speak in probabilities, and few know how to manage from probabilities.
Explainability has a few different elements:
Is the data good? Today, it is impossible to guarantee that the data used by AI is pristine. Every modeler knows this. The partial solution is to “get an eye on it,” as Joe Hellerstein, Co-Founder and CSO of Trifacta, said to me recently. In other words, use ordinary data profiling, aided by automated tools, to check for inconsistencies, data drift and missing data - all the things we’ve been doing for years. But because of the volume, speed and complexity involved, AI-driven tools are needed to add some quality to the data.
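As a rough illustration of that kind of “eye on it” profiling, here is a hedged sketch using pandas; the function name, the checks chosen, and the idea of comparing against a reference sample are my own illustrative choices, not a description of Trifacta or any specific tool.

```python
from typing import Optional
import pandas as pd

def profile(df: pd.DataFrame, reference: Optional[pd.DataFrame] = None) -> dict:
    """Basic checks for missing data, inconsistencies, and drift."""
    report = {
        "missing_rate": df.isna().mean().to_dict(),       # missing data per column
        "duplicate_rows": int(df.duplicated().sum()),     # obvious inconsistencies
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
    }
    if reference is not None:
        # Crude drift signal: how far each numeric column's mean has moved,
        # in units of the reference sample's standard deviation.
        num = df.select_dtypes("number").columns.intersection(
            reference.select_dtypes("number").columns)
        report["mean_shift"] = ((df[num].mean() - reference[num].mean())
                                / (reference[num].std() + 1e-9)).to_dict()
    return report
```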
In designing a machine learning model, its logic must be explained and documented in a way that can be read as prose, devoid of equations, with a checklist of possible bias issues, and vetted by designated persons.
Close evaluation of the biases learned by the model is also necessary, so that AI developers and other stakeholders can understand and validate its decision rationale.
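One simple, hedged example of such an evaluation (my own sketch, not a method named here) is to compare how often the model returns a positive outcome across groups defined by a sensitive attribute; a large gap is a prompt to dig into the decision rationale, not a verdict by itself.

```python
import pandas as pd

def group_rates(predictions: pd.Series, group: pd.Series) -> pd.DataFrame:
    """Positive-prediction rate per group, plus each group's ratio to the highest rate."""
    rates = predictions.groupby(group).mean().rename("positive_rate").to_frame()
    rates["ratio_to_max"] = rates["positive_rate"] / rates["positive_rate"].max()
    return rates

# Made-up example:
preds = pd.Series([1, 0, 1, 1, 0, 0])
grp = pd.Series(["A", "A", "A", "B", "B", "B"])
print(group_rates(preds, grp))   # group A: 0.67, group B: 0.33 -> worth investigating
```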
Some questions on explainability to ponder:
- Are the model assumptions made by the engineer logical?
- Has the engineer selected the correct features to fit the problem?
- Are the steps the model takes what is expected?
- From all the above, can the output be verified?
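One way to probe the second and third questions is to check how much each feature actually drives the model’s predictions. The sketch below is a hypothetical example using scikit-learn’s permutation importance on made-up data in which only two of the three features should matter; it illustrates the kind of check, not a complete verification procedure.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Toy data: only 'x1' and 'x2' drive the target; 'noise' should rank near zero.
X = pd.DataFrame({"x1": rng.standard_normal(500),
                  "x2": rng.standard_normal(500),
                  "noise": rng.standard_normal(500)})
y = 2.0 * X["x1"] - 1.0 * X["x2"] + 0.1 * rng.standard_normal(500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:8s} {score:+.3f}")   # a high score for 'noise' would be a red flag
```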
There is a vigorous dispute about transparency versus accuracy
Some feel that burdening AI development with all of this transparency will not only slow down the process, but also dumb it down. Freedom Labs has a position on this:
Ultimately, it’s quite unlikely that we’ll settle for suboptimal A.I. systems in favor of transparency; too much potential value would go lost and A.I. is of too great strategic importance for governments to enforce overly strict regulations.
That part I agree with; I have no doubt that this is true. They continue:
Also, efforts are ongoing to develop deep learning applications that can explain themselves better (e.g. through visualizations of their reasoning steps) and, in doing so, make themselves more trustworthy.
This is hopeful, or maybe, aspirational. And:
Alternatively, despite the risks involved, we may have to do with a trial-and-error approach to learn how to engage with deep learning systems. If so, in an extreme scenario, the catastrophic failure of an A.I. system could result in a "Hindenburg moment” that would set A.I. back for years.
My take
A technology-first, “we’ll figure it out later” approach is generally a bad idea. Consider nuclear power. The Limerick Power Station was built less than three miles from my (late) parents’ house. It has operated flawlessly for forty-five years, and its license was just renewed. But in the process it has created a mountain of highly radioactive spent fuel rods with nowhere to go.
In a future article, I’ll describe a number of approaches to dealing with the explainability problem. There are some promising ideas coming from academia and research labs. For example, among the most innovative is the Local-to-Global (L2G) step. At each iteration, L2G merges the two closest explanations, e1 and e2, using a notion of similarity defined as the normalized intersection of the coverages of e1 and e2 on a given record set X. I promise to explain this in English.
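As a teaser, here is my own rough reading of that step as a toy Python sketch. I am interpreting “normalized intersection of the coverages” as a Jaccard-style overlap of the record sets each explanation covers - an assumption on my part, not necessarily the exact formulation used by the researchers.

```python
def similarity(cov1: set, cov2: set) -> float:
    """Normalized intersection (assumed here: Jaccard overlap) of two coverages on X."""
    if not cov1 and not cov2:
        return 0.0
    return len(cov1 & cov2) / len(cov1 | cov2)

def closest_pair(explanations: dict) -> tuple:
    """Find the two explanations whose coverages are most similar.

    One L2G iteration would merge this pair into a single, more general
    explanation, then repeat until a compact global set remains.
    """
    names = list(explanations)
    best_score, best_pair = -1.0, None
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            score = similarity(explanations[a], explanations[b])
            if score > best_score:
                best_score, best_pair = score, (a, b)
    return best_pair

# Toy coverages over a record set X = {0, ..., 9}:
coverages = {"e1": {0, 1, 2, 3}, "e2": {1, 2, 3, 4}, "e3": {7, 8, 9}}
print(closest_pair(coverages))   # ('e1', 'e2') -> these would be merged first
```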