Data drivers need insightful navigators

SUMMARY:

Don’t be data-driven off the cliff! Salesforce’s Peter Coffee navigates today’s abundance of data.

Peter Coffee

Abundance of data does not necessarily make us better informed – and being “data-driven” is not the unmixed blessing, or badge of proud distinction, that it is too often perceived or claimed to be. These are caveats we do well to consider early on, rather than ruefully recognize only after an overly credulous “digital transformation” has led to costly lessons learned.

In one case, for example, petrol distribution to local fuelling stations was proudly made “data-driven” by connecting sensors to report fuel levels in stations’ storage tanks. Rather than relying on station operators to check levels and request a resupply, the process was to become automatic and therefore (presumably) more consistent and more cost-effective. Initially, however, the result was more delivery requests and a significant rise in distribution costs.

On reflection, the error was quickly recognized. At a busy station, a storage tank with only 20% of its capacity still on hand might be emptied within the next few days. At a quieter location, the same amount of fuel might last another week or two. The naïve use of a single datum, fuel on hand, was not enough to drive intelligent behavior.

Adding a second datum, the rate of change in fuel level over an appropriate recent period, substantially improved decisions about when to dispatch a delivery – but even this failed to capture another crucial bit of experience: holidays reliably occasion a spike in demand as holiday-makers rack up extra miles. Seasonal adjustments were therefore also needed. Only with this third variable properly introduced was the “data-driven” process actually an improvement.
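To make the contrast concrete, here is a minimal sketch of how those three inputs might combine into a dispatch decision. The thresholds, the holiday multiplier and the station readings below are invented for illustration; they are not the distributor’s actual logic.

```python
# Illustrative only: invented thresholds and readings, not the distributor's real rules.
HOLIDAY_DEMAND_MULTIPLIER = 1.4   # assumed demand spike during holiday periods
LEAD_TIME_DAYS = 2                # assumed time to schedule and complete a delivery
SAFETY_MARGIN_DAYS = 1            # assumed buffer against running dry

def should_dispatch(litres_on_hand: float,
                    litres_per_day: float,
                    holiday_period: bool) -> bool:
    """Dispatch when projected days of stock fall inside the lead-time window."""
    rate = litres_per_day * (HOLIDAY_DEMAND_MULTIPLIER if holiday_period else 1.0)
    if rate <= 0:
        return False                       # no measurable draw-down yet
    days_of_stock = litres_on_hand / rate
    return days_of_stock <= LEAD_TIME_DAYS + SAFETY_MARGIN_DAYS

# The same 20% fuel level produces very different decisions:
print(should_dispatch(4000, 1500, holiday_period=False))  # busy station  -> True
print(should_dispatch(4000, 150, holiday_period=False))   # quiet station -> False
```

Level alone triggers deliveries both stations do not equally need; level plus draw-down rate plus a seasonal factor dispatches only where the tank will actually run dry soon.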

After a few experiences like the one just described, the next level of naïve reasoning might be “add variables until the fit is satisfactory” – but that way lies another problem. When data collection becomes sufficiently inexpensive, and the variety of data available becomes sufficiently broad, successful “hindcasts” with no predictive power are pretty much certain to arise.

The phenomenon of over-fitting is readily seen at the Spurious Correlations website, with examples like the number of worldwide non-commercial space launches versus the number of USA university doctorates awarded in sociology in the same year. Over a period of more than ten years, nearly two-thirds of the variance in launch activity is apparently “explained” (as measured by the value of ‘R squared’) by the number of new USA sociologists joining the academy each year – but I trust that none of us believes there is predictive power here.
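The mechanism is easy to reproduce. The sketch below screens a batch of synthetic, completely unrelated series against a synthetic target over a dozen “years”; the best in-sample R² it finds will typically look impressive despite having no predictive power. Everything here is generated data – none of it is the real launch or doctorate figures.

```python
# Sketch of why broad, cheap data makes impressive 'hindcasts' nearly certain:
# screen enough unrelated series against a target and the best in-sample R²
# will look convincing. All series are synthetic random walks.
import numpy as np

rng = np.random.default_rng(42)
years = 12                                   # a 'period of more than ten years'
target = np.cumsum(rng.normal(size=years))   # the series we pretend to explain

best_r2 = 0.0
for _ in range(500):                         # 500 unrelated candidate series
    candidate = np.cumsum(rng.normal(size=years))
    r = np.corrcoef(target, candidate)[0, 1]
    best_r2 = max(best_r2, r ** 2)

print(f"best in-sample R² among unrelated candidates: {best_r2:.2f}")
# The 'winner' explains most of the variance yet has zero predictive power.
```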

There is an especially slippery slope, dropping us into pits of false belief, when many such relationships are examined in one large analysis. In one case, a deliberate effort to demonstrate the fallibility of “science news” included perfectly legitimate measurements of eighteen different variables (weight, blood protein, sleep quality and others) in a modest number of human subjects. “We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one ‘statistically significant’ result were pretty good,” said John Bohannon, the genuinely Ph.D.-holding prankster behind the effort. As it turned out, millions of people in twenty countries were told that chocolate consumption accelerates weight loss. “You might as well read tea leaves as try to interpret our results,” Bohannon said.
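Bohannon’s “pretty good” chances are simple arithmetic: if the eighteen measures were independent and each tested at the usual 0.05 significance threshold, the probability of at least one false “hit” is about 60% even when no real effect exists. (The study’s measures were not strictly independent, so this is only the idealized calculation.)

```python
# Idealized multiple-comparisons arithmetic, assuming independent tests at alpha = 0.05.
alpha = 0.05
measures = 18
p_at_least_one = 1 - (1 - alpha) ** measures
print(f"{p_at_least_one:.0%}")   # about 60%
```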

AI on the rise

None of these warnings are novel to anyone who’s taken a statistics class, but we’re talking a lot these days about making advanced analytics and guided machine learning available to non-specialists. Finding value in the power of machine learning, when it’s fed by this century’s cornucopia of device and process data streams, is among the duties that employers expect to hand over to newly hired data scientists – who are surely under pressure to justify their high salaries.

Among the solutions will be the inclusion of understandable explanations for an artificial intelligence’s conclusions. American defense research labs are budgeting large sums for so-called ‘XAI’: ‘Explainable Artificial Intelligence.’ AI-based aids to business processes, such as Salesforce Einstein, are able to include some degree of explanation for their recommendations – a crucial spur to rapid adoption, and to useful reliance on these tools.

It will be vital to be able to ask a tool, “Over what range of input would you give this same recommendation?” – because, to quote one of my colleagues in a meeting with customers last week, “It is pointless to have granularity of recommendation that is more specific than granularity of action.” Further, if an image-recognition model has wrongly learned that the human arm holding a dumbbell is part of the dumbbell itself, bad recommendations may follow. Asking the machine, “What do you think you’re recommending and why?” may be something we’ll all want to do.

It would be nice if democratization of access to data could make all of us smarter, but we don’t see enormous growth of supply in any other domain without significant damage to the quality of what’s offered – or to the judgment shown in how it’s used. Data cannot, in general, be expected to be an exception to these nearly universal patterns. You can be “data-driven” right off the cliff.

