Cutting through the static around AI


Six months ago, Salesforce’s Peter Coffee asked and made a real effort to answer the question “Why AI now?”. Flash forward to today and the question might be “What AI next?”.

Peter Coffee

On this site about six months ago, I asked and made a real effort to answer the question “Why AI now?” The question was not rhetorical. We’re seeing, right now, just the beginning of an enormous surge in pervasive application (with compelling financial returns) of machine intelligence and autonomy – but it’s also clear that there are caveats worth noting and addressing.

Two weeks ago in Sydney, Australia, I therefore welcomed the opportunity to convene a panel around what might be expressed as the question “What AI next?”.

Meeting at the Salesforce World Tour event in that city, our group explored a number of issues including impact on the labor force; differences between pilot projects and sustainable enterprise-scale applications; and gaps (on several dimensions) between naïve hype and the hard realities of engineering and governance.

Looking over my notes of the input provided in advance by the panel members, it seemed as if a common theme was the difference between static and dynamic analysis. Let me take a moment to explain my use of those terms, before we talk about their relevance to the AI conversation. Static analysis would take, for example, the number of miles that people are driving now, divide by average vehicle miles per gallon, multiply by a proposed increase in petrol tax, and call that a revenue estimate. If resulting higher petrol-pump prices induce people to buy more efficient vehicles, or maintain their current vehicles better to improve their efficiency, or share rides or use mass transit, reality will fall short of that expectation. Including those and other behavior changes in the estimate would be dynamic analysis.
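To make the distinction concrete, here is a toy calculation in Python. Every number in it is invented for illustration – the miles driven, fuel economy, tax increase, and behavioral responses are all assumptions, not real data:

```python
# Toy illustration of static vs dynamic revenue estimates for a petrol-tax
# increase. All figures below are invented for illustration only.

MILES_DRIVEN = 3.0e12   # annual vehicle-miles (assumed)
AVG_MPG = 25.0          # average fleet fuel economy (assumed)
TAX_INCREASE = 0.10     # proposed extra tax, $ per gallon (assumed)

# Static analysis: assume behavior does not change at all.
gallons = MILES_DRIVEN / AVG_MPG
static_revenue = gallons * TAX_INCREASE

# Dynamic analysis: assume drivers respond to higher pump prices -- here
# modeled crudely as a 5% drop in miles driven and a 4% gain in fleet
# efficiency (both invented behavioral responses).
dyn_miles = MILES_DRIVEN * (1 - 0.05)
dyn_mpg = AVG_MPG * (1 + 0.04)
dynamic_revenue = (dyn_miles / dyn_mpg) * TAX_INCREASE

print(f"static estimate:  ${static_revenue / 1e9:.1f}B")
print(f"dynamic estimate: ${dynamic_revenue / 1e9:.1f}B")
print(f"shortfall: {100 * (1 - dynamic_revenue / static_revenue):.1f}%")
```

Even with these modest assumed responses, the dynamic estimate comes in several percent below the static one – exactly the gap between expectation and reality described above.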

Even though obviously more useful, dynamic analysis is often neglected, because it’s an exercise in drawing and animating supply and demand curves – each of which must bring together the behavior of many different parts of the economy. This concept is routinely taught to economics students as if it is straightforwardly precise, with little warning of how vaguely such curves may be defined in real life.

For example, when prices fall, we usually assume that more of something will be consumed than at the previous price – but some things, at the margin, have almost no value once you have more than enough. At some point, you might even have to be paid to accept the burden of storing or disposing of something more that you do not need and cannot use: if the price of milk fell to a penny a gallon, would you actually buy much more of it?

There are even so-called “Veblen goods” (named for the economist who first identified “conspicuous consumption” as a thing), for which demand might actually go down if something once scarce is perceived as becoming too affordable and therefore too common. The typical economics textbook does not show demand curves with wiggles and cusps and reversals, but real-world markets exhibit these behaviors.
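A toy demand function makes those "wiggles and cusps and reversals" concrete. Everything here is invented – the functional form, the satiation cap, and the price band where the prestige effect kicks in are arbitrary choices for illustration:

```python
# A toy, entirely invented demand function exhibiting behaviors that
# textbook curves omit: near-satiation at very low prices, ordinary
# downward-sloping demand in the middle, and a Veblen-style bump where
# scarcity itself confers value.

def demand(price: float) -> float:
    """Quantity demanded at a given price (arbitrary units, toy model)."""
    base = 100.0 / (1.0 + price)         # ordinary downward-sloping part
    satiation_cap = 120.0                # buyers want only so much, even free
    veblen_bump = 30.0 if 50 <= price <= 80 else 0.0  # prestige effect
    return min(base, satiation_cap) + veblen_bump

# A textbook curve would guarantee demand(10) > demand(60); the prestige
# band reverses that ordering here.
print(demand(0.01))  # bounded by satiation, even at a near-zero price
print(demand(10))
print(demand(60))    # higher than at price 10, despite costing six times more
```

The point is not this particular function, but that real markets can behave non-monotonically in ways that simple curve-drawing never captures.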

In general, “we cannot guess at all accurately how much of anything people would buy at prices very different from those which they are accustomed to pay for it” – so wrote Alfred Marshall in 1890, which is quite an admission from the definer of our concepts of supply and demand. A data-rich world has only recently given us the chance to put hard numbers on some of these hand-waving arguments, for example in calculating the “consumer surplus” being left on the table despite Uber’s vigorous use of surge pricing. And yet, here we are, embarked upon a journey of enormous changes in the prices we’ll pay, for immensely valuable things that are about to whipsaw from scarcity to abundance.

I am speaking of human knowledge, insight, and skill – where it’s notable that human talent, rather than scarce material goods, may have come to be “the new ‘conspicuous consumption.’” The old wealthy (in a material sense) were once identifiable by their abundance of leisure, but it’s been lately suggested that the new wealthy (in an intellectual sense) are characterized by being in demand and therefore being workaholically busy.

Is this going to seem, looking back from 2027 or so, like a quaint 20-teens transient thing? Will we say, “Remember when people with brains and experience had to do all the work themselves, instead of managing a team of AIs that handled the routine situations – so smart people could look for new trends and devote their time to the interesting exceptions?”

I’m not just talking about economic or professional elites. Call center operators can be front-ended by chatbots that ask routine questions, patiently and in a consistent manner that uses provably effective language to minimize confusion and error. When answers to the routine questions indicate a rare combination of circumstances, requiring additional special-case questions to be asked, algorithms will never fail to notice that need – and when a human operator is then brought into the situation, there will be far less likelihood that something critical has been missed.

Even in far more demanding tasks, such as radiological diagnosis of lung cancer, there is already evidence that we can look forward to a very near future in which the machines simply miss fewer things than people do. Compared to a panel of four top-tier radiologists, an AI tool in a comparison made in 2015 had a “false negative” rate (overlooking something that was there) of zero. It missed none of the cases. The humans missed 7%. Lest you suspect that the algorithm was simply being overly conservative, its “false positive” rate was 47% compared to the humans’ 66%.

Lawyers, as well as doctors, must expect that they will no longer be paid a generous wage to do a worse job than an algorithm. At JPMorgan Chase & Co., software is performing contract review tasks more quickly and with lower error rates than the company’s human wave of lawyers and loan officers, who used to spend roughly 180 person-years of effort (about 360,000 hours) each year on such things.

Complementary technologies such as augmented reality will put the consistently good advice of tomorrow’s machine assistance into a context where people can quickly apply it to improve their work performance. For example, technicians assisted by expert guidance through an augmented-reality headset showed 34% performance improvement on their first use of the technique.

I may seem to be painting a picture of epic job destruction, but that would be static analysis. Introducing bulldozers onto construction sites reduced the need for men with shovels, but it created a need for mechanics and opened the door to new roles in landscape design and the construction of buildings unimaginable in a world of manual labor. Likewise, introducing AI into the workplace creates a need for people to manage the AIs’ continued learning – tomorrow’s equivalent of bulldozer mechanics? It also expands employment for people who analyze and investigate patterns in data, as observed by one of my Sydney panelists – Natalie Nguyen, founder of Hyper Anna, who in pre-event communications made this observation:

If today, you are getting your reports or insights crafted by a human analyst, our bet is that somewhere in the next 5 years, 90% of those insights and golden nuggets are going to be created and delivered personally to you by computers or robots, preferably Anna. That doesn’t mean that 90% of data science jobs will go. It means that data scientists can extend their reach. And as a consequence, the world of data science will expand.

That would be dynamic analysis. It’s harder work, but it’s a considerably more accurate way to think about the future.

Our Sydney panel also considered several other caveats surrounding what I believe will be a rapid adoption, not a “third AI winter,” of these techniques. I look forward to sharing some of those considerations in future contributions here.

Image credit - Schnaider