The I in AI is dumb, leading to incrementalism, not transformation

Den Howlett, May 20, 2018
A plethora of conflated terms - AI, ML, DL - along with 'transformation' is not helping decision-makers chart a path to building more efficient and effective business models. Time to step back.

Those who take university education seriously come out the other side with two indispensable pieces of understanding. First, you learn to constantly ask: how does this argument stand up given the available facts? Second, you learn to present arguments in as concise a manner as possible. Neither of those two learnings applies to the insane amounts of verbiage on the general AI topic.

Over the last couple of years, I've observed that AI has been conflated with innovation and with transformation to the point where the impression is given that AI is as close to a magic bullet as we're going to find, one that will (magically) transform businesses of all kinds, government and the like. It falls short of solving for world peace, though it would not surprise me if some bright PR spark found a way to spin AI in that direction.

On occasion, I have grumbled that AI, as currently iterated, is woefully short of 'intelligence' and, to my mind, is only a few steps better than the decision tree 'stuff' I played with in the 1980s. This morning I note that Sky TV's Rob McLaughlin, Head of Digital Decisioning and Analytics, cautions on the value that machine learning (not general AI) is delivering:

The truth is the AI system only slightly outperforms the human, rules-based approach in terms of recommendations right now. That’s interesting and leaves us with the feeling that there’s still a significant amount of head room for us to progress as we refine the data that we make available to the AI platform.
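The decision-tree comparison is easy to make concrete. Below is a hedged, entirely synthetic sketch (invented thresholds and data, nothing to do with Sky's system): a hand-written decision tree of the 1980s variety alongside a one-feature 'decision stump' that learns the same kind of threshold from labelled examples. The learned version is better calibrated, but it is the same species of pattern matching.

```python
# Synthetic illustration only: the 1980s-style decision tree is just nested
# conditionals; the "machine learned" variant picks a similar threshold
# automatically from labelled examples.

def rules_based(amount, days_overdue):
    """Hand-written decision tree: flag an invoice for human review."""
    if days_overdue > 30:
        return "review"
    if amount > 10_000:
        return "review"
    return "auto-approve"

def learn_threshold(examples):
    """Pick the amount threshold that best separates labelled examples.

    A one-feature 'decision stump' -- the building block of modern tree
    ensembles, and only a few steps on from the 1980s version.
    examples: list of (amount, should_be_flagged) pairs.
    """
    best_t, best_acc = None, -1.0
    for t in sorted({a for a, _ in examples}):
        acc = sum((a > t) == flagged for a, flagged in examples) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Made-up training data: two clean invoices, two that were flagged.
examples = [(500, False), (2_000, False), (12_000, True), (50_000, True)]
threshold = learn_threshold(examples)  # the stump "learns" where to split
```

The point of the sketch is that the 'learning' here is threshold fitting, not understanding, which is why the incremental gains McLaughlin reports should not surprise anyone.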

In his weekly roundup of commentary in the general field of AI, Derrick Harris (subscribe for free, it's worth it) says:

We might be at an inflection point in the field of artificial intelligence, where applied AI heads off in one direction and research goes in another. Not because the two areas are inherently distinct, but rather because there's a public relations situation happening and something has to give. Research is conflated with products, products are conflated with stage demonstrations, and it seems like fewer and fewer people have a real grasp on what's what.

Got your attention? Read on. Harris expands his theme by explaining what is and is not happening in the real world as follows:

Certain deep learning techniques (CNNs, LSTMs, RNNs and, arguably, reinforcement learning) have become well understood and quite reliable. They're often the technologies behind mainstream applications of AI, including computer vision, natural language processing, speech recognition and other types of pattern matching.


...deep learning is limited in its application because training deep learning models often requires so much data and compute power. (OpenAI recently published a blog post showing what looks like a strong correlation between major advances and increased computing power, but there has been plenty of pushback on this premise.) It's also limited because what it's really good at is pattern-matching, which has many obvious benefits -- just look at some of the applications linked to below -- but falls short of anything resembling artificial general intelligence (aka the Holy Grail).

If you're considering some type of AI - most likely ML/DL - then these two statements alone should give you pause for thought. There is more:

So, as the folks behind deep learning get famous (and sometimes argue over who actually deserves credit), there's a growing chorus from other corners of the AI research community saying, "Hold up! This stuff is useful, but let's please not conflate it with this world-changing AI that everyone seems to be talking about all the time. Any sort of actual intelligence would at least need to understand cause and effect, rather than just be able to identify faces or decipher the words I'm speaking."

And finally:

What happens when we focus so much attention on a term -- artificial intelligence -- is that we lose sight of what's right before our eyes...We can give deep learning its due without automatically taking the discussion into the realm of "what if ..." and complete economic transformation. As with most things in life, there's a gap between applied AI and research that's real, important, and worth acknowledging whenever we get tempted to ramp up the hyperbole.

Examples on offer in the enterprise world of ERP

Now check this against an ASUG article I came across, entitled 'SAP S/4HANA Puts the AI In ERP.' Citing examples from a story posted by Sven Denecken, SVP, Head of Product Management, Co-Innovation SAP S/4HANA at SAP SE, author Adrian Bridgwater says:

When it comes to paying for your procured goods and services, machine learning can dramatically reduce the number of manual tasks that need to be carried out, particularly in cases where the goods and invoices receipts do not match. This exception handling process used to involve several mundane routine manual steps.

He then goes on to condense a series of scenarios that Denecken proposes:

  • Finance: Account reconciliation powered by machine learning leads to intelligent recommendations instead of manual resolutions. This will speed up the exception management process and help improve accuracy.
  • Sourcing and Procurement: Smart buying via a conversational user experience (natural language interaction) eases and accelerates requisitioning, which helps increase productivity.
  • Sales: Sales quotes can quickly become sales orders through a conversational user experience (natural language, voice, or text), giving salespeople more time to focus on what they do best.
  • Enterprise Portfolio and Project Management: AI-powered project cost forecasting helps reduce budget overruns and drives better project investment decisions. The conversational user experience powered by SAP CoPilot gives project managers a hands-free, interactive experience that allows them to stay on top of their game by sharing insights on the status of their projects anytime, anywhere.
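To see why the reconciliation bullet reads as incremental rather than transformational, here is a minimal sketch (hypothetical field names, synthetic amounts, emphatically not SAP's implementation) of what an 'intelligent recommendation' for invoice/receipt exception handling could amount to: suggest the closest goods receipt for each unmatched invoice, and leave low-confidence cases in the manual queue.

```python
# Hypothetical sketch of ML-assisted reconciliation: recommend a match
# only when it clears a confidence bar; everything else stays with a human.

def suggest_matches(invoices, receipts, tolerance=0.02):
    """Return (invoice_id, receipt_id or None) pairs.

    A receipt is only recommended when its amount is within `tolerance`
    (relative) of the invoice amount. Unmatched invoices fall back to the
    manual exception queue -- the incremental, not transformational, part
    of the story.
    """
    suggestions = []
    for inv_id, inv_amt in invoices:
        best = min(receipts, key=lambda r: abs(r[1] - inv_amt), default=None)
        if best and abs(best[1] - inv_amt) <= tolerance * inv_amt:
            suggestions.append((inv_id, best[0]))
        else:
            suggestions.append((inv_id, None))  # escalate to a human
    return suggestions

# Synthetic data: one near-match, one genuine exception.
invoices = [("INV-1", 1000.0), ("INV-2", 730.0)]
receipts = [("GR-9", 1001.5), ("GR-7", 505.0)]
suggestions = suggest_matches(invoices, receipts)
```

Useful, certainly - fewer mundane manual steps - but it is automation of a lookup, not a new business model.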

Given what Harris says, I'm struggling here to get much beyond incremental value - assuming the technologies work (we don't know that yet) and can drive value (we don't know that either). That's not necessarily a bad thing, but you have to know the desired outcome to correctly position what happens next.

I give Denecken credit for keeping expectations modest and ensuring that readers understand we're talking about ML and not AI (a conflation too many writers, ourselves included on occasion, are guilty of). But I am still scratching my head as to how - for example - we leap from 'account reconciliation' to 'intelligent recommendations.' Or how 'conversational user experience' leads to 'smart buying' when important questions are already being asked about the ethics of examples like Google Duplex.

What about ethics?

Regardless of outcomes, other dangers lurk. Check this from Azeem Azhar, one of the most astute observers in the AI field (another excellent newsletter to which you absolutely should subscribe):

There are real problems with bots masquerading as people. We’ve seen clumsy bots on Twitter and Facebook confuse and befuddle people for close to a decade. I used to use Andrew/Amy, an impressive scheduling bot. But I stopped using it when I realised that humans who I cared about were spending their time crafting thoughtful messages to a script.


I’m not an unalloyed fan of the rabid anthropomorphisation of today’s AI tools. These are tools, spreadsheets, hammers, flints, with a bit more verve and fairy-lights. They are impressive. They use better maths to act less deterministically than the dumb cogs and wheels of the past. They can deliver us some quite remarkable benefits, as long-term readers of this missive will know. But today, they are tools. They are improved by the application of scientific method and good engineering. We use words like “training” because those processes are analogous to the way we train conscious, biological entities. But it is only an analogy.

Put in those terms, it is easy to see how even those who are usually upbeat about advances in the general field of AI are pointing to a reality check. Important voices are resetting expectations. In short, they are signaling incremental change rather than the transformation of which tech marketers are so fond.

Does that mean transformation is an illusion?

We clearly see how anything involving 'transformation' has important structural consequences that involve the application of the C-word. That's 'change.'

We also know that institutional change of any kind and in any organization is really, really hard except in one set of circumstances - the near-death corporate experience.

In my recent critique of an executive summary to an otherwise excellent report, I said:

It’s hard enough for finance departments to find and use non-accounting data of the type that D&B or Bloomberg provides let alone unstructured data the business is already generating. So how the heck anyone expects AI to be of use any time soon is a mystery.

I also argued that enlightened/progressive CFOs are already embedded in the LoB as helpers to those LoB leaders. I should have added, as Phil Fersht did in a comment over at LinkedIn:

Smart firms are creating broader finance roles where performance is aligned with achieving common business outcomes. The focus is shifting to a work culture where individuals are encouraged to spend more time interpreting data, understanding the needs of the front end of the business and ensuring the support functions keep pace with the front office. This is especially the case in industries more dependent than ever on real-time data, using multiple channels to reach customers, and being able to think out-of-the-box to get ahead of disruptive business models. Hence, finance execs are in a critical place to be more effective than ever.

That's one of the smart ways you bring change to an organization.

My take...and advice

The scattershot manner in which AI, ML, and DL are co-mingled is imprecise and confusing. If people need to mention these buzzwords du jour, then please can we have some precision of meaning? It matters - and yes, we're guilty of it too. At the very least, if they're not mentioning pattern matching then it's a good clue they're talking BS.

SAP's Hasso Plattner is slated to appear at SAPPHIRENow 2018 to deliver a keynote on 'the intelligent enterprise.' I sincerely hope he reminds the audience that precision of meaning matters. It represents a tremendous opportunity to set customer expectations, and he normally doesn't disappoint. If you're a business person, Plattner can be difficult to follow as he swerves readily into technical territory, as you would expect of someone who wrote much of the original R/2 and R/3 code. But this is a chance to learn so that you can better position what vendors are selling.

We also need to be a lot clearer about where this goes. I've used the SAP example because it captures where many vendors are going with this new 'stuff.' It is a continuation of the road to greater efficiency - and hopefully profit - but it is anything but a done deal. Speed surely matters but we need to know that the insights on offer are truly useful, augmenting the human decision-making ability.

If anyone seeks to wrap all this up in the pretty bow of transformation then ask what they mean. This is vitally important because, without an adequate explanation, it is impossible to know where the change element of the story starts.

Finally, the ethics question is taking on increasing importance. As observers have noticed, a breathtaking demo is one thing, but do I really want to 'talk' to a machine in the belief that it is a human?
