Strategic failures mean AI isn’t burning bright, says Genpact’s Tiger

Chris Middleton, November 7, 2017
Summary:
Ill-considered AI adventures could backfire on organizations as badly as poor outsourcing, warns outsourcing leader Tiger Tyagarajan.

Many organizations are rushing into AI and getting it wrong, according to Genpact CEO ‘Tiger’ Tyagarajan.

‘Here it comes, ready or not’ has been the message from multiple AI and robotics reports this year. Think tanks, analysts, and academics alike have described a world of increased automation sweeping aside low-skilled jobs and transforming the workplace with intelligence and assistive technology, freeing up human employees to be more focused and creative – if they have the skills to move sideways and capitalise on new opportunities.

There’s no doubt that the transformation is coming and that new opportunities will be there for the taking. For example, Forrester Research predicts a 300% year-on-year increase in AI investment in 2017, while IDC estimates that the AI market will surge from $8 billion in 2016 to more than $47 billion by 2020. But does this financial boom offer any real substance for businesses?

In previous reports, diginomica has cautioned that any ill-considered, tactical move to adopt AI could see it become this decade’s offshoring. In the Noughties, some over-enthusiastic outsourcing clients were forced to repatriate their contact centres after a consumer backlash against services being relocated to India, Vietnam, or the Philippines.

Those organizations had rushed in and failed to think the opportunity through or map it against their own core values. The same risks may apply to AI this decade. Any organizations that rush in for the wrong reasons – to grab an apparently easy cost-cut, for example – could find themselves rushing out again in the near future, pursued by angry customers and disastrous social chatter.

So it’s interesting that one of the most outspoken industry figures on both the promise and the risk of AI should be the head of a business process outsourcing (BPO) and professional services giant: NV ‘Tiger’ Tyagarajan, president and CEO of Genpact. Although Tyagarajan is an advocate for AI and robotics, Genpact’s own research suggests that most AI investments will fall short of their goals, with only one quarter of C-suite executives seeing a significant payback from their organizations’ investments in the technology.

A 75% payback failure rate for AI deployments doesn’t sound promising, so what’s going on? Tiger says:

There’s a lot of hype around AI, and that starts in the boardroom and the C-suite. People are asking, ‘How do we leverage it?’ Part of the problem, I think, actually starts there. There is a rush to say, ‘Let’s use AI’ in more areas than one should be focusing on right now.

Obviously when that happens, there is a lot of disappointment, because the results and the value that people expect don’t come through. And this is because there is a lack of proper prioritisation about where decision-makers ought to be bringing AI into the company.

And the second problem is that there is a lack of understanding that deploying AI, and creating value from it, requires more than just bringing in the algorithm or the technology. But despite all this, we continue to see huge opportunities for AI to change the way work is done and change the way that business models work.

If true, does this suggest that many organizations are, indeed, deploying the technology tactically in a competitive arms race, rather than strategically in support of clear business goals? And is there a strategic vacuum in the boardroom, where these problems seem to originate? Yes and no, says Tiger:

That’s part of it, but there’s more to it than that. There is pressure from the top and, as a result, CIOs, functional leaders, and so on, are being forced to answer the question, ‘Show me what you’re doing with AI.’ So they are rushing into things. But part of it is also that the world is still learning how to use these tools, so I don’t think there’s a cookie-cutter answer. This is still the early stage of the evolution, and everyone is still experimenting.

But I think the folks who are getting the most value from AI are the ones who are stepping back and starting with, ‘What are the big problems I’m trying to solve? What are the strategically important things – either because something is a big value destroyer or a big value-creation opportunity?’ And they’re also asking, ‘In which of these areas can I bring AI in? How do I bring it in? And what are the tests that I need to do to show that it will actually create value?’ These are the companies that are seeing more success than the others who are applying it on a whim.

AI cannot be implemented piecemeal, says Tiger; it must be part of the organization’s overall business plan, together with aligned resources, structures, and processes:

How a company prepares its corporate culture for this transformation is vital to AI’s long-term success. That includes preparing people by having senior management that understand the benefits of AI; fostering the right skills, talent, and training; managing change; and creating an environment with processes that welcome innovation before, during, and after the transition.

And there’s another dimension to this: AI is meaningless without big, clean, deep, up-to-date, and well-organised data to work from. Simply throwing technology at a poorly defined problem isn’t going to reveal a wealth of new business insights when there is too little data, or the data is superficial or badly organised.

These strategic challenges aside, a question that is rarely asked is whether there are business functions in which AI applications are typically more mature than others, and can demonstrate real value. Tiger argues:

My perspective would be that where AI works the best and where it doesn’t add value has less to do with whether the technology is mature, and more to do with whether the leaders have bought in and whether they are going to drive the change.

Research findings

To answer this question, Genpact carried out its own research in summer 2017 among the C-suites of over 300 leading companies worldwide. It found that the business leaders behind successful AI applications typically behave very differently in their strategic management of the technology from the ‘laggards’ (to use Genpact’s term).

Companies identified as AI leaders are more likely than laggards to be top performers in productivity (55% vs 9%), profitability (51% vs 21%), and the ability to adapt to evolving market conditions (55% vs 16%), according to Genpact.

The research provided four key takeaways for successful AI applications. These are:

Prepare workers to work alongside AI

Employees with the analytical skills necessary to work alongside the machines will be more valuable than ever, while those who lack the ability, education, training, or will to capitalise on new opportunities could be cast aside by the technology.

This backs diginomica’s own analysis and the findings of several 2017 AI/robotics reports, which suggest that the main battleground will be skills and education rather than mass unemployment per se. As AI and automation sweep aside low-skilled jobs and repetitive tasks (especially in regions that have already been hit by the loss of traditional industries), governments and business leaders must ensure that workers have the skills to take up new challenges and opportunities.

On this point, Genpact’s research reveals some troubling statistics: although 82% of respondents plan to implement AI-related technologies over the next three years, only 38% say they currently provide employees with reskilling options. That’s the wrong approach: AI shouldn’t be seen as an alternative to people; it’s there to assist human beings, and that demands a new investment in skills.

Don’t let AI be its own boss

Many AI initiatives fail because of a lack of proper governance, says Tiger, who believes that businesses and governments must work together to oversee the responsible development of AI.

Don’t try to boil the ocean

Most organizations are excited by AI’s potential, but some are over-ambitious and try to bring it in right across the enterprise from day one. Instead, Tiger believes that they should identify “their most crying problem” and develop and implement the right solution to solve it. He adds that many notable AI failures, like Microsoft’s Tay chatbot, lacked clearly defined parameters and specific problems to solve. A rogue bot creates entirely new problems.

Arm data with industry context

AI solutions can be highly effective, but only if two key conditions are satisfied. First, companies should apply AI within the context of the business and a specific challenge – what Tiger calls “domain depth” – and second, they need to provide those solutions with continuous access to data.

All of this raises the question of why some organizations are more resistant to AI than others. Tiger offers his own perspective:

One of the surprising things that we found when we surveyed 300 of the C-suites in leading companies was that the resistance to change that AI and machine learning can bring about is felt more among the senior leadership teams than the middle and junior levels. This is because some of those leaders – chief risk officers in financial institutions, for example – fear that their power centres, their reasons for existing, are going to be undermined by a bunch of machines that may get cleverer and cleverer.

And there are other reasons, such as the risk officer who is scared that the machine will make a mistake, or that he won’t be able to answer the regulator who asks, ‘Why did the machine make this decision?’ A lot of AIs today are black boxes.

But is that kind of caution such a bad thing?

My take

This ‘black box’ aspect of AI has been much discussed by diginomica in previous reports, and – when combined with conscious or unconscious bias or flawed training data – it can have the effect of automating discrimination or replicating existing problems, while maintaining the same air of mystery as the oracles of classical mythology.

Black box solutions are, by definition, inscrutable and, without greater insight and transparency into their inner workings – into why particular results have been arrived at – organizations could be storing up a world of potential problems for themselves: regulators who are unable to get satisfactory answers to their questions, for example, or customers appealing against machine-generated decisions. In this sense, some organizations are right to be cautious, and all should ensure that they don’t rush in without a strategic plan or a real problem to solve.

This is a more serious issue than it might first appear, because some industries – financial services, for example – are already so highly automated that they are essentially machine-policed compliance systems with a human front end. In some cases, senior managers are already unable to intervene when “the computer says no”, and so in any future world in which inscrutable black boxes are allowed to take over, organizations may be forced to backtrack and explain themselves – not only to their customers, but also to their regulators.
