After all the hype, 2020 will be the year that automation and Artificial Intelligence (AI) technology starts moving out of experimentation mode and into more serious levels of adoption, believes Forrester Research.
But the picture the market research firm paints is decidedly mixed for tech leaders and buyers. Here are four of the top forecasts from Laura Koetzle, the company’s vice president, group director and head of research for Europe:
Prediction 1: The global RPA services market will be worth more than €6.3 billion, but automation will lead to at least one major labour strike in 2020
The robotic process automation (RPA) services market has grown over the last few years, mainly because organisations have concentrated on tackling simple challenges and undertaken projects aimed at ‘low-hanging fruit’. But to move to the next stage, it will be necessary to build “automation strike teams” and centres of excellence in order to put more structure around such initiatives, believes Koetzle.
She also warns that in 2020 it will be “incumbent on all tech leaders” to “develop and promote a positive vision of the future of work”. In other words, it will be up to them to clarify how technologies such as RPA and AI can be used not so much to automate jobs away as to help employees do their jobs more effectively.
This means organisations should deploy robots to undertake manual, repeatable tasks and up-skill humans to do what they are good at, such as exception handling. Failure to do so will have inevitable repercussions. As Forrester’s European Predictions 2020 report explains:
Folding automation into the enterprise will not be without backlash: Unsurprisingly, employees are wary of automation. Very few businesses have invested in prepping employees for the future of work – what it means to be working with, alongside, and potentially for automation. We expect a major strike will cause a PR nightmare for at least one Fortune 500 company.
Prediction 2: Four out of five chatbot-based customer interactions will continue to flunk the Turing Test, which means they should be tested out on employees first
Lots of organisations have introduced conversational AI software and chatbots to cut the costs of their customer service functions and reduce the strain on staff. But while Koetzle believes firms should continue investing in such technology as it will be “very beneficial in future”, she notes that most customers’ experience of it today is “generally bad” and can therefore be summed up as “frustrating”.
As a result, she advises companies not to “inflict them on customers” but instead to use them in employee-facing activities first. Internal staff, Koetzle said, tended to be “more tolerant” and could also be asked to train the chatbots. Doing so would build up a suitable knowledgebase that could, over time, be employed in customer scenarios far more effectively than if companies simply dived right in.
Put another way, Koetzle’s advice here is again to “use AI to augment staff, not replace them”. In this context, she cites the example of Orange Bank, the banking arm of French telco Orange, which uses so-called AI-based ‘whisper agents’ to listen in on the conversations its call centre staff conduct with customers.
The agents then provide these employees with cross- or up-selling suggestions, as well as recommendations on how best to defuse potentially difficult situations. As a result, not only do customers receive better service, but staff satisfaction also improves.
Prediction 3: Mass data collection will lead to 15% growth in the adoption of anti-surveillance technology in 2020
Some groups of citizens will start reacting to rising levels of mass data collection by both private and public sector bodies by using anti-surveillance technology to protect themselves against what they see as an invasion of their privacy.
Indeed, Forrester divvies people up into five key categories based on their attitudes towards privacy. These consist of:
- Conditional consumerists, who are comfortable sharing personal information with companies as long as they get something in return (18% of the total);
- Reckless rebels, who share their data widely and take few or no precautions to protect it (25%);
- Data-savvy digitals, who are very protective of their personal information and only share it to purchase a product or service they desire (17%);
- Nervous unawares, who are worried about their privacy but do not know how to protect themselves in this regard (17%);
- Sceptical protectionists, who are very concerned about privacy and refuse to share their personal information under any circumstances (23%).
In order to cater to the fears of this latter group in particular, the first line of anti-surveillance clothing made its appearance at the DEF CON cybersecurity conference in Las Vegas this August. Dubbed Adversarial Fashion, it was designed by hacker and fashion designer Kate Rose to confuse automatic number plate recognition systems - and is something that Koetzle describes as both an “information security menace and a fashion opportunity”.
But she also warns tech leaders operating in business-to-consumer environments to be aware that some customers will take other, more direct action if they find themselves subject to manipulation by ‘dark patterns’. According to the awareness-raising Dark Patterns website, these are “tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something”.
So to counter such activity, consumers could well start deliberately feeding “garbage data” into offending companies’ algorithms in order to corrupt their information feeds, Koetzle cautions.
But tech leaders in business-to-business companies will also need to take a more cautious stance. Enterprise customers will increasingly demand an explanation of why their data is being collected and what is being done with it, especially in an AI context. It will also become common to have to justify how AI technology is being employed - or risk blanket refusals to share data and even the loss of customers.
Prediction 4: Deep fakes will cost businesses more than €225 million in 2020 alone
Deep fakes - videos in which an individual’s face has been swapped or digitally altered using AI software - are likely to become more widespread and will be used to cause PR problems for brands, for example by damaging the reputation of celebrities fronting their businesses.
Although the technology to create deep fakes “only escaped the academic labs a couple of years ago,” says Koetzle, it is already possible to subscribe to ctrl shift face, a website that creates them for between $1 and $10 per month, and more such sites are expected to follow.
But a key problem with the whole deep fake phenomenon is that not only is it embarrassing for the celebrity or company concerned, but it also “degrades trust” among the public and makes it harder for legitimate companies to be seen and heard.
As a result, Koetzle predicts that 2020 will see the first real efforts to address the situation. At the same time, she advises tech leaders to familiarise themselves with free and low-cost deep fake tools in order to understand what they can do and how they could be misused.
She likewise suggests adding ‘deep fake-generated crises’ to incident response and crisis communication planning documents to ensure organisations are prepared should trouble strike.
It seems clear that the disruptive nature of AI and automation software means organisations must be sensitive in how they implement it - or face the potential consequences. They must also take steps to guard against it being turned against them.
In other words, like all technology, it will have positive and negative repercussions depending on who is using it and in what way - only, by the nature of the beast, the repercussions this time could prove to be an order of magnitude greater than anything witnessed to date.