Microsoft Future Decoded - decoding the conflicting AI hype v reality

Martin Banks October 3, 2019
AI is everywhere - not least front and center at this week's Microsoft Future Decoded conference in London.


AI has barely made it into practical applications, but it seems that every vendor in the world is claiming mastery of its many black arts, even if some have yet to realise they need to hire a man skilled at walking in front of them with a red flag.

But others do have a real handle on the possibilities ahead. Microsoft, for example, has already built up a broad set of tools and applications covering a surprisingly wide range of bases. And even if you have AI hype fatigue, when it is being said that every company will be an AI business, every CIO needs to keep at least half an eye on developments.

These thoughts were front and centre at this year’s Microsoft Future Decoded conference in London. After all, if AI is going to be the future, there is plenty of need to decode it if users are to make any sense of where it fits with their world as it develops and changes.

It is already following the traditional trend with new technologies, where the 'brand value’ is immediately extended downwards – in the case of AI to include Robotic Process Automation (RPA) and similar approaches for automating often the simplest of tasks. But there are also interesting signs of much richer thinking, visible during both the conference sessions and in the associated exhibition hall.

There are indications that some degree of maturity is already starting to emerge around AI as a broad topic. As Microsoft’s UK CEO Cindy Rose said to delegates in her opening keynote session, the time has come for a growing number of them to be thinking about scaling as they move from pilot projects on to full production:

Businesses need to understand two things: are they still working to understand the technology, or have they identified a real problem and worked out how AI can help solve it? If it is the latter, all they have to do now is scale it to fit.

This is also the time that another, non-technical, problem will tend to raise its head. Can the business managers bring their staff along for the ride or, as the application moves from project to production, will they start to feel threatened by the wider implications and potentials?

One problem for Microsoft is that many of its examples and reference cases come from the healthcare sector, and the NHS specifically. For a number of reasons, this tends to fall into the category labelled 'special’. For example, when it comes to scaling, the size of each hospital puts an upper limit on the size of any project, so scaling tends to be of the 'repeat ad nauseam’ variety. And when it comes to staff concerns about threats to jobs, the NHS's permanent state of shortages might tend to make them welcome any help they can get. Staff working the production line at an automobile manufacturer may well have a different view on such issues.


In a short video clip Microsoft’s CEO Satya Nadella raised another side issue fast becoming part of the bag and baggage surrounding AI: there is now a critical need for broad agreement on an ethical and empathetic framework for the design of AI applications and services. Meanwhile, Rose welcomed Prime Minister Boris Johnson's recent call for a Summit on AI in the UK next year. This would be important, not least because the consultancy PwC now reckons AI will contribute an estimated $15.7 trillion to the world economy in coming years.

Dr Chris Brauer of Goldsmiths, University of London addressed delegates on the subject of accelerating competitive advantage in business through the use of AI. Several people, from politicians to philosopher and ethicist Dr Blay Whitby of Sussex University, have suggested that the UK is well placed to succeed in AI, but in his view the country now has to move very aggressively.

One of the problems with AI, however, is that it is not a single, simple product, which makes such talk difficult to plan for and around. For example, Brauer quoted numbers from a new Microsoft report – Accelerating Competitive Advantage with AI – which states that businesses using AI have an 11.5% performance advantage over those that do not. Unfortunately, there was no indication of what that number measured. Was it profit, cost reductions, gross revenues or productivity?

This raises the question of whether making such comparisons is remotely the correct way to look at the problem of where the best advantages lie in any move to apply AI. It is quite possible that measuring such factors is like asking 'what type of fountain pen do we need?’ when the real issue is making legible, understandable notation. When an estimated 48% of the 56% of businesses that have at least some AI in place are said to be still in experimentation mode, it is far too early to speculate as to what will become the key metrics of success.

One of the fascinating, and arguably disconcerting, stats Brauer threw out was that the report also showed that while 96% of company employees were never consulted by bosses about AI, 83% of business leaders say employees have never actually asked them about AI. In other words, neither side is talking to the other, and each is assuming it is the responsibility of the other to initiate any conversation. That is certainly a bad situation that needs to be remedied.

Brauer called for the democratisation of AI, but that is unlikely to happen if neither management nor staff feel it right to talk to the other side. This is especially important because AI's potential to change the very essence of how businesses operate means there is, in fact, a need to consider the democratisation of the whole process of doing business – which could prove to be a completely different can of worms.


On the brighter side, Microsoft was able to demonstrate one or two good, even heart-warming, examples of AI in action, even outside the obvious area of healthcare. It fell to the company’s Chief Environmental Scientist Lucas Joppa to run through examples of how AI can be used to care for people and the planet, including the company’s own efforts to minimise its environmental impact by building sustainable campuses and data centres.

Microsoft has an AI for Earth programme, which works through partnerships and grants to researchers. It has 436 projects running so far and contributes by curating data, providing infrastructure and training personnel in new algorithms and other technologies. One example is Ocean Mind, a UK-based non-profit that uses AI, Big Data, Azure and satellite imaging, amongst other resources, to seek out illegal fishing operations – both large and small – around the globe, collating data from many sources to identify fishing boats operating where they should not. And because such operations are often far from 'accidental transgressions’ of geographic boundaries, the perpetrators are frequently quite ready to indulge in other nefarious activities.

So Ocean Mind now finds itself working to identify and help capture operators who crew their boats with slave labour. This, in turn, has led to working against those who would use such vessels for the people trafficking trade: a rather fine example not only of doing good, but also of why simplistic performance metrics may be no guide at all to the value that AI systems may add to the stock of human kindness over the long haul.

My take

At the moment just about everything in and around IT has to have an AI component to it, and some of it is, let’s face it, nonsense. But what that nonsense helps to demonstrate is that there is real value to be had in what AI may offer, and that the real developments in AI are yet to happen.

So CIOs have to, once again, adopt that contrary position of cynical enthusiasm, or perhaps enthusiastic cynicism - wanting to believe every word about it, while knowing that's not possible. Much of it, for now, is just bunkum, albeit not told with any malice.

But even the cynics can still be wrong – in the same way that the early days of cloud were beset by arguments between the private and public cloud camps, when all the while the answer was hybrid.

My takeaway from Decoded - don’t get sucked into the hype too early, but do be ready to pay attention when a real need can be both identified and met with an AI answer.
