Something of an IoT Primer – Part 1
- Summary:
- In part 1 of a two-part overview of getting some traction with the Internet of Things, Accenture’s IoT hot-shot, Craig McNeil, talks to diginomica about how to get past the assumption that the first – and only – step is to go all the way to the end state in one leap
While Den Howlett put his tongue firmly in cheek to point up key issues impacting acceptance of Internet of Things (IoT) as a viable technology concept, Accenture takes an understandably different view.
For many potential users of IoT, it is highly likely that their views on the subject divide neatly in two: the results, once the sensors, analytics and management tools are all in place, are probably going to be brilliant; but on the downside, they have little or no idea yet how to get there.
When it comes to enterprises running complex industrial processes, that downside is likely to look like a large and very risky mountain they need to climb. Does it require a full rip-and-replace of sensors and monitoring devices, and will continuous manufacturing processes need to be stopped so that can happen?
A chat with Craig McNeil, Managing Director of the Global Internet of Things Business at major consultancy Accenture, showed that getting some traction in exploiting IoT can be a good deal easier than many might suspect. Indeed, most businesses have already acquired most of what they need, but don’t know it yet.
That, in turn, means that IoT is a subject that many enterprises will be able to sneak up on by starting small, no matter how big the enterprise might be.
The primary issue is almost at the conceptual level: embarking on an IoT development can look like an all-or-nothing exercise that involves a major architectural shift. When there are many thousands of endpoints in play they can’t be managed individually, so they have to be grouped into logical sets of some description. The architecture has to be nodal, which seems to imply that most of the sensors and monitors may have to be replaced because their outputs cannot be readily integrated into an overall IoT architecture.
But architecting at that scale has the potential to reduce future flexibility and agility, leaving users trapped with an architecture that eventually no longer makes sense. McNeil agrees, but only up to a point.
Yes, that is in the right ballpark. The modular/nodal concept is certainly real for most of our clients on the industrial side, and we also see it on the non-industrial, such as in retailing. They are all having to aggregate and consolidate.
We’re a member of the Industrial Internet Consortium, and we have to work with a market where there are lots of different standards and protocols, and lots of companies developing kit right now that does not comply with any particular protocol. This is obviously a downside of the early days of any development.
We look to the IIC’s Industrial Internet Reference Architecture to help provide guidance on how various sensors and devices can work together. If you keep your attention on the guys that are driving it, then you don’t have to worry too much about architecting it perfectly, because you will be building in line with what the leaders in the field are doing.
Standards don’t equal rip-and-replace
This would seem to suggest that a lack of IoT-compatible standards means that applying IoT is going to be fearsomely complicated, extremely expensive and difficult to retrofit onto an existing network of sensors, especially if they are all different and built to different standards for different reasons. The consequential thought is that this either means architecting a very complex one-off solution or getting involved in a major rip-and-replace exercise.
I think it is reasonable from a theoretical standpoint. But I don’t see it happening with our clients, and here is why. I think we get caught up in the idea that there is a world with IoT, or one without. And if it is with IoT, then I have to produce this great business case, or I am in jeopardy. The reality is that, because it is so complex and with so many parameters, where we say we will achieve this or save that amount of money, it is very difficult to do that across an entire business.
The answer is, therefore, to set one’s sights a good deal lower. Using a manufacturing plant as an example, he suggests that just one plant – or one process line – is targeted. Then take some data from that, put it in the cloud and analyse it for a couple of months, looking for causality and trends based on that data.
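As a rough illustration of that kind of first-pass analysis, the sketch below fits a least-squares slope to a short series of readings to see whether a value is trending. The readings, units and names are hypothetical stand-ins for whatever real data a pilot line actually produces:

```python
from statistics import mean

def trend_slope(readings):
    """Least-squares slope of a series of evenly spaced sensor readings.

    A steady drift in, say, motor temperature over weeks of data is
    the kind of simple trend a pilot analysis looks for.
    """
    n = len(readings)
    x_bar = (n - 1) / 2          # mean of sample indices 0..n-1
    y_bar = mean(readings)
    num = sum((x - x_bar) * (y - y_bar) for x, y in enumerate(readings))
    den = sum((x - x_bar) ** 2 for x in range(n))
    return num / den

# Hypothetical weekly average motor temperatures (deg C) from one line
temps = [61.2, 61.5, 61.9, 62.4, 63.1, 63.9]
print(f"Temperature drifting at {trend_slope(temps):.2f} deg C per week")
```

A steadily positive slope on something like motor temperature is exactly the sort of finding that can anchor a plant-specific business case.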
This is not something that is going to cost millions to do, and it can be done in weeks. Then we try and build a business case for the client based on that real data, relevant to their situation. Eight times out of ten, it is very simple to make a business case based on that real data.
Once that is achieved, it becomes possible to add another shop floor or process, and then another. This incremental acquisition of manufacturing processes may sound slow, but it is still quicker than attempting to cover everything straight away, because building the business case takes time and it takes working with real data.
When it comes to the threat of rip-and-replace, McNeil suggested that there is now enough interest and investment in IoT that start-ups are now coming along with new approaches to monitoring that are specifically aimed at existing, brownfield sites.
New devices are starting to appear that can be stuck on the outside of systems to monitor their activity.
They can, over time, learn a lot about the state of a machine or component just from its vibration, sound or temperature patterns. Powered by long-life batteries, they can report back via WiFi.
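A minimal sketch of how such a bolt-on monitor might work, assuming it learns a baseline from an initial batch of samples and then flags sharp deviations. The class name, window size and three-sigma threshold are all illustrative assumptions, not any vendor’s actual method:

```python
from statistics import mean, stdev

class BoltOnMonitor:
    """Learns a machine's normal vibration level over an initial
    training window, then flags readings that deviate sharply
    from that baseline. Names and thresholds are illustrative."""

    def __init__(self, training_window=100):
        self.training_window = training_window
        self.samples = []

    def observe(self, rms_vibration):
        # Accumulate samples until the baseline is learned.
        if len(self.samples) < self.training_window:
            self.samples.append(rms_vibration)
            return None  # still learning
        baseline = mean(self.samples)
        spread = stdev(self.samples)
        # Flag anything more than 3 standard deviations off baseline.
        return abs(rms_vibration - baseline) > 3 * spread

monitor = BoltOnMonitor(training_window=5)
for reading in [1.0, 1.01, 0.99, 1.02, 0.98]:
    monitor.observe(reading)     # learning phase
print(monitor.observe(5.0))      # a spike well off baseline -> True
```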
And while they cannot directly control the component, other systems can if they have the right information. He is also seeing new monitors that can plug into existing ports on machines and sensors that interpret the data output in its native form and transform it to something more compatible with IoT management tools.
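The translation step those plug-in monitors perform can be sketched roughly as follows. This assumes a Modbus-style integer register read and an invented JSON payload shape; real IoT management platforms each define their own schemas:

```python
import json

def normalize_reading(device_id, raw_registers, scale=0.1, unit="deg C"):
    """Translate a native register read (e.g. a Modbus-style integer
    holding register) into a generic JSON payload an IoT platform
    could consume. Field names and scaling are illustrative."""
    value = raw_registers[0] * scale  # native fixed-point -> engineering units
    return json.dumps({
        "device": device_id,
        "measurement": "temperature",
        "value": round(value, 2),
        "unit": unit,
    })

# A raw register value of 615 at 0.1 scaling reads as 61.5 deg C
print(normalize_reading("press-line-3-sensor-7", [615]))
```

The point is that the machine keeps speaking its native protocol; only the add-on monitor has to understand both sides.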
This is, by definition, something of a temporary marketplace, but he expects ‘temporary’ to stretch to 25 years at least. And a lot of the systems being monitored cannot be stopped because they are continuous processes, so the availability of add-on sensors that can last the rest of the planned lifecycle for that system will be a real advantage over the long haul.
And while there are a lot of protocols out there, it is not as if there are thousands of them. For example, in the manufacturing business there are fewer than ten machine protocols in wide use, and it is the same in each sector of business. If you ask the guys that engineer these things, there are very specific times when you use one protocol over others. So the device manufacturers are simply throwing them all in. And we saw that in the early days of WiFi, and even computers too. The ports in my computer are very different from the ports that were in my last one.
And I’m not advocating that protocols should stay industry-specific and siloed, for some of the real benefits will come when some of them become baselines across all industries.
Rule one: test
So for any business the start-point is to work with real data. On the industrial side of IoT, most companies make at least some use of automation, and that machinery has been kicking off data into silos for years. So the important question for many of them is: what are they doing with that data?
Many don’t even store it at the moment, and that is a challenge in itself. What comes next is the need to do some compute on that data and understand what the answers mean. This has not been happening on the industrial side.
McNeil’s recommendation is that they test on data collected from just one small production line or process in the business. He sees it as almost impossible for any enterprise to look into a crystal ball and know where to go. So they need a way to do testing on smaller parts of the system before getting involved in a multi-million dollar investment.
If you can test with real data it helps you home in on where you need to focus.
One of the interesting side issues for many enterprises is therefore going to be finding out how much potential IoT is already in situ without them realising it is there.
He acknowledged that Accenture’s experience suggests that in some companies there is a real lack of knowledge on what data is available from sensors and monitors, or how to access it. More commonly, the data is being collected and stored but it is not being networked in any way.
But as yet the company doesn’t have a feel, across the board, for the volume – or value – of the data that is being lost, or at least not being exploited. But he is aware that it is probably huge, if only from the knowledge that has been built up about some specific market areas.
In smart buildings we know enough about how clients are managing buildings, particularly in areas such as energy management, that we can now determine, for a given size of building, the energy saving that exploiting the data could deliver, and be specific about the percentage. It is still hard to do across the board because the amount of data out there is still a bit of an unknown.
In Part 2, McNeil talks about proving IoT’s effectiveness and that all-important metric – the Return on Investment.
My Take
Moving to IoT obviously requires some degree of a top-down drive from senior management, but that does not mean it has to be a top-down, all-or-nothing business-wide endeavour. It can be sneaked up on, and most of the tools to do that already exist for most enterprises.