The CIO of the US federal government has warned about the demands that ‘big data’ and the Internet-of-Things could place on traditional systems, which he believes have inherent design flaws and cannot cope with either the threats or the workloads.
Speaking at the 2016 ICIT Critical Infrastructure Forum, Tony Scott called for a rethink of how all the building blocks of a typical technology stack are designed, arguing that systems should be self-aware, so that they surface only safe and genuinely important information for analysis.
Scott fears that there is not enough capacity to be able to cope with the influx of data that will be inevitable if everything is connected.
He said that the principles that pushed system design down this route (commoditisation, standardisation, interoperability) have not resulted in systems suited to a hyper-connected environment. Scott said:
This all has to do with architecture and design. I’m not going to get into a long sort of engineering discussion here about how to go and build things with a lot of detail. Instead I want to talk about a reasonably high level concept. That is the fundamental nature of how we build things today, which I think is fundamentally flawed.
The basic architecture still relies on chips, operating systems, network components, storage. The basic building blocks of how we build applications and infrastructure haven’t changed in roughly thirty or more years. All of this was designed at a time when we didn’t face the cyber threats we have today.
The dominant theme amongst the technology builders of these components has been massive interoperability, through the use of standards. We have fulfilled that promise big time. You can take almost anybody’s thing and have it interact with somebody else’s thing, because they all do standards in a pretty solid way today. Any storage can attach to any compute, any network can attach to anything else, it’s all great.
We need to rethink
However, Scott said that in today’s world the question shouldn’t be whether you can interoperate with other things in your ecosystem; the question is: do you want to?
Is it safe to connect with all these ‘things’? Are those ‘things’ what they represent themselves to be? Is the current model scalable across everything that needs to be done going forward? Scott said:
That’s the big issue that’s facing us, especially in this Internet-of-Things world.
The question more and more and more will be: should I interoperate? Is that thing a danger to me? Does it represent a risk? Those are all the critical questions that we are going to have to answer as we build out the infrastructure, architecture and ecosystem that we will enjoy great benefits from no doubt. But we also have to be worried about the other side.
Scott said that he is excited about the R&D that’s going on to help further create models and understanding in this area, but he also added that he thinks that more change is required. He wants to see more resiliency built into the building blocks in the first place, where he believes that these blocks lack some key features. He said:
The first is self awareness. By this I mean: am I healthy? Do I have all the pieces I need? Am I still operating in a way that I was designed to operate? Have I been compromised in any way? Am I being asked to do something that I shouldn’t be doing? Can I communicate to other things in the ecosystem about whether I’m healthy or not? Just like the human body, if there is something that does go wrong, can I call and get defensive help to come and heal? Or on the other hand, substitute somebody else in who is healthy.
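Scott’s checklist of questions can be read as an interface a component would have to implement. As a purely illustrative sketch (the class, names and fingerprinting scheme below are invented for this article, not anything Scott specified), a self-aware building block might look something like this:

```python
import hashlib

class SelfAwareComponent:
    """Hypothetical building block that can answer Scott's questions:
    am I healthy, am I operating as designed, should I do what I'm asked?"""

    def __init__(self, name, allowed_operations):
        self.name = name
        self.allowed_operations = set(allowed_operations)
        # Fingerprint of our own configuration, taken at a known-good moment.
        self._baseline = self._fingerprint()

    def _fingerprint(self):
        state = f"{self.name}:{sorted(self.allowed_operations)}"
        return hashlib.sha256(state.encode()).hexdigest()

    def is_healthy(self):
        # "Am I still operating in the way that I was designed to operate?"
        # Here: has my configuration drifted from the known-good baseline?
        return self._fingerprint() == self._baseline

    def should_perform(self, operation):
        # "Am I being asked to do something that I shouldn't be doing?"
        return self.is_healthy() and operation in self.allowed_operations

    def health_report(self):
        # "Can I communicate to other things in the ecosystem
        #  about whether I'm healthy or not?"
        return {"component": self.name, "healthy": self.is_healthy()}

sensor = SelfAwareComponent("door-sensor", ["read_state"])
print(sensor.should_perform("read_state"))   # a permitted operation
print(sensor.should_perform("open_shell"))   # something it was never meant to do
print(sensor.health_report())
```

The point of the sketch is that health and permission checks live inside the component itself, and the health report is something other parts of the ecosystem can query, which maps onto Scott’s idea of calling in “defensive help” or substituting in a healthy replacement.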
Scott believes that these are the characteristics of “healthy building blocks to build systems of the future”. He said that these principles could be applied to networks, to storage and all other building blocks. But he added:
I’m very worried about a trend that I am seeing - and by the way, I’m a big fan of big data - but I don’t believe that we can collect logs and analyse them for everything that’s going to be a participant in the Internet-of-Things. I don’t think there’s enough compute power or enough data science to do that effectively at really large scale. I believe that we can use those tools effectively for some things, but not everything on the planet. It’s just an intractable kind of problem.
So we need building blocks that are more capable of taking care of themselves. Of protecting themselves from bad things. Of reducing the amount of logs that get generated, so that only the things that are really interesting or really need to be analysed get generated, instead of every single event.
In the long run we need more R&D for coming at and designing these things from a different design point than where we have come from in the last 15 to 20 years. I think this is something that we should be talking about.
By ‘aware’, Scott means a level of automation that happens lower down the stack. Instead of collecting all the data you have, sifting through it for threats, finding valuable insights and ditching the rest, he wants the most important data automatically surfaced by the system, and for the system to flag whenever anything is wrong.
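In code terms, the shift Scott describes is from shipping every event to a central log store to having the device decide at the source what is worth reporting. A minimal illustration of that idea (the thresholds, field names and readings are invented for this example):

```python
# Hypothetical source-side filter: the device emits only events that fall
# outside its normal operating band or that it has flagged as severe,
# rather than streaming every reading upstream for central analysis.

NORMAL_RANGE = (15.0, 30.0)   # e.g. an expected temperature band (invented)

def surface_events(readings, severity_threshold=2):
    """Return only the readings worth an analyst's attention."""
    surfaced = []
    for reading in readings:
        out_of_range = not (NORMAL_RANGE[0] <= reading["value"] <= NORMAL_RANGE[1])
        severe = reading.get("severity", 0) >= severity_threshold
        if out_of_range or severe:
            surfaced.append(reading)
    return surfaced

events = [
    {"value": 21.0, "severity": 0},   # routine: suppressed at the source
    {"value": 48.5, "severity": 1},   # out of range: surfaced
    {"value": 22.3, "severity": 3},   # device flags it as severe: surfaced
]
print(len(surface_events(events)))   # 2
```

The design choice here is the one Scott argues for: the filtering logic, however simple, runs in the building block itself, so the volume of data reaching any central analytics system scales with how much goes wrong, not with how many devices exist.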
The likes of Splunk are already talking about this to some degree, but I’ve got a feeling it’s not quite the level that Scott is making the case for. Also, automation and intelligence bring with them a lot of questions about trust and reliability.