Machine learning, the dark web and cybercrime - an unholy trinity

Martin Banks, December 7, 2017
Summary:
What should we expect in the ongoing arm-wrestle with cybercriminals as malware gets smart, devious and hard to find?

Most people know about IoT, at least to the level of the need to monitor and control the hundreds or thousands of individual sensors and devices that go to make up a complete process or system. Most people will also have heard of ransomware. Many, either as individuals or as IT staff in businesses, will have seen first-hand the consequences of an attack. Now stick the two ideas together.

That, in the opinion of Derek Manky, Global Security Strategist for Fortinet, is what is coming down the line at us all as the next major security threat. The combination will come in the form of hivenets and swarmbots, and the result could be far more targeted and focused attacks. These will be based not on the familiar process of breaking into a system with a single malware exploit and launching an attack, but on inserting untold numbers of bots into systems to observe activity, identify the weak points and collectively decide where and when to attack.

Manky outlined this attack model at the recent seminar Fortinet held at its Sophia Antipolis facility just outside Nice, where he discussed some of the cyber security threats he sees coming through over the next 12 months or so. The idea behind the new developments will be no surprise to those of an entomological persuasion, for the idea is to get as many small bots onto systems as possible, communicating in the same way that insects do when they are in a hive or when swarming. To humans, this becomes difficult to identify, says Manky:

Humans are slow when it comes to humans vs. automation, and there’s more of this automated, day-to-day stuff around. So you now have to offload it into your information technology, where you can use some intelligent decision-making tools on the data. Thankfully we haven’t seen anything completely destructive; it’s a lot of proof-of-concepts. For example, we’re seeing huge attack activity of Reaper right now, but it’s just printing off some test tricks. It’s not doing anything malicious. But it is actively hooked up to a command and control system, and all they’ve got to do is start developing a payload.

This approach takes the existing botnet approach – sometimes described as individual zombies that do nothing until instructed to – and inflates it enormously, so that there can be millions of them. They are all interconnected and so communicate in much the same way as a swarm of insects. This is the swarmbot, and because they are communicating they can act collectively and become smart.
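To make that collective smartness concrete, here is a deliberately abstract toy model – a defender’s-eye sketch in Python, with every name and threshold hypothetical – of how agents that only gossip pairwise can still converge on a group decision that no single member could have made alone:

```python
import random

class Agent:
    """A toy swarm member holding one local observation."""
    def __init__(self, observation: float):
        self.belief = observation  # e.g. a locally estimated "weakness" score

    def gossip(self, peer: "Agent") -> None:
        # Pairwise exchange: both parties move to the average of their beliefs.
        shared = (self.belief + peer.belief) / 2
        self.belief = peer.belief = shared

def swarm_decision(observations: list[float], rounds: int = 200) -> bool:
    """True once the swarm's pooled belief crosses a (hypothetical) action threshold."""
    agents = [Agent(o) for o in observations]
    for _ in range(rounds):
        a, b = random.sample(agents, 2)
        a.gossip(b)
    consensus = sum(a.belief for a in agents) / len(agents)
    return consensus > 0.8

# No single observation is decisive, but the pooled estimate can still tip over.
print(swarm_decision([0.9, 0.85, 0.7, 0.95, 0.88]))  # True
```

Real hivenets would be adversarial and far messier than this, but the sketch shows the essential shift: pooled local observations beat any single zombie’s view of the world.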

And in the same way that a hive can give a swarm of insects a command to implement in ways that suit its circumstances, the swarmbot can decide on its implementation and timing based on local intelligence as it travels and grows. And it grows by compromising more devices as it goes. This, he suggested, will give them the collective capability to launch multiple, simultaneous attacks around the world. In Manky’s view malware such as Mirai and Reaper have reached the point where they have sufficient footprint across the user community to become a significant security threat. All they require, he suggests, is a code upgrade to start swarming and delivering payloads.

The company has stated that in one quarter earlier this year it recorded 2.9 billion botnet communications attempts across all types of IoT devices – including end-user mobile devices – that are the precursors to the start of swarming capabilities. And because this approach is capable of collectively identifying and targeting specific processes, it opens up the possibility of not just ransom attacks but direct extortion and destructive actions.
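Those precursor communications are also the defender’s opportunity. As a minimal illustration – assuming you already have per-device outbound connection counts from flow logs, and with all device names invented – a robust outlier check can surface a beaconing device that a simple average would hide:

```python
from statistics import median

def flag_chatty_devices(conn_counts: dict[str, int], threshold: float = 10.0) -> list[str]:
    """Flag devices whose outbound connection count is a robust outlier.

    Median and MAD are used instead of mean and standard deviation so one
    wildly chatty device cannot drag the baseline up and hide itself.
    """
    counts = list(conn_counts.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return []
    return [dev for dev, n in conn_counts.items()
            if 0.6745 * (n - med) / mad > threshold]

# Hypothetical hourly flow-log summary: one camera beacons far more than its peers.
hourly = {"cam-01": 42, "cam-02": 38, "cam-03": 45, "cam-04": 40, "cam-05": 9700}
print(flag_chatty_devices(hourly))  # ['cam-05']
```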

The hivenets work on decision trees which allow them to blueprint all the devices resident on a network, and one of the key problems with much of IoT is that these devices are inherently trusted, explains Manky:

They’re not segmented properly on our networks. You see an attack code that is able to do fingerprinting, see what sort of device is on the network, what platform that device is on. Then the intelligence in the swarm is saying ‘OK, I fingerprinted you, I know that you are this device. So the best way to actually own you, capitalise on you, is to deliver this probability and this payload.’

Earlier this year we saw a proof-of-concept called ‘Burkabot’. This was flashing in ROM on an IoT device ... it was nearly turned on. If you think about an IoT-based network that has access to millions and millions of devices out there and they hit that switch ... that is the attack surface problem. Twenty billion-plus devices and it continues to grow – they are the weakest link for attacking the cloud.
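Manky’s segmentation complaint is at least straightforward to audit. A minimal sketch, assuming a simple asset inventory that maps each device to an IP address and a class (all names and addresses below are hypothetical):

```python
import ipaddress

def unsegmented_iot(inventory: dict[str, tuple[str, str]], prefix: int = 24) -> list[str]:
    """List IoT devices that share a subnet with a critical system."""
    def subnet(ip: str) -> ipaddress.IPv4Network:
        return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

    critical_nets = {subnet(ip) for ip, cls in inventory.values() if cls == "critical"}
    return [name for name, (ip, cls) in inventory.items()
            if cls == "iot" and subnet(ip) in critical_nets]

# Hypothetical inventory: a camera and an HVAC controller routable to the ERP database.
inventory = {
    "hvac-ctrl-1": ("10.0.1.15", "iot"),
    "ip-cam-7":    ("10.0.1.22", "iot"),
    "erp-db-1":    ("10.0.1.40", "critical"),
    "badge-rdr-3": ("10.0.2.9",  "iot"),
}
print(unsegmented_iot(inventory))  # ['hvac-ctrl-1', 'ip-cam-7']
```

Anything the check flags is a device that a fingerprinting swarm would treat as one hop from the crown jewels.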

CSPs – the next big target

The corollary of this technology development will be, he predicts, a shift in ransomware targets to the cloud service providers (CSPs) themselves. With a predicted growth rate of 19%, CSPs are expected to generate cumulative revenues of over $160bn by 2020. That makes them an obvious target, especially if the hivenet and swarmbot approaches can find a way in.
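As a rough sense-check on that figure, compounding works out as follows (the base-year revenue here is purely illustrative, chosen only to show how a 19% growth rate accumulates to that order of magnitude):

```python
def cumulative_revenue(base_bn: float, rate: float, years: int) -> float:
    """Sum annual revenue over `years`, growing at `rate` per year from `base_bn`."""
    return sum(base_bn * (1 + rate) ** y for y in range(years))

# Illustrative only: a ~$45bn base year compounding at 19% for three years
# accumulates to roughly the $160bn order of magnitude quoted for 2020.
print(round(cumulative_revenue(45.0, 0.19, 3), 1))  # 162.3
```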

He expects to see these attack techniques combined with AI to direct multi-vector attacks that are likely to include not just encrypting system and customer files, but also the threat of destructive payloads that could stop a CSP from operating for some time while complete systems are reconstructed.

What is, perhaps, even more important is that many major cloud services, which end users might assume are hosted in purpose-designed data centers, are in fact themselves often hosted on one of the major global CSP operations. If a globally directed, AI-managed swarmbot attack breached one of those service providers the threat could have far-reaching consequences.

AI is likely to be an increasingly important underpinning technology for cybercriminals across the board. For example, it is expected to underpin the development of tools that can map networks, identify weak spots in a security regime and then organise resources for an attack.

It is also expected to underpin developments in polymorphic malware and take it beyond the current approach of using pre-coded algorithms to change the form and signatures of malware so that detection by anti-malware tools becomes much more difficult. It is reckoned such approaches can produce more than a million variations of a virus in a single day.
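The reason variant-flooding defeats signature matching is mechanical: change a single byte of a file and its cryptographic hash – the basis of many signature databases – changes completely. A quick demonstration:

```python
import hashlib

# Two "variants": functionally identical bytes differing by a single padding byte.
payload_a = b"identical functional code" + b"\x00"
payload_b = b"identical functional code" + b"\x01"

# One byte of difference, two unrelated signatures: a blocklist keyed to the
# first hash learns nothing about the second variant.
print(hashlib.sha256(payload_a).hexdigest())
print(hashlib.sha256(payload_b).hexdigest())
```

Which is why a million machine-generated variants a day pushes defenders away from static signatures and towards behavioural detection.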

Adding AI to the mix, however, will allow the malware to start creating new forms of attack that are customised to hit weak-points that the AI has identified. 

That customisation could create attacks that are specifically tailored to avoid detection by normal cyber security defences. So systems could be hit without ever knowing where the attack came from.

This means that the core infrastructure of major cloud service providers could become the primary target for cyber-criminals. And the type of attack could move from temporary encryption of data – both the CSP’s and its customers’ – to at least the threat of direct and permanent damage to that infrastructure.

The approach will apply just as well, if not better, to many legacy network infrastructures currently operating in enterprises both large and small, especially those that were designed to work in isolation but now find themselves connected to the wider world. Manky suggests this will require many enterprises to engage in significant, and rapid, redesign of their core network infrastructures.

As a final irony, Manky also observed how the criminal community are using the dark web to develop cyber-crime services that are then made available to their ‘compatriots’ on the dark side. One such is known as FUD (not IBM’s famous Fear, Uncertainty and Doubt, but the exact opposite: Fully Un-Detected). This is a service that allows cybercriminals, for a fee, to test their attack malware against the best defences the security tools vendors have to offer. In return they get a report on the performance of their planned exploit.

This information is then fed to machine learning systems that refine and optimise the attack code more quickly and effectively. The idea is not only to speed the delivery of attack code from concept to implementation, but also to make it more malicious and more difficult to detect.

My take

It goes without saying that it was only a matter of time before the bad guys picked up on the possibilities of AI and machine learning to move cyber criminality to a new level. It is equally inevitable that the same technologies will have to be used to defend business systems. And one of the key ways this will have to happen is in using them to learn how to stop humans being humans when it comes to using IT systems.

We may never lose that instinctive reaction to something bright, sparkly and new that spawns the standard response: "Ooh, that’s interesting, I’ll just click on it."

If we can’t stop that, and many other behaviour patterns, then the systems will have to do it for us – for those clicks are the most common way in for an attack exploit. And from now on that exploit will not just act, or wait for a while before acting. Now it will quietly propagate, and look, and learn, and mutate accordingly. Then, when the time is right, it will pounce – and by then it will be too late to defend against it.

Will that stifle us – the age-old cry against anything that hampers ‘innovation’? Yes, maybe a bit. But perhaps rather than clicking on a random sparkly link in an idle moment at work, do something different. Maybe even think about why you were idle, and what could be done with a business process to make it work better.
