Applying AI to cyber security is a force multiplier attracting big investors and customers
- Summary:
- Cyber security is always a top-of-mind topic in the IT department. Now AI-related methods are being applied in the next wave of security techniques. How is this manifesting in the real world?
If noise levels are a good indicator, people can't get enough of AI, spanning audiences from the techno-elite to the Everyman non-specialist.
After more than 40 years in the technology desert, where AI was mocked as "the technology of the future, and it always will be," AI has finally arrived. At least, that's what the Hype Machine is telling us, and you can certainly see how that makes sense.
The confluence of parallel computing platforms using GPUs, innovative algorithms based on neural nets and massively scalable infrastructure available in the cloud has fueled an explosion of innovation and corresponding applications. Although much of the hype, and more than a little PR AI-washing, centers on AI's use for image recognition, natural language processing and business intelligence, the same strength in pattern matching that enables algorithms to 'see' a potential tumor or identify latent customer preferences makes AI a powerful tool in cyber security.
As attacks have grown more sophisticated, stealthy and devious, often by weaponizing leaked software developed by sovereign intelligence agencies, they have become much harder for organizations to defend against. Even those with dedicated security teams using the latest products as part of a multi-layered defense often find themselves overwhelmed by data and false alarms, struggling to separate legitimate attack signals from streams of noise.
One survey found that "70 percent of security industry professionals believe threat intelligence is often too voluminous and/or complex to provide actionable insights." Another IDC study found that over half of respondents contend with 10 or more "actionable security-related alerts" a week, with some drowning under hundreds of such events. Although vendor-sponsored surveys are typically self-serving, they're much more credible when they align with anecdotal news accounts.
We all remember post-mortems of the infamous Target breach that showed the company's IT staff ignored alerts from FireEye monitors the company had recently installed. Worse still is when managers fail to follow up when security teams warn of unusual activity, such as at Yahoo, where a recent 10-K filing revealed that (emphasis added):
The Company’s information security team had contemporaneous knowledge of the 2014 compromise of user accounts, … In late 2014, senior executives and relevant legal staff were aware that a state-sponsored actor had accessed certain user accounts by exploiting the Company’s account management tool. The Company took certain remedial actions, notifying 26 specifically targeted users and consulting with law enforcement. While significant additional security measures were implemented in response to those incidents, it appears certain senior executives did not properly comprehend or investigate, and therefore failed to act sufficiently upon, the full extent of knowledge known internally by the Company’s information security team. Specifically, as of December 2014, the information security team understood that the attacker had exfiltrated copies of user database backup files containing the personal data of Yahoo users but it is unclear whether and to what extent such evidence of exfiltration was effectively communicated and understood outside the information security team.
In situations like these, where security teams are overwhelmed with unprioritized and often extraneous data, or where managers don't understand the significance of legitimate security warnings, AI can act as a force multiplier.
Whether by adding a powerful set of eyes to screen and filter security data, by improving defenses through reinforcement learning and predictive analytics, or by applying machine learning to highlight significant incidents and predict the severity and business implications of identified threats, software using a variety of AI techniques represents a significant advance in security technology.
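To make the "force multiplier" idea concrete, here is a minimal sketch of how machine learning can prioritize alerts by predicted severity, so analysts triage the riskiest events first. The feature names and weights are invented for illustration and not drawn from any vendor's product; a real system would learn the weights from labeled incidents.

```python
import math

def score_alert(alert, weights):
    """Weighted sum of alert features, squashed to a 0-1 severity score."""
    z = sum(weights.get(name, 0.0) * value
            for name, value in alert["features"].items())
    return 1 / (1 + math.exp(-z))  # logistic squash

# Toy weights a real system would learn from labeled incidents.
WEIGHTS = {"known_bad_ip": 2.5, "off_hours": 0.8,
           "data_volume_gb": 1.2, "failed_logins": 0.6}

alerts = [
    {"id": "a1", "features": {"off_hours": 1, "failed_logins": 2}},
    {"id": "a2", "features": {"known_bad_ip": 1, "data_volume_gb": 3}},
]

# Rank alerts so the highest predicted severity surfaces first.
ranked = sorted(alerts, key=lambda a: score_alert(a, WEIGHTS), reverse=True)
for a in ranked:
    print(a["id"], round(score_alert(a, WEIGHTS), 3))
```

The point of the exercise: the alert touching a known-bad IP and moving large data volumes outranks the merely odd-hours login, which is exactly the separation of signal from noise the surveys above say teams are missing.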
Everyone's riding the AI bandwagon
AI has been an active area of security research and investment, and numerous projects at startups and established vendors alike are now bearing fruit.
I've previously detailed the work of Deep Instinct in using AI to create malware detection software that can identify previously unknown threats in real time from a raw stream of network traffic. I've also discussed SparkCognition, which applies a similar machine learning approach to malware detection, and SMFG, which has built a neural network for fraud detection. These work by detecting obscure, latent patterns common to malicious applications, not by matching code signatures against a database of known malware.
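The difference between signature matching and learned pattern detection can be sketched in a few lines; the byte strings below are made up, and the n-gram profile and similarity measure are purely illustrative stand-ins for a trained model.

```python
from collections import Counter

def ngrams(data: bytes, n: int = 2):
    """Frequency vector of byte n-grams, the kind of latent feature an ML detector learns from."""
    counts = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def similarity(a: dict, b: dict) -> float:
    """Overlap between two n-gram frequency vectors (0 = nothing in common)."""
    shared = set(a) & set(b)
    return sum(a[k] * b[k] for k in shared) ** 0.5

KNOWN_SIGNATURE = b"\xde\xad\xbe\xef"         # exact-match database entry
malicious_sample = b"\xde\xad\xbe\xef\x90\x90"
variant = b"\xde\xad\xbe\xe0\x90\x90"          # one byte changed: signature miss

# Signature matching fails on the variant...
print(KNOWN_SIGNATURE in variant)              # False

# ...but the variant still resembles the malicious n-gram profile
# far more than benign data does.
profile = ngrams(malicious_sample)
print(similarity(ngrams(variant), profile) >
      similarity(ngrams(b"hello world"), profile))  # True
```

A single flipped byte defeats the signature database, but the statistical profile of the variant barely moves, which is why pattern-based detectors can flag malware they have never seen before.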
There are many other noteworthy examples of AI being applied to security problems, and their diversity illustrates the potential of the new algorithms across a wide range of security challenges.
- Balbix tackles the problem of information overload and risk assessment by automatically creating an inventory of an organization's devices and applications, scanning them for nine different types of vulnerabilities and using the aggregated results to produce an overall risk profile. According to founder and CEO Gaurav Banga, Balbix creates "a heat map of an organization's attack surface" by visually prioritizing vulnerabilities based on machine learning models. Unlike many security products that are designed for specialists, Balbix attempts to provide useful insights for business executives by analyzing the business and financial impact of vulnerabilities and an organization's broad security posture and resilience.
- Cylance has a portfolio covering both device protection and security monitoring. PROTECT is an ML-based endpoint malware detection and prevention product in the same genre as Deep Instinct and SparkCognition's Deep Armor, while CylanceV uses similar algorithms to quickly scan infrastructure and file shares for vulnerabilities. CylanceOPTICS is security monitoring software, similar to SparkCognition's SparkSecure, that uses predictive algorithms to identify threats, speed incident response and improve forensic root-cause analysis.
- Elastic Beam does for APIs what other AI products do for endpoint protection, using AI to detect and block attacks against application interfaces. As more organizations expose services online, APIs have become a significant and often-overlooked vulnerability, whether from DDoS attacks designed to block legitimate users or from cyber criminals seeking to exploit flaws in API implementations to steal data or login credentials. According to founder and CEO Bernard Harguindeguy, as a cloud service Elastic Beam can be dropped into any environment, and it also uses captured API data to generate usage and compliance reports.
- Shift Technology is another company applying machine learning to fraud detection, this time in the insurance industry. The company has been quiet since its last round of funding; however, it targets claims processors and, according to its online ticker, has analyzed over 80 million claims to date.
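The inventory-scan-prioritize pattern several of these products share, notably the risk heat map Balbix describes, can be sketched as follows. The device names, vulnerability categories, scores and business-impact weights are all invented for illustration.

```python
# Per-device vulnerability scores (0 = clean, 1 = critical), the output
# a scanner might produce across an inventoried attack surface.
DEVICES = {
    "web-server": {"unpatched": 0.9, "weak-creds": 0.2, "exposure": 0.8},
    "hr-laptop":  {"unpatched": 0.4, "weak-creds": 0.7, "exposure": 0.1},
    "db-server":  {"unpatched": 0.6, "weak-creds": 0.1, "exposure": 0.9},
}

# How much each asset matters to the business, which is what turns a
# technical finding into an executive-level risk statement.
BUSINESS_IMPACT = {"web-server": 0.7, "hr-laptop": 0.3, "db-server": 1.0}

def risk(device: str) -> float:
    """Likelihood (worst vulnerability on the device) times business impact."""
    return max(DEVICES[device].values()) * BUSINESS_IMPACT[device]

# The "heat map" as a ranked table: the riskiest assets bubble to the top.
for name in sorted(DEVICES, key=risk, reverse=True):
    print(f"{name:12s} risk={risk(name):.2f}")
```

Weighting technical findings by business impact is what reorders the list: the database server outranks the more obviously vulnerable web server because a breach there would hurt more, which is the kind of prioritization executives can act on.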
Startups aren't the only ones applying AI to security: Cisco uses machine learning in several modules of its new network security suite, including encrypted traffic analysis within DNA Center. Many large companies choose to acquire AI security technology from startups, with several notable deals already in 2017:
- Amazon reportedly bought Harvest.ai, which uses AI-based behavior analysis for data loss prevention (DLP).
- HP Enterprise purchased Niara, which also uses intelligent behavior analysis to detect and stop security anomalies, and which HPE will incorporate into the ClearPass security product from Aruba (yet another HPE acquisition).
- Sophos acquired Invincea, which like Deep Instinct uses deep learning along with behavioral monitoring to detect and block previously unknown, zero-day malware.
- Accenture purchased the federal government arm of Endgame, a company using machine learning to detect and block unknown attacks, along with behavioral analytics as part of its endpoint security suite.
My take
AI is a broad and, sadly, often abused term — the taxonomy provided by Loup Ventures is useful for understanding its various manifestations. Whether using deep learning for malware detection or machine learning for behavior analysis and risk assessment, the technology has broad applications to cyber security.
I don't believe Gartner is overstating things when it predicts that by 2020, "advanced security analytics [by which it means AI, ML and heuristics] will be embedded in at least 75% of security products."
Anyone shopping for security software and services should press prospective vendors on their AI roadmap, and bypass those without a legitimate, compelling answer.
Unfortunately, if we've learned nothing else from several decades of cyber attacks and defenses, it's that security is a never-ending process of response and counter-attack, where the evolutionary cycle of the aggressors is on Internet time.
Expect attacks to employ increasingly evasive and stealthy techniques that use AI to enhance their effectiveness. It's unclear whether AI will tip the balance in favor of security defenses, but it's the best opportunity yet for gaining the upper hand in the fight to protect information, infrastructure and users.