Security Analytics Myths and Realities

02 Sep

by Karthik Krishnan

The fox knows many things, but the hedgehog knows one big thing. – Archilochus, 700 BC

At a recent trade show, it became clear to me that much of the misunderstanding about what security analytics is (and is not) capable of is perpetuated by security vendors themselves. Attendees were flooded with signage proclaiming fantastical feats of analytics and machine learning – promises of what each could do to relieve customers of their security ills:

  • User behavior analytics for 99.99% false positive reduction
  • User behavior analytics on authentication logs alone – such as Active Directory (AD) and virtual private network (VPN) logs – will automatically detect compromised users
  • Automatic, real-time threat detection using machine learning
  • SIEMs promised to aggregate disparate data sources and perform analytics on them, and failed miserably. Therefore, the problem must be in looking at multiple data sources. We will focus on network traffic alone, apply machine learning to it and detect breaches in real time.

These claims are not merely incorrect; they are dangerous, because they sell customers a bill of goods. Such proclamations create unrealistic expectations of what a solution actually delivers, setting customers up to fail in their security analytics initiatives.

To understand why, let’s first look at the world of analytics through the lens of a recent real-world attack. The breakdown below walks through the attack, outlining each stage the attacker went through, the purpose of that stage, and the analysis and data sources needed to gain insight into it.

A Recent Attack Deconstructed

Stage 1: Attacker spearphishes an employee
  • Purpose: Gain a foothold on the employee’s machine
  • How the security team could gain insight: Email traffic analysis for signs of spearphishing
  • Data source needed: Ingress/egress email payload

Stage 2: Attacker gains access to the employee’s machine
  • Purpose: Access tools to help conduct internal reconnaissance and exfiltrate passwords
  • How the security team could gain insight: In-to-out payload analysis for signs of anomalous downloads
  • Data source needed: Ingress/egress payload

Stage 3: Attacker establishes contact with a dynamically assigned command-and-control server
  • Purpose: Establish a connection over which to receive instructions
  • How the security team could gain insight: DNS analysis for signs of machine-generated DNS resolution requests
  • Data source needed: DNS logs

Stage 4: Attacker downloads XOR-encoded shell commands via HTTP POST
  • Purpose: Receive the actual instructions
  • How the security team could gain insight: In-to-out payload analysis for signs of shell commands in HTML comments
  • Data source needed: Ingress/egress traffic

Stage 5: Attacker logs into the domain controller
  • Purpose: Access the database that holds AD usernames and password hashes for several user workstations
  • How the security team could gain insight: User behavior analysis of AD activity for signs of deviations in behavior
  • Data source needed: AD logs

Stage 6: Attacker uses RDP to reach other user workstations
  • Purpose: Move laterally, gaining access to more information
  • How the security team could gain insight: User behavior analysis of machine-to-machine connections for signs of deviation
  • Data source needed: NetFlow

Stage 7: Attacker exfiltrates data
  • Purpose: Exfiltrate data obtained from reconnaissance via FTP
  • How the security team could gain insight: In-to-out traffic analysis for signs of exfiltration
  • Data source needed: Ingress/egress traffic
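
To make one of these analysis steps concrete, here is a minimal sketch of what the stage 3 step – DNS analysis for signs of machine-generated resolution requests – might look like in its simplest form: scoring the character entropy of queried names. This is purely illustrative, not a production detector; the length cutoff, entropy threshold and example domains are assumptions, and a real detector would fold in many more features (n-gram scores, query volume, NXDOMAIN rates).

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy, in bits per character, of a string."""
    counts = Counter(s)
    total = len(s)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_machine_generated(domain: str,
                            min_length: int = 10,
                            entropy_threshold: float = 3.5) -> bool:
    """Crude heuristic: take the leftmost label of the queried name and flag it
    if it is long and has unusually high character entropy. The cutoffs are
    illustrative assumptions, not tuned values."""
    label = domain.lower().rstrip(".").split(".")[0]
    return len(label) >= min_length and shannon_entropy(label) > entropy_threshold

# A human-registered name vs. a DGA-looking one (hypothetical examples)
for name in ("mail.example.com", "xj9q2kfh7a1bz0d.example.net"):
    print(name, "->", looks_machine_generated(name))
```

Even a toy heuristic like this makes the broader point: each stage demands its own analysis against its own data source.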

Unfortunately for organizations, multi-stage attacks like the one detailed above are increasingly the rule rather than the exception. To explain why so many breaches are happening in spite of heavy investment in cyber security technology, I’ve outlined four top-line challenges with detecting threats on the inside of the network.

  • Weak signals: Most individual signals are weak, and generating an alert for every weak signal creates a deluge that the Security Operations Center (SOC) and its analysts cannot keep up with.
  • Underfitting: Promising real-time detection of multi-stage attacks with machine learning implies that a model can identify an ongoing breach by flagging variances at any single stage of the kill chain. But if alerts are generated for every individual event, you could see thousands of alerts each day, many of them false positives. This failure mode is called “underfitting”: the statistical model or machine learning algorithm is too simple to capture the underlying trend in the data, showing high bias and low variance. Your SOC analysts end up chasing ghosts through the deluge of alerts.
  • Overfitting: The alternative is to build a more complicated model that accounts for multiple stages, using machine learning to detect variances along the kill chain and raising an alert only when, say, command-and-control activity is followed by internal reconnaissance. Such a model often lends itself to “overfitting” – it fits the training data too well, showing low bias and high variance. (A toy illustration of both failure modes follows this list.)
  • Variable data sources: Getting a handle on what happened in the attack laid out above would have required analyzing the victim’s behavior across four data sources – packets, flows, AD logs and Domain Name System (DNS) logs. To claim that meaningful investigations can happen without the full range of data sources needed for real visibility is simply not true.
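
To see what underfitting and overfitting look like outside the security domain, here is a toy sketch on synthetic data – the sine-curve “trend,” noise level and polynomial degrees are arbitrary choices made only for illustration. A degree-1 fit misses the underlying trend entirely (high bias), while a very high-degree fit chases the noise and does worse on held-out data (high variance).

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth underlying trend (one period of a sine curve)
def sample(n):
    x = np.sort(rng.uniform(0, 1, n))
    return x, np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

x_train, y_train = sample(30)
x_test, y_test = sample(30)

def errors(degree):
    """Fit a polynomial of the given degree to the training data and
    return (training MSE, held-out MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return mse(x_train, y_train), mse(x_test, y_test)

for degree in (1, 4, 12):
    train_mse, test_mse = errors(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, held-out MSE {test_mse:.3f}")

# Typical outcome: degree 1 underfits (high error everywhere: high bias, low
# variance); degree 12 overfits (low training error, worse held-out error:
# low bias, high variance); a moderate degree sits in between.
```

The same trade-off plays out with detection models: too simple, and every deviation raises the same flood of alerts; too complex, and the model memorizes yesterday’s traffic instead of capturing the attacker’s behavior.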

Given these challenges, is an effective solution for detecting advanced attacks on the inside of the network feasible? The simple answer is yes. Analytics can play a pivotal role in shedding light on threats and attacks, and in mitigating the pain points above. But organizations must stop trying to detect these attacks automatically and in real time. Doing so means raising alerts for the variances seen at every stage of the kill chain, which compounds the alert white-noise problem for analysts rather than mitigating it.

Gaining insight requires complex tracking and analysis of multiple weak signals applied to diverse data sources and attributed to a specific user over weeks and months, just to comprehend what might have happened and who the compromised user might have been. Not unlike the fox.
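
As a rough sketch of what that kind of long-horizon, per-user accumulation could look like, the snippet below scores individual weak signals, attributes them to a user, and surfaces only those users whose combined score over a multi-week window crosses a threshold. The signal names, weights, window and threshold are all hypothetical, chosen only to illustrate the bookkeeping.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical weights for individual weak signals; none is alert-worthy alone.
SIGNAL_WEIGHTS = {
    "anomalous_download": 2.0,       # from ingress/egress payload analysis
    "dga_like_dns_query": 3.0,       # from DNS log analysis
    "unusual_ad_logon": 2.5,         # from AD log analysis
    "new_rdp_peer": 1.5,             # from NetFlow analysis
    "large_outbound_transfer": 3.0,  # from egress traffic analysis
}

def accumulate_user_scores(events, window_days=60, now=None):
    """Sum weighted weak signals per user over a long look-back window.
    `events` is an iterable of (timestamp, user, signal_name) tuples."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    scores = defaultdict(float)
    for ts, user, signal in events:
        if ts >= cutoff and signal in SIGNAL_WEIGHTS:
            scores[user] += SIGNAL_WEIGHTS[signal]
    return scores

def top_risk_users(events, threshold=6.0, **kwargs):
    """Surface only users whose accumulated score crosses the threshold,
    rather than alerting on every individual weak signal."""
    scores = accumulate_user_scores(events, **kwargs)
    return sorted(((u, s) for u, s in scores.items() if s >= threshold),
                  key=lambda item: item[1], reverse=True)

# Example: three weak signals attributed to the same user over several weeks
now = datetime(2024, 1, 31)
events = [
    (now - timedelta(days=40), "alice", "anomalous_download"),
    (now - timedelta(days=33), "alice", "dga_like_dns_query"),
    (now - timedelta(days=5),  "alice", "unusual_ad_logon"),
    (now - timedelta(days=2),  "bob",   "new_rdp_peer"),
]
print(top_risk_users(events, now=now))  # -> [('alice', 7.5)]
```

No single event in the example would justify an alert on its own; it is the attribution of several weak signals to one user, across data sources and across weeks, that makes the pattern visible.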

Nate Silver, in describing FiveThirtyEight.com at its launch, alluded to how his data journalism site intends to be more like the fox, taking a pluralistic approach to contribute to our understanding of the news. We believe security analytics is no different: it also needs to be more fox-like in its approach. Security analytics is not about making bold proclamations based on spurious extrapolations from limited data sets; it needs to take a more holistic approach to advance the understanding of security analysts – whether in support of their incident investigations, alert prioritization or compromised-user detection needs.
