For cybersecurity success, double down on developing better detectors

The basis of threat detection in security operations centres over the past decade has predominantly been rules, commonly defined by SIEM vendors. These "rules" (also called alarms, alerts or use cases depending on the SIEM vendor, or, as we prefer to call them, "detectors") generate alerts that tell analysts there is a potential threat, and help to produce data that can unearth trends in what types of attacks are common at any given time. In theory, detectors should improve cybersecurity efforts. However, the sheer volume of alerts the average cybersecurity team has to deal with is often overwhelming.

In a recent Ovum study of cybersecurity challenges in the global banking sector, 37% of banks said they receive more than 200 000 alerts per day. For 17%, it was in excess of 300 000. This barrage of alerts is causing a "signal-to-noise ratio" problem. Inevitably, security professionals miss threats simply because there are too many alerts to attend to.

It's hard to quantify how many alerts are ignored. One US study by the Cloud Security Alliance found that 31.9% of security professionals surveyed ignored alerts because there are so many false positives. Local data is unavailable, but we know that many alerts are passing by unnoticed.

Most cybersecurity vendors offer the following answers to this barrage of alerts:

  • Orchestration – this essentially automates the response to alerts. Most basic and even some more complex responses can be automated with orchestration tools that integrate the various security tools. However, this automation normally takes significant effort, and the ROI is not always clear when weighed against the price tag of the tools and the DevOps effort required.
  • Artificial Intelligence and Machine Learning – the idea here is that alerts become more refined and better at detecting malicious activity given the large volume of data. In practice, this often exacerbates the problem by simply creating alerts on top of the alerts already generated by traditional methods.


Maybe the answer is simpler. We can improve our detectors' performance and discard ineffective detectors. This doesn't sound as exciting as AI and orchestration, but we have seen firsthand how effective it can be. Before we can improve the performance of detectors, we need a means to measure their current effectiveness.

I suggest the following four key attributes that could allow us to measure our detectors:

#1 Simplicity

Keep detector parameters as simple as possible – complexity doesn't always improve alerts, and in many cases it has the opposite effect. As soon as detectors are overly complex, they become difficult to investigate, expensive to run, or break entirely. There are almost always multiple ways to detect a particular type of malicious behaviour, and it may be a good idea to create a range of different detectors to identify that behaviour, but measuring each detector's simplicity will allow you to prioritise the simple ones. For example, to detect password spray attempts, we could tap into network traffic, apply some form of AI or ML to the captured traffic, and look for outliers that would indicate this type of malicious activity. Alternatively, and more simply, we could enable a specific audit level on Active Directory and look for a flood of authentication failures across different user names. Both detectors would work, but the latter, simpler approach would be my preferred method.
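The simpler AD-based approach can be sketched in a few lines. This is a minimal illustration, not a production detector: the event fields, window and threshold below are assumptions, standing in for failed-logon records (such as Windows Event ID 4625) pulled from a log source.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical flattened authentication-failure events; field names are
# illustrative, not a specific SIEM schema. One source fails logons for
# 30 different usernames in 30 seconds (a spray), another fails once.
events = [
    {"time": datetime(2023, 5, 1, 9, 0, s), "source_ip": "10.0.0.5", "username": f"user{s}"}
    for s in range(30)
] + [
    {"time": datetime(2023, 5, 1, 9, 5, 0), "source_ip": "10.0.0.9", "username": "alice"},
]

WINDOW = timedelta(minutes=10)   # look-back window per source
THRESHOLD = 20                   # distinct usernames before we alert

def detect_password_spray(events, window=WINDOW, threshold=THRESHOLD):
    """Flag source IPs that fail logons for many *different* usernames
    within a short window -- the classic password-spray signature."""
    alerts = []
    by_source = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        by_source[e["source_ip"]].append(e)
    for src, evts in by_source.items():
        for e in evts:
            # Distinct usernames failing from this source inside the window.
            in_window = [x for x in evts if e["time"] - window <= x["time"] <= e["time"]]
            distinct = {x["username"] for x in in_window}
            if len(distinct) >= threshold:
                alerts.append({"source_ip": src, "distinct_users": len(distinct)})
                break  # one alert per source is enough
    return alerts

print(detect_password_spray(events))
```

The entire detector is a count of distinct usernames per source per window, which is exactly why it is easy to investigate: the alert carries everything an analyst needs to confirm or dismiss it.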

#2 Certainty

Certainty relates to how likely it is that a detector's alert represents actual malicious behaviour. Oftentimes, detectors will pick up anomalies that are not actually malicious but still require further manual investigation; if the activity is malicious, further details about the incident then have to be determined. This manual investigation is not always a bad thing, but it should be measured as a metric of the detector. If a detector is not producing consistently accurate alerts, it has to be tuned.
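One simple way to put a number on certainty is to record, after triage, what fraction of a detector's alerts turned out to be genuinely malicious: the detector's precision. A minimal sketch, with made-up triage outcomes and a hypothetical 50% tuning threshold:

```python
# Hypothetical triage outcomes for one detector over a review period;
# True means the alert turned out to be genuinely malicious.
outcomes = [True, False, False, True, False, False, False, True, False, False]

def certainty(outcomes):
    """Fraction of a detector's alerts confirmed malicious (its precision)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

score = certainty(outcomes)
print(f"certainty = {score:.0%}")   # 3 of 10 alerts were real here
if score < 0.5:                     # illustrative threshold, not a standard
    print("below 50% -- candidate for tuning")
```

Tracked per detector over time, this single ratio makes "is it producing consistently accurate alerts?" an objective question rather than a gut feeling.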

#3 Resilience

Can your detectors work in adaptive conditions? Organisations' constantly changing IT landscapes require that cyber defences are adaptable. These changes are often referred to as "environmental drift", where previously optimally-running processes suddenly underperform or stop working entirely. Keeping things simple certainly aids resilience, but there are other variables to consider. Attackers may be aware of common detection methods and will attempt to execute their attack without violating these rules, so how does the rule stand up to these evasive manoeuvres? Again, this is where simplicity may be on your side. Using the same example from above, the simpler detector, which relies only on AD security events, would be much more resilient than the detector that requires network taps, AI and complex analysis.

#4 Relevance

This is one of the more difficult attributes to measure; relevance is often subjective to the organisation. It also depends on multiple factors: How new or old is the attack the detector is designed to uncover? Is the attack relevant to the organisation? For example, if a detector is designed to identify an attack specifically against the Solaris operating system and an organisation does not have a single Solaris system, there is probably only limited value in that detector. The reason it's important to measure relevance is that we need to ensure our defences can detect attacks that are happening today and that may be successful in our environment.

Organisational complexity and the growing sophistication of cyber attackers make the job of the cybersecurity professional all the more difficult. Measuring detectors against these four attributes provides a useful starting point for assessing their effectiveness. Some detectors may not score highly on all four attributes, and that's fine. Knowing that a detector is highly resilient and that its underlying rules are not overly complex gives cybersecurity professionals clarity on how that detector can be used, and how it should be measured.
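To make the four attributes actionable, a simple scorecard can rank a detector portfolio and flag weak spots. The detector names and the 1-to-5 scores below are invented purely for illustration; the scale and the "low score" rule are assumptions, not a standard:

```python
# Illustrative scorecard: rate each detector 1-5 on the four attributes.
detectors = {
    "ad_auth_failure_flood": {"simplicity": 5, "certainty": 4, "resilience": 5, "relevance": 4},
    "network_ml_outlier":    {"simplicity": 2, "certainty": 3, "resilience": 2, "relevance": 4},
    "solaris_exploit_sig":   {"simplicity": 4, "certainty": 5, "resilience": 4, "relevance": 1},
}

# Rank by total score, and flag any detector scoring 1 on some attribute.
for name, scores in sorted(detectors.items(),
                           key=lambda kv: sum(kv[1].values()), reverse=True):
    total = sum(scores.values())
    flag = "  <- scores 1 on an attribute: review or discard" if min(scores.values()) <= 1 else ""
    print(f"{name}: {total}/20{flag}")
```

A detector can rank well overall and still deserve review: the Solaris signature above scores solidly on three attributes but fails on relevance, which is exactly the kind of detector the article suggests discarding.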

And that clarity can greatly improve the signal-to-noise ratio, reduce alert fatigue, and deliver greater efficiency in an organisation's overall cybersecurity efforts.
