Cyber Threat Intelligence: The Firehose of Noise and How We Got Here
Threat intelligence feeds have become popular, and a company's ability to track threats outside its own environment is better than ever. With these improvements, though, has come an increasing demand on security professionals to select and manage the right combination of tools to achieve their desired outcomes.
Here is a brief look at the history of the cyber threat intelligence industry, and where we might go from here.
Turning on the Firehose
When organizations first began setting up security operations centers, they typically did so by collecting a patchwork of data from host-based monitoring applications (like anti-virus), logs from client-facing services, and network appliances. Though these setups were effective at detecting previously catalogued malware and unusual network activity, it became clear that even moderately sophisticated attackers could avoid detection.
The indicators of compromise (IOCs) these tools produced were not routinely shared between security vendors or enterprise organizations, so attackers could move from one target to the next without worrying about being recognized. To combat this, defenders needed access to a continuously updated stream of host and network IOCs.
Thus, the threat intelligence feed was born.
Threat intelligence feeds arose from a surge of data that security companies began collecting as monitoring became standard in home and enterprise environments. That supply fit perfectly with the demand for up-to-date information about IOCs and with the rise of large-scale search and analysis tools like Splunk and the ELK stack.
The feeds could be integrated into a system that regularly monitored network traffic, files on hosts, and user activity by writing simple code to access feed APIs. The solution to network defense appeared to be straightforward. Based on the belief that more data meant a wider net and thus a better system, CISOs began ingesting feeds and throwing the information into their monitoring software.
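The "simple code" in question often amounted to little more than pulling indicators from a feed endpoint and loading them into the monitoring pipeline. A minimal sketch of that ingestion step, using an invented JSON feed format (real feeds vary widely in structure and field names):

```python
import json

# Hypothetical feed payload -- the structure and field names here are
# illustrative, not those of any real provider.
FEED_PAYLOAD = """
[
  {"type": "ip", "value": "203.0.113.7", "first_seen": "2019-05-01"},
  {"type": "domain", "value": "bad.example.net", "first_seen": "2019-05-02"},
  {"type": "sha256", "value": "9f86d081884c7d65", "first_seen": "2019-05-02"}
]
"""

def ingest_feed(payload: str) -> dict:
    """Group feed indicators by type so a monitor can match against them."""
    iocs = {}
    for entry in json.loads(payload):
        iocs.setdefault(entry["type"], set()).add(entry["value"])
    return iocs

watchlist = ingest_feed(FEED_PAYLOAD)
print("203.0.113.7" in watchlist["ip"])  # an outbound connection to this IP would alert
```

With a watchlist like this wired into the SIEM, any observed connection, resolved domain, or file hash could be checked against the feed in a single lookup, which is exactly why the approach looked so straightforward at first.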
The feeds certainly worked as advertised and provided data: a lot of it. Tens of thousands of data points a day, loosely categorized, sometimes inaccurate, and often irrelevant. It wasn't long before front-line practitioners noticed major problems.
First, even moderately sophisticated attackers adapted to this monitoring regime by developing techniques to evade basic detection mechanisms. To avoid detection based on simple IOCs, it is easy for an attacker to change their server address or modify their code every time they target a new organization.
Simple IOCs such as IP addresses, domains, and file hashes are necessary, but not sufficient. Sophisticated host and network monitoring solutions can perform some level of behavioral monitoring and context-aware detection, but even these methods can be subverted through observation and mimicry.
It also became clear that wrangling the data and making sense of it was more difficult than anticipated. Security teams still spend an excessive amount of time automating threat information feeds and parsing data in order to operationalize information within their security and event management systems.
Some feeds are already unwieldy due to formatting differences, lack of integration, ambiguous relevance, and the time-sensitive nature of the data. Additionally, the wide nets cast by multiple data feeds began to generate more and more false positives, especially when IOCs like server addresses were recycled for legitimate purposes.
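The wrangling problem is easy to underestimate: one provider ships CSV, another JSON, each with its own field names and freshness semantics. A toy normalizer, with made-up field names, showing the kind of glue code teams end up writing to map two such formats onto one schema and drop stale indicators:

```python
import csv
import io
import json
from datetime import date

# Two hypothetical feed formats describing the same kind of data.
CSV_FEED = (
    "indicator,kind,last_seen\n"
    "198.51.100.9,ip,2019-06-01\n"
    "old.example.org,domain,2018-01-15\n"
)
JSON_FEED = '[{"ioc": "198.51.100.9", "ioc_type": "ip", "updated": "2019-06-03"}]'

def normalize(raw_csv, raw_json, today, max_age_days=90):
    """Merge both feeds into (kind, value) pairs, deduped and age-filtered."""
    records = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        records.append((row["kind"], row["indicator"], date.fromisoformat(row["last_seen"])))
    for entry in json.loads(raw_json):
        records.append((entry["ioc_type"], entry["ioc"], date.fromisoformat(entry["updated"])))

    # Dedupe, keeping the most recent sighting of each indicator.
    latest = {}
    for kind, value, seen in records:
        key = (kind, value)
        if key not in latest or seen > latest[key]:
            latest[key] = seen

    # Drop anything not seen recently -- stale IOCs are a major false-positive source.
    return {key for key, seen in latest.items() if (today - seen).days <= max_age_days}

active = normalize(CSV_FEED, JSON_FEED, today=date(2019, 6, 10))
print(active)  # the stale 2018 domain is filtered out
```

Even this toy version hints at the judgment calls involved: how long an indicator stays "fresh", and which sighting wins when feeds disagree, are exactly the decisions that make operationalizing feeds so time-consuming.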
Threat hunters had to deal with hundreds of false alarms a day, a task that quickly numbed the senses and made it even easier to overlook real attacks that used new techniques. With a multitude of potential attack vectors and a high rate of false positives, security professionals needed to know how and where to direct their attention.
In order to stay one step ahead of threats, organizations needed to anticipate malevolent behavior and be on the lookout for attacks before they happened. This could only be achieved by understanding their organization in the context of the world in general and the internet in particular.
To fulfill this need, a second generation of threat intelligence feeds arose. Security teams began integrating information about APTs collected by major research and cybersecurity firms.
Making sense of these data sets required an understanding of more than the traditional technical sources. Social media, breach data, external network telemetry, and geodata are all finding a place within the toolset of advanced threat programs. The prevalence of sophisticated attackers and rich data sources has also moved intelligence programs into an entirely different realm of importance.
Widening The Spray
While the technical side of the house was dealing with the challenge of making data feeds palatable, strategic planners began to learn that threat intelligence could be used to inform business decisions. Varied threats like disinformation, platform abuse, brand dilution, and strategic breach campaigns became more prevalent.
More and more of these threats lived far outside the traditional environment of analysts investigating potential intrusions on their dashboards. Advanced programs began adding new groups that looked more and more like marketing and business intelligence units.
This posed another difficult problem: who would be responsible for dealing with these existential threats and how could an accurate picture of the world be gleaned from all of this knowledge?
To administer an effective threat management program, plans made at the strategic, tactical, and technical levels would need to work in concert with one another.
How? Let’s look at an example: third-party risk management.
Beyond the primary risks associated with a direct compromise of an organization’s assets, there is also an indirect risk associated with the compromise of a vendor or partner. Conversely, a company may find itself targeted by a threat actor whose primary goal is compromising one of its clients, with the company serving only as a means to that ultimate objective.
In both of these scenarios, the organization is exposed to a risk that can be managed strategically by breaking off the partnership, tactically by segregating interactions with the partner organization, or technically by allocating more assets to the virtual border between the two organizations.
Choosing the best option (or mix of options) requires decision makers to have accurate intelligence on both the technical profile of the threat actor and their ultimate goal. The next generation of threat intelligence products certainly helps, but difficult issues remain unresolved.
Still Looking For a Straw
The current generation of solutions only solves one of the big problems within threat programs. These additional intelligence feeds augment the data that analysts have at their fingertips, potentially allowing the organization to anticipate attacks before they happen.
However, they do not solve the second problem of tractability. If a security program is already drowning in noise, adding more data to the mix is just going to confuse things even more.
If the analysts don’t have experience or training on how to use the multitude of intelligence sources, they won’t be able to draw the insights needed to make the most of their tools.
Next generation feeds with a high level of volume and specificity are only useful if there exists a high level of technical expertise, analytical capability, and situational awareness on the part of the organization purchasing them.
Unfortunately, very few organizations can afford security personnel with that level of expertise. Instead, they spend millions of dollars on technical solutions that replace practitioners instead of facilitating them. Human analysis will always be necessary, but it will also always be expensive and in short supply.
Replacing the Firehose with a Straw
The challenge, of course, is for the industry to recognize that less can be more. Many threat intelligence programs, and thus feed providers, continue to be measured against metrics that align with “how much” rather than with true impact.
One solution to this problem is a feed or service which packages, filters, and enriches existing data sources by using analysts familiar with the needs of client organizations. In essence, this provider would manage intelligence by providing a layer of automation and human analysis between data feeds and security teams. This would guarantee that customers are only getting information which is relevant and focused, effectively replacing the firehose with a straw.
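One way to picture such a managed layer is as a relevance filter: automation scores each indicator against a per-client profile, and only what clears the bar reaches the client's SIEM, with analysts tuning the profile and reviewing edge cases. A minimal sketch, in which every field name and the scoring formula are invented for illustration:

```python
# A toy per-client profile: the sectors and technologies this client cares about.
CLIENT_PROFILE = {"sectors": {"finance"}, "stack": {"windows", "office365"}}

# A hypothetical enriched feed, where each indicator carries context about
# who and what the associated actor targets.
RAW_FEED = [
    {"value": "198.51.100.22", "sectors": {"finance"}, "targets": {"office365"}, "confidence": 0.9},
    {"value": "203.0.113.5", "sectors": {"energy"}, "targets": {"scada"}, "confidence": 0.8},
    {"value": "192.0.2.14", "sectors": {"finance", "retail"}, "targets": {"linux"}, "confidence": 0.4},
]

def relevance(indicator, profile):
    """Crude score: feed confidence weighted by overlap with the client profile."""
    sector_hit = bool(indicator["sectors"] & profile["sectors"])
    stack_hit = bool(indicator["targets"] & profile["stack"])
    return indicator["confidence"] * (sector_hit + stack_hit) / 2

def filter_feed(feed, profile, threshold=0.4):
    """Forward only the indicators relevant enough to be worth an analyst's time."""
    return [ind["value"] for ind in feed if relevance(ind, profile) >= threshold]

print(filter_feed(RAW_FEED, CLIENT_PROFILE))  # only the finance/office365 indicator survives
```

The interesting design decision is where the threshold sits: set it too low and the firehose is back; set it too high and real threats are filtered out, which is precisely why a human analyst familiar with the client belongs in the loop.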