Threat Intelligence: Use Cases, War Stories, and ROI
February 28, 2017.
By The CyberWire Staff
The ability to collect information notoriously outstrips the ability to analyze that information into intelligence. And once you have the intelligence, what, exactly, are you supposed to do with it? After all, you haven't developed it merely to gratify curiosity. So what are the use cases?
At RSA we spoke to a number of companies in the business of delivering threat intelligence, and, from their diverse perspectives, they agreed that their solutions augmented the capabilities of human analysts. They paid a great deal of attention to reducing false alarms, and to presenting intelligence in a perspicuous, easily understood interface. They also agreed on the general uses to which enterprises ought to put threat intelligence: it should serve, as ThreatConnect's Toni Gidwani put it, "blocking and tackling." CrowdStrike, famous for its work in attributing attacks, is nonetheless a company to a great extent concerned with advanced endpoint protection. Dan Larson, CrowdStrike's Director of Product Management, explained why they're interested in tracking threat actors: "The value of doing this lies in learning their motivation. That enables you to understand what they're after, and can suggest other targets."
Recorded Future's Levi Gundert agreed with the importance of blocking and tackling, but he stressed to us that threat intelligence is useful strategically, and that the contribution to blocking and tackling has to be useful. "There's not a whole lot of utility in just dumping data into a SIEM or a SOC, and then calling it a day." The real value, he argued, lies in helping an enterprise determine where the actual risk to its business lies. "The value question is front and center for a lot of people now. They're spending money on controls, but they have no idea if that's where they should spend money."
Familiarity with threat actors is important not so much so one can arrive at an attribution—that's a matter of interest to military forces and law enforcement agencies—but rather so an enterprise can gain the sort of insight into the adversary's tactics, techniques, and procedures that will enable it to stay ahead of the threat, protect its important assets, and better orchestrate incident response. Fidelis Cybersecurity's John Bambenek expressed it this way: "Our intelligence goes right into our product." They look for countermeasures to criminal tactics and for indicators of the presence of criminal infrastructure.
Intelligence-sharing among enterprises.
Familiarity with specific threat actors is also valuable because so many of those actors are repeat offenders, and they offend against targets that have a lot more in common than the targets themselves sometimes realize. Criminals reuse code as well as command-and-control infrastructure, TruSTAR CEO Paul Kurtz pointed out to us. "And you, the victim, aren't unique. There's a 65% chance that a given incident correlates with others that have been observed," he went on to add. "A couple of dozen operators create eighty percent of the trouble." Kurtz sees this as an important reason not only to develop threat intelligence, but to share it. If enterprises fail to do so, "we're making their [the criminals'] job easy, and our job harder."
Tracking threat actors in a high-profile case.
Threat actors are commonly divided into three principal categories: criminals, hacktivists, and nation-states. Disentangling these can be difficult, particularly as nation-states coopt organized criminal groups, manipulate or direct hacktivists, and cloak their own direct action in more-or-less plausible deniability. As Kurtz noted with gloomy realism, "To think there are more good guys than bad guys on the Internet is essentially a US-centric view of the world. In many other parts of the world, there's no distinction."
The US Department of Homeland Security has released a second report on "Grizzly Steppe," as 2016's campaign against US political targets has come to be called. TruSTAR ran its own analysis of the data disclosed in that report, and it found significant correlation between Grizzly Steppe and the criminal gang Carbanak. This isn't reason to conclude that the Russian government wasn't behind Grizzly Steppe, but it underscores the real possibility of misattribution in cases where criminal and intelligence activities overlap. "Attribution is a muddled mess when these guys start using the same infrastructure," Kurtz observed. The more we exchange data, he argued, the better we'll understand the threat actors.
We spoke with Fidelis Cybersecurity's John Bambenek, who said that the "ad hoc, informal line" between crime and espionage is often difficult to discern. Countries with cybercrime problems tend to see criminals trying to curry favor with law enforcement by working with espionage services.
He talked us through his company's involvement in the investigation of last year's Democratic National Committee (DNC) cyber incident. Once the DNC realized it had a problem, it had its law firm hire CrowdStrike to investigate DNC networks. CrowdStrike concluded that APT 29 (Cozy Bear) and APT 28 (Fancy Bear) were Russian intelligence operations mounted, respectively, by the FSB and the GRU. Fidelis was among the other firms CrowdStrike brought in to check and confirm their findings. The investigators found no equivalent evidence of high-level Russian activity in incidents either affecting the Illinois Board of Elections (they found commodity attack tools), or in attacks against various Republican targets (where they observed commodity phishing).
Bambenek sees little room for doubt that the FSB and GRU were behind the DNC hack. What little ambiguity investigators saw served mainly to afford the Russian government a bit of plausible deniability. Russian authorities wanted to be able to deny involvement publicly, but they also wanted that involvement to be clear enough to show other targets, especially targets in the former Soviet republics of the Near Abroad, that Russia held their assets at risk. Thus Bambenek suggests the operation against the DNC was probably more propaganda than election manipulation, "in essence, a demonstration of force."
The lesson Bambenek would have people draw from the DNC's experience is a familiar one: "If you've got something worth stealing, someone's going to steal it." But he also thinks that enterprises should get used to looking for subtle signs of deception.
How WikiLeaks got the DNC documents it released is unknown, but Bambenek thought it reasonable to suspect the Russian services were the ultimate source, probably passing the documents through intermediaries. There's no evidence that the compromised emails were altered, Bambenek told us, "and that kind of surprised me." In some ways this argues a lack of subtlety on the attackers' part—you'd expect alteration and manipulation in Western political dirty tricks campaigns, for example, but there seems to have been none of that in this case.
Absent the sort of universally adopted and generally observed digital Geneva Convention Microsoft and others called for at RSA ("It's not a bad idea, but how do you do that?" Bambenek commented), the best way forward seems to be improved international cooperation in law enforcement. And once an attack can be attributed to a nation-state, one can consider a proportional response that not only punishes, but deters future attacks. In the case of the DNC hacking, it's not entirely clear what such a response might be. Bambenek thought it unlikely (and probably undesirable) that the United States would retaliate by hitting elections in other countries. Financial sanctions seem more likely.
Making threat intelligence usable in fact as well as in principle.
Making effective use of intelligence is a matter of collecting and analyzing it in response to a well-thought-out set of questions that address a particular organization's requirements. Simply collecting threat data unsystematically can amount to nothing more than noise—chattering alarms, say—or shiny distractions—a magpie's nest of useless glittering junk.
Companies in the business of delivering threat intelligence are well aware of this. As LookingGlass CTO Allan Thomson noted, "We buy all this intelligence, a customer said, but what's the value?" He thinks the value comes from the sources of the data, how the data are refined, and how the information derived from them is used in the enterprise. "We track three to five thousand actors in the deep, dark web," Thomson said. Single indicators are insufficient—an IP address without context tells you little, for example. He claimed that LookingGlass and its Threat Gateway enjoy high rates of accuracy, sinkholing command-and-control servers in addition to tracking dark web activity. They use machine automation to collect and refine data, and then bring in the human analysts to call the gray areas.
You need to focus on your vulnerabilities, and what can threaten those, and use that self-understanding to shape the intelligence product. LookingGlass seeks to give its customers visibility, providing them situational awareness of their public exposure. They score that exposure, and deliver the score along with its supporting evidence in a transparent way.
ThreatQuotient's Jonathan Couch argued that it's important to begin with the customer's actual tactical use cases—"You don't want to be a self-licking ice cream cone for the CISO." In this context attribution can become important for what Couch called "intelligence pivoting." You need to understand the threat actor's overall campaign to understand its scope, and if you can then assign a campaign to a particular actor, you're able to look for other instances of those attacks, and for common tactics, techniques, and procedures.
ThreatQuotient offers an open, modular solution. They automate the analytical workflow to put the human in the loop at the point of decision. This is vital because a machine-to-machine system runs a significant risk of increasing the noise on the network. "There are three undesirable things on networks: noise, nuisance, and threats," Couch said.
TruSTAR's Kurtz observed that intelligence "sharing" has picked up some negative connotations, largely because of the unsystematic ways in which the sharing has been done. "People aren't seeing a return on it." He believes a good user interface with perspicuous representation of data is vital, as is the implementation of sound workflows. TruSTAR has seen some success in helping its customers integrate intelligence exchange into the response workflow.
ThreatConnect has just released a suite of features structured around automation. They enable customers to build automated workflows around analysis and playbooks. They too offer a platform for sharing intelligence, hosting various communities of interest, and they work with ISACs. According to Toni Gidwani, they can also ingest open-source and (if the customer desires) premium sources of intelligence. ThreatConnect uses a quadripartite "Diamond Model" as their method of intrusion analysis. This provides a framework for looking at an incident in terms of (1) capabilities, (2) infrastructure, (3) adversary, and (4) victim. Their solution provides customers with a single pane of glass on which to see the cyber intelligence picture.
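The Diamond Model's four vertices lend themselves to a simple data structure. The sketch below is illustrative only—the field names, the `pivot` helper, and the sample events are our own assumptions, not ThreatConnect's actual schema or API—but it shows how linking adversary, capability, infrastructure, and victim supports the kind of pivoting analysts use to expand a campaign:

```python
from dataclasses import dataclass

# Hypothetical sketch of a Diamond Model event: each intrusion event
# links the four vertices of the model. Field names are illustrative.
@dataclass(frozen=True)
class DiamondEvent:
    adversary: str       # who is behind the activity
    capability: str      # malware, exploit, or tool employed
    infrastructure: str  # C2 domains, IPs, or hosting used
    victim: str          # the targeted organization or asset

def pivot(events, **known):
    """Return events sharing a known vertex (e.g. infrastructure)--
    the 'pivoting' that ties separate incidents into one campaign."""
    return [e for e in events
            if all(getattr(e, k) == v for k, v in known.items())]

events = [
    DiamondEvent("ActorA", "phishing kit", "203.0.113.7", "Org1"),
    DiamondEvent("ActorA", "custom RAT", "203.0.113.7", "Org2"),
    DiamondEvent("ActorB", "commodity trojan", "198.51.100.9", "Org3"),
]

# Pivoting on shared infrastructure surfaces both ActorA events.
related = pivot(events, infrastructure="203.0.113.7")
print([e.victim for e in related])  # -> ['Org1', 'Org2']
```

Pivoting on any single vertex—an IP address, a piece of malware—then reading across the other three is what lets an analyst move from one indicator to the scope of a whole campaign.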
DarkLight, specialists in artificial intelligence, have deployed technology developed at Department of Energy facilities in Richland, Washington. As their CEO John Shearer explained, their goal is to augment human analytical capability. "You can replace a lot of low-level deductive reasoning," he said, but not, of course, the human analysts themselves. He thinks (as do most others in the industry) that there are too many false positives in our defenses, and DarkLight winnows these out by applying human-like analytical processes to alerts, then determining which alerts are worthy of attention by an actual human analyst.
Attribution to a threat actor is one DarkLight use case. The system uses ontologies to identify threat actors by fusing indicators of threats. The system can now ingest other ontologies and apply them to their baseline common knowledge graph, a capability Shearer sees as essential. The system offers an analytic fusion of disparate data sources. Analysts train the AI as they use it.
Recorded Future's Gundert sees threat intelligence as advancing the understanding of operational risk. In general, the industry needs better quantitative risk analysis. "We can have nice checklists for compliance, but that's not risk analysis." The operational use of threat intelligence lies in its ability to inform better controls.
Risk estimation is challenging, Gundert thinks, in part because, while there are plenty of data in our field, those data are neither as wide nor as accessible as they are in other fields. "When you get down to it, people know a lot, but they're not good native estimators." Recorded Future is putting together a large number of data sources to enable better quantitative risk estimation. They believe they're applying machine learning and natural language processing to threat data in a distinctive way that allows them to predict, as Gundert puts it, "indicators that may not be bad today, but that soon will be." They conclude it's a safe bet that ransomware will continue to expand (the criminal economics make sense). We asked them what threat seemed likely to disappear, and Gundert's colleague Allan Liska suggested that FakeAV was on the way out. "It is still around," he said, "but it is far less common than it used to be. The reasons for this are threefold: (1) To some extent, it has been supplanted by ransomware (in fact, some of the FakeAV groups have specifically migrated to ransomware). (2) Banks are getting better at identifying FakeAV transactions and blocking/reversing them. (3) Law enforcement is getting better at working with financial institutions, the security community, and other countries to recover stolen funds and arrest the actors. In other words, the potential rewards of running a FakeAV operation no longer outweigh the risks, and as long as that continues FakeAV will fade away completely."
So here are some points of consensus among the threat intelligence companies we spoke with. One benefit of the threat intelligence solutions all of them offer is the ability to collect, share, transfer, and preserve expertise. They all independently saw this aspect of their services as particularly valuable to customers who inevitably deal with employee turnover and the attendant perishability of institutional knowledge. They also generally agreed upon the need for the user to train the artificial intelligence in their products. And they emphasized that intelligence needs to be integrated into an enterprise's processes and workflows.
Note: this article was updated 2.27.17 to incorporate a discussion with Recorded Future.
Correction, 2.28.17: the article as originally posted erroneously said that DarkLight technology was developed in "Richmond, Washington." The location is now correctly given above as "Richland, Washington."