The CyberWire Daily Podcast 4.27.21
Ep 1320 | 4.27.21

The FBI and CISA take a look at the SVR, and offer advice for potential targets. Openness and information warfare. OPSEC and privacy. Babuk hits DC police. Social engineering notes.

Transcript

Dave Bittner: The FBI and CISA detail SVR cyberactivities. Nine U.S. Combatant Commands see declassification as an important tool in information warfare. A convergence of OPSEC and privacy. Apple fixes a significant Gatekeeper bypass flaw. Babuk ransomware hits D.C. police. A new twist in credential harvesting. Ben Yelin considers the FTC's stance on racially biased algorithms. Our guest, Tony Howlett from SecureLink, tracks the evolution of threat hunting. And that was no hack; it was just a careless tweet.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, April 27, 2021. 

Dave Bittner: The U.S. FBI and CISA, the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, have released a joint description of trends in SVR cyberactivities, summarizing the current state of the Russian Foreign Intelligence Service's operations against U.S. and allied networks. In 2018, like everyone else, the SVR decided the future was in the cloud, and it's been operating against targets there ever since. 

Dave Bittner: The service makes heavy use of false identities and cryptocurrencies in putting its campaign infrastructure in place. Quote, "these false identities are usually supported by low-reputation infrastructure, including temporary email accounts and temporary Voice Over IP telephone numbers," end quote. The SVR also uses open source or commercial tools, notably Mimikatz and Cobalt Strike, in its operations. 

Dave Bittner: There are some potentially confusing elements in the report, especially in its allusions to the threat actor's presumptive organization chart and its track record. Not everything mentioned in the track record, for example, flowed through into the SolarWinds supply chain compromise effort. 

Dave Bittner: But the specific recommendations in the document are worth thinking about. The problem with supply chain compromises is the way in which they can turn trusted resources against targeted organizations. The Bureau and CISA recommend auditing log files to identify attempts to access privileged certificates and the creation of fake identity providers; deploying software to identify suspicious behavior on systems, including the execution of encoded PowerShell; deploying endpoint protection systems with the ability to monitor for behavioral indicators of compromise; using available public resources to identify credential abuse within cloud environments; and, finally, configuring authentication mechanisms to confirm certain user activities on systems, including registering new devices. 
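
To make one of those recommendations concrete, here is a minimal sketch of what a check for encoded PowerShell in an exported command-line log might look like. It's purely illustrative and not something the advisory prescribes; the log path, log format, and regular expression are assumptions made up for the example.

```python
# Illustrative sketch only: flag encoded PowerShell invocations in an exported
# command-line log. The log path and format are assumptions for the example.
import base64
import re

ENCODED_FLAG = re.compile(
    r"powershell(?:\.exe)?.*?-e(?:nc(?:odedcommand)?)?\s+([A-Za-z0-9+/=]+)",
    re.IGNORECASE,
)

def find_encoded_powershell(log_path: str):
    """Yield (line_number, decoded_command) for suspicious log entries."""
    with open(log_path, errors="replace") as log:
        for number, line in enumerate(log, start=1):
            match = ENCODED_FLAG.search(line)
            if not match:
                continue
            try:
                # -EncodedCommand arguments are base64 of UTF-16LE text.
                decoded = base64.b64decode(match.group(1)).decode("utf-16-le", errors="replace")
            except Exception:
                decoded = "<could not decode>"
            yield number, decoded

if __name__ == "__main__":
    for line_no, command in find_encoded_powershell("commandline_audit.log"):
        print(f"line {line_no}: {command}")
```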

Dave Bittner: A memo to the Office of the Director of National Intelligence from nine of the 11 U.S. Combatant Commanders - U.S. Central Command and U.S. Cyber Command didn't sign - communicates a sense that more declassification would significantly help U.S. efforts to counter hostile information campaigns. These are often, though not exclusively, disinformation efforts, and the memo is thought to express concern that the U.S. is losing an information war and that excessive secrecy and overclassification are an important reason why. 

Dave Bittner: Politico, which says it's seen a copy of the memo, quotes it in part as saying, "we request this help to better enable the U.S. and, by extension, its allies and partners to win without fighting, to fight now in so-called gray zones and to supply ammunition in the ongoing war of narrative. Unfortunately, we continue to miss opportunities to clarify truth, counter distortions, puncture false narratives and influence events in time to make a difference," end quote. 

Dave Bittner: The 11 U.S. Combatant Commands, which succeeded the old unified and specified commands some of the grayheads in our audience may remember, are major joint commands that have either a regional or a functional focus. The regional commands are Africa Command, Central Command - covering the Middle East - European Command, Indo-Pacific Command, Northern Command - covering North America - and Southern Command - focused on South America. The functional commands are Cyber Command, Transportation Command, Special Operations Command, Space Command and Strategic Command. 

Dave Bittner: The Wall Street Journal describes the way in which commercially collected and sold smartphone geolocation data are coming to be recognized as a serious OPSEC problem. It's a case in which the interests of operations security and privacy would appear to coincide. 

Dave Bittner: The U.S. Department of Defense has sought to crack down on the ways in which its personnel interact with the internet, but much personal data, especially geolocation information, is so pervasively collected that such measures have had, at best, debatable success. Policymakers are looking at the problem, and it seems possible that such concerns may add impetus to congressional privacy legislation. 

Dave Bittner: Apple yesterday fixed a vulnerability in its Gatekeeper Notarization process, The Record and others report. The flaw, TechCrunch says, had been quietly exploited in the wild since January to distribute the Shlayer Trojan. 

Dave Bittner: Researcher Cedric Owens, who discovered and reported the Gatekeeper bypass bug, described the technique as one in which, quote, "a script is placed in the Contents/MacOS/ directory instead of a Mach-O." Since scripts aren't checked by Gatekeeper, this is a way in which malware can falsely present itself to the system as notarized - that is, checked and verified as trusted. Researcher Patrick Wardle confirmed Owens' conclusions. Apple, as we said, fixed the problem Monday. 
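
For readers curious about the mechanics, here's a minimal sketch of the distinction the bypass turned on: whether an app bundle's main executable under Contents/MacOS/ is a Mach-O binary or a script. This is an illustrative triage check, not Apple's patch or Owens' proof of concept, and the bundle path is an assumption for the example.

```python
# Illustrative sketch only: report whether each file in an app bundle's
# Contents/MacOS/ directory looks like a Mach-O binary or a script.
import os
import sys

MACHO_MAGICS = {
    b"\xfe\xed\xfa\xce", b"\xce\xfa\xed\xfe",  # 32-bit Mach-O, both byte orders
    b"\xfe\xed\xfa\xcf", b"\xcf\xfa\xed\xfe",  # 64-bit Mach-O, both byte orders
    b"\xca\xfe\xba\xbe", b"\xbe\xba\xfe\xca",  # fat/universal binaries
}

def classify_bundle_executables(bundle_path: str):
    """Print a rough classification of each item in Contents/MacOS/."""
    macos_dir = os.path.join(bundle_path, "Contents", "MacOS")
    for name in os.listdir(macos_dir):
        path = os.path.join(macos_dir, name)
        if not os.path.isfile(path):
            continue
        with open(path, "rb") as handle:
            header = handle.read(4)
        if header in MACHO_MAGICS:
            kind = "Mach-O binary"
        elif header[:2] == b"#!":
            kind = "script (shebang) - worth a closer look"
        else:
            kind = "unknown"
        print(f"{name}: {kind}")

if __name__ == "__main__":
    classify_bundle_executables(sys.argv[1])  # e.g. /Applications/Example.app
```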

Dave Bittner: The Babuk ransomware gang has hit the Washington, D.C., Metropolitan Police, StateScoop reports, and it's threatened to release 250 gigabytes of sensitive files. The Record has screenshots of the dumpsite. 

Dave Bittner: The attack represents a bit of a departure for the Babuk gang, which hitherto hasn't shown signs of making it a practice to hit local government organizations. But, of course, that's a matter of taste - a little bit - habit - a little more - and, above all, a judgment about potential return on investment. Babuk, like other criminal groups, will go where its cost-benefit analysis takes it. In this case, they were drawn to the D.C. police. 

Dave Bittner: Security firm Avanan has noticed an interesting twist on a familiar social engineering gambit. The crooks ask the victims to explain why they - the crooks - have apparently paid them - the victims. Are you trying to scam us, victims? Of course, they ask you to log in to your PayPal account to help them track down the error. And then, of course, they'll harvest your credentials. 

Dave Bittner: So it's a bit different, but the grammar and usage in the come-on are pretty bad. Following conventional usage is important. If the message can't get it right with a relatively convincing appearance of native-speaking proficiency, it's best ignored. 

Dave Bittner: And finally, hey, everybody, here's a tip - Twitter is not a search engine. Somebody at U.S. Special Operations Command apparently mistook it for one or maybe just had a confusing number of windows open or was in a coffee-deprived performative state or something like that. That somebody, whoever it may have been, tweeted out a baffling "Afghanistan, Islamic State" on Saturday. 

Dave Bittner: In truth, we all make mistakes, even U.S. Special Operations Command, which Task & Purpose points out didn't have its social media accounts hacked, as it initially believed and said they had been. It was just operator headspace that induced the Twitter mishap. Quote, "after review, it was determined our Twitter account was not hacked and a social media administrator inadvertently tweeted the words while conducting a search for current topical events. We are reviewing our internal processes to refine our social media practices. No security breach took place, and we apologize for any confusion this may have caused," end quote. 

Dave Bittner: So, as you were, everybody. 

Dave Bittner: The SolarWinds supply chain attack has resulted in many organizations taking a closer look at their efforts when it comes to threat hunting, with many advocating it become a standard ingredient in the security cocktail. Tony Howlett is CISO at SecureLink, and I checked in with him for his thoughts on threat hunting. 

Tony Howlett: So, yeah, threat hunting is a pretty new discipline. It hasn't really even been considered a discipline until recently, when someone gave it a name. Honestly, I wish I'd gotten to name it 'cause I don't think threat hunting accurately describes it. It sounds more like you're searching for something out there in the wild, when really what threat hunting describes is searching for threats within your network and your systems, either threats that have happened in the past or ones that may be ongoing, where you have an intruder. 

Tony Howlett: And, you know, in the past, again, this was done informally by the system administrators or maybe a security person combing through logs, often kind of in the name of forensics after something bad happened. 

Tony Howlett: But what we're trying to do with threat hunting is, let's do this before something bad happens. So maybe the intruder has just gotten in and not really escalated privileges yet, not stolen anything or accessed anything sensitive. Or, you know, in the case of one that's in the past, we can hopefully find out what they did and take actions before your name appears in the press, you know, like it did with SolarWinds and some of those victims where they found out from either the FBI or the news. That's never a good thing. 

Tony Howlett: So the idea there is to catch the threat while it's happening after your defenses have been breached or maybe perhaps deal with a threat in the past before it becomes public or before it can really damage you. 

Dave Bittner: If you had the opportunity to rename it, what name would you choose? 

Tony Howlett: Oh, you know, I should've thought of that before I put that forth. 

Dave Bittner: (Laughter) It's OK. I didn't mean to put you on the spot. 

Tony Howlett: No, it's a good thing to think about. Gosh, you know, perhaps indications-of-breach hunting, which is not a very - it doesn't roll off the tongue. But that's really what we're looking for - indicators of compromise, or IOCs. Can we find certain things - sort of the breadcrumbs or the trails that the thief left behind? Oh, he left the bottle of milk on the counter would be the obvious one or, more accurately, oh, there are some footprints in my carpet that don't look like my shoes, and things like this. Almost all attackers, even the best ones, might leave some trails behind, and that's what we look for when we're doing threat hunting. 
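
As a concrete illustration of what hunting for those breadcrumbs can look like in its simplest form, here is a minimal sketch that sweeps log lines for known indicators of compromise. It is not Mr. Howlett's or SecureLink's tooling; the indicator list and log path are assumptions made up for the example.

```python
# Illustrative sketch only: sweep a log file for known indicators of compromise.
KNOWN_BAD_INDICATORS = {
    "198.51.100.23",                     # example-only IP address
    "evil-updates.example.com",          # example-only domain
    "44d88612fea8a8f36de82e1278abb02f",  # example-only file hash
}

def hunt(log_path: str):
    """Print every log line that contains a known indicator of compromise."""
    with open(log_path, errors="replace") as log:
        for number, line in enumerate(log, start=1):
            hits = [ioc for ioc in KNOWN_BAD_INDICATORS if ioc in line]
            if hits:
                print(f"line {number}: matched {hits}: {line.strip()}")

if __name__ == "__main__":
    hunt("proxy.log")  # the log path is an assumption for the example
```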

Dave Bittner: Is it fair to say that in the aftermath of the SolarWinds breach that there's been a renewed spotlight put on threat hunting? 

Tony Howlett: Yes, big time. Again, this has been kind of a sleepy discipline, I think, only practiced at, you know, really large enterprises that could afford the fees - people who are really doing, you know, all the stages of the highest-end evolution of an information security program. But now - you had 18,000 SolarWinds customers that downloaded that infected patch. Not all of those are going to be infected or attacked, but it's possible. So if I'm one of those 18,000, I'm definitely doing it. 

Tony Howlett: But it doesn't stop there - right? - because there were some very large vendors, like Microsoft, Cisco, Time Warner, who have seen indications of compromise. And if you're a customer of theirs - most of us are - if you (inaudible) those three, then it's possible that they got compromised, and therefore they might be trying to compromise you. So it's not just that you're a customer of SolarWinds. Are your vendors or your third parties that service you - were they a customer? 

Dave Bittner: Where do things stand in terms of people being able to engage with threat hunting? Like, what is the spectrum of dialing that in? Can you - obviously, you can have in-house folks. But, I mean, is it possible to get threat hunting as a service, if you will? 

Tony Howlett: Absolutely, yeah. It is being more formalized. And there are offerings from a lot of the major consulting companies. There are specialty companies set up just to do this. There is technology across the whole spectrum - right? - from small, you know, single-person specialty consultants up to large firms - you know, your KPMG and so forth. And obviously, the price range goes up from there, but so does what you get. So, yes, everything from an organization deciding, let's task one of our people or a group with doing this internally, all the way up to engaging a big, you know, consulting company and spending, you know, multiple - six figures or more to have it done. 

Dave Bittner: And what is your advice for organizations to calibrate this, to know, you know, how much of their resources should they target towards threat hunting? 

Tony Howlett: Interesting question. So again, you're going to look at what your risk factor is. If you're one of those 18,000 customers of SolarWinds, you should definitely be doing it, no question. And then on down the road, a lot of organizations, including ours, did sort of a SolarWinds risk assessment. Let's roll through the known victims of this and our vendors and see if they use the product and so on. That would also indicate, let's do this. 

Tony Howlett: And I think we're evolving towards this just becoming just part of the regimen. It's another layer of defense, if you will. You've got your outer walls, your inner bailey, and these are sort of like the tripwires inside your castle to know if they've actually gotten inside. 

Tony Howlett: So I think at some point, most medium-to-large organizations are going to have this as sort of that full circle, doing it on a regular basis and so forth. We're not there yet. It's starting to happen. I think this was an eye-opening event. But the value is going to become clearer as organizations start to say, you know, if we haven't been hacked, we probably will be, and we need to assume breach. And therefore - you know, hopefully we don't find anything, but if we do find something, then we can act on it. 

Dave Bittner: Yeah, it kind of reminds me of, you know, if you bring someone in to look in your house for, say, termites or something like that, get somebody poking around in the walls. It's better to find it early on than when your back deck collapses. 

Tony Howlett: That's a very good analogy. Thank you. I'll probably use that. 

Dave Bittner: (Laughter). 

Tony Howlett: But if you find a couple bugs and they've eaten through a board, hey, you treat it. You know, you deal with it. But, yeah, if suddenly sawdust starts falling from your roof, you're probably well into the infestation, and it's going to be very expensive. 

Tony Howlett: And, you know, most companies at this point have experienced some sort of incident, right? Maybe it's just a little infection that they've dealt with immediately. And if they haven't, they probably have and just don't realize it - or it's going to happen. 

Tony Howlett: So, you know, this posture, this activity will give you that, hey, I don't have termites this year. And again, you can't just test for termites this year and say, hey, my house is good for 30 years. You really want them coming back, you know, on a regular basis, because if you get it 10 years in, it's no good. 

Dave Bittner: Right. Where do you think we're heading with threat hunting in terms of - do you think it's going to be more integrated into the standard suite of tools? What's on the horizon? 

Tony Howlett: Yeah, I think so. I think what we're seeing right now is an evolution away from this sort of geeky practitioner in a dark room who's poring through the logs, you know, almost like a "Beautiful Mind" kind of person who can put all these things together. That's really beyond all but maybe "Rain Man" at this point, being able to look through these gigabytes of logs. 

Tony Howlett: So we're leveraging AI and ML, machine learning, to correlate things that the human mind can't see. We're working together more across organizations - you know, the ISACs, the information-sharing groups - 'cause we're stronger together than we are apart. And you are seeing some vendors offer it, mostly standalone right now, 'cause it's a slightly different operation than running a firewall and things like this. And any one device isn't going to have the data - all the data you need. It's the idea of aggregating log sources and connecting the dots. 

Tony Howlett: But I think you will see, especially some of the larger vendors who have a whole suite of products, maybe a Cisco or folks like that, who can bring an integrated suite - you know, what's the value of that? It's going to be a marketing term for a while, maybe. So people might want to put that on their product brochure and charge more for it. What's the value? You still want a person involved. Even with the AI, ML and all of those things, you've got to have someone coordinating the whole process, I think. 

Dave Bittner: That's Tony Howlett from SecureLink. 

Dave Bittner: And joining me once again is Ben Yelin. He's from the University of Maryland Center for Health and Homeland Security but also my co-host over on the "Caveat" podcast. Hello, Ben. 

Ben Yelin: Hello, Dave. 

Dave Bittner: I was drawn - my attention was drawn to a publication that the FTC put out, and it was brought to my attention via a tweet by a gentleman named Ryan Calo, who is a law professor. He's @rcalo on Twitter. And his tweet said, whoa, whoa, WHOA - all caps - an official FTC blog by a staff attorney... 

Ben Yelin: That's how you know it must be serious. 

Dave Bittner: Yeah, yeah - by a staff attorney, noting that the FTC Act prohibits unfair or deceptive practices. That would include the sale or use of, for example, racially biased algorithms. This is an interesting publication from the FTC, Ben. And I wanted to check in with you - what do you think this means in terms of the FTC signaling how they're going to approach people's use of AI? 

Ben Yelin: I think, as you said on our "Caveat" podcast, this is a shot across the bow to an industry to warn them that enforcement is coming on this question. So, you know, this is certainly not something I think we would've seen from the previous presidential administration. I think it reflects a change in policy and change in enforcement practices from the FTC. 

Ben Yelin: So basically, what they're saying here is we have enough evidence now, based on recent studies, to know that many seemingly benign algorithms are leading to discriminatory outcomes where certain people are being denied access or, you know, denied other benefits on the basis of their race, nationality, et cetera, because of these inherently biased algorithms. 

Ben Yelin: And what the FTC is saying is not only could you suffer reputational loss or, you know, potentially make your customer base angry, you might face enforcement actions. So you might face civil or criminal fines or some other civil or criminal sanctions. 

Ben Yelin: And they have the authority to issue those sanctions. They cite the FTC Act itself, which prohibits unfair or deceptive practices. And according to this blog post they posted on their website, that includes the sale or use of racially biased algorithms. 

Ben Yelin: And then there are things like the Fair Credit Reporting Act and the Equal Credit Opportunity Act, where if your algorithm leads to some sort of discriminatory outcome where people of a particular race are less likely to qualify for credit, then you are going to be subject to sanctions from the FTC for unfair trade practices. So I think this is really a groundbreaking post that we saw from the FTC and a real warning to the industry that they are intending to take racially biased algorithms seriously. 

Dave Bittner: Yeah. Yeah, it's interesting. I mean, you look at some of the titles of these paragraphs in this publication. It says, you know, don't exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Tell the truth about how you use data. Do more good than harm. That one seems pretty straightforward. 

Dave Bittner: But they point out that you could end up with basically the equivalent of, you know, digital redlining, you know, which was the old thing back in the - I suppose the '60s was when it was really a thing where, you know, neighborhoods would kind of carve out based on race who could... 

Ben Yelin: Absolutely. 

Dave Bittner: ...Live there or not. They're saying that could be an unintentional consequence of the way some of these algorithms work. And if your algorithm is doing that, the FTC could come after you. 

Ben Yelin: Yeah, and one thing that's important to note here is it does not require discriminatory intent. One of the things they're saying here is that it's up to companies to watch out for discriminatory outcomes. So even if you have the most benign intent possible - and, you know, many companies do - they talked about research presented at a conference back in 2020 showing that algorithms developed for purposes like health care resource allocation and advertising actually ended up being racially biased. So it is your responsibility as a company to evaluate whether your algorithm leads to discriminatory outcomes, even if you obviously had no intention of being discriminatory. 
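
For readers wondering what watching out for discriminatory outcomes can look like in practice, here is a minimal sketch of one common check: comparing selection rates across groups against the "four-fifths" rule of thumb. The FTC post does not prescribe this or any other particular test; the sample data and the 0.8 threshold are assumptions made up for the illustration.

```python
# Illustrative sketch only: a simple disparate-impact check on model outcomes.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, approved: bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, was_approved in records:
        total[group] += 1
        approved[group] += int(was_approved)
    return {group: approved[group] / total[group] for group in total}

def disparate_impact_flags(records, threshold=0.8):
    """Flag groups whose approval rate is below threshold * the highest group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(selection_rates(sample))         # roughly {'A': 0.67, 'B': 0.33}
    print(disparate_impact_flags(sample))  # group B flagged under the four-fifths rule
```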

Dave Bittner: Well, it'll be interesting to see where this goes. Again, this is over on the FTC's website. It's titled "Aiming for Truth, Fairness and Equity in Your Company's Use of AI." It's written by Elisa Jillson, who is, I believe, an attorney at the FTC. Interesting stuff. Ben Yelin, thanks for joining us and helping make this clear. 

Ben Yelin: Absolutely. Thank you, Dave. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.