The CyberWire Daily Podcast 7.20.20
Ep 1133 | 7.20.20
Following the spoor of the Twitter hackers, a couple of whom seem to be talking to the press. Marketing databases and intelligence collection. TikTok ban? Hacking biomedical research.
Transcript

Dave Bittner: Hey, everybody. It's Dave. And I've got another exciting announcement for you. We've asked a select group of experienced cybersecurity experts to join us and share their unique experiences and perspectives on various topics and concepts in the industry. We're calling this group the CyberWire Hash Table, and you'll hear from these amazing minds on shows like "CSO Perspectives" and the CyberWire Daily Podcast, along with our own CSO, chief analyst and senior fellow, Rick Howard. Learn more about the Hash Table members at thecyberwire.com/hashtable. That's thecyberwire.com/hashtable. Thanks.

Dave Bittner: Notes on last week's Twitter hack and on the allure of original gangster and other celebrity usernames. Using marketing databases for intelligence collection. The U.S. government mulls a ban on TikTok. Johannes Ullrich on Google Cloud Storage becoming a more popular phishing platform. Our own Rick Howard on security operations centers and a preview of the latest episode of his "CSO Perspectives" podcast. And more reaction to alleged Russian and Chinese attempts to hack COVID-19 biomedical research. 

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Monday, July 20, 2020. 

Dave Bittner: Last week's Twitter hack remains under investigation. Some personal data were taken during last week's Twitter hack, according to The Wall Street Journal. The hackers were able to change the passwords on 45 of the accounts they compromised, which, of course, opened the possibility that they may have been able to access users' information. Up to eight of the 130 accounts affected are known to have suffered loss of personal information. 

Dave Bittner: No one has so far fully and explicitly connected the handles of those behind the Twitter hack with natural persons. The New York Times followed the incident from chatter on Discord and concluded that the hack was the work of three people, probably young, at least two of whom shared an interest in collecting interesting Twitter accounts. 

Dave Bittner: Two of them, one called "ever so anxious" and the other "lol," appear to have been involved in Bitcoin scams before. Both were also well-known regulars on ogusers.com, a site frequented by those interested in acquiring short, so-called original gangster usernames. OG names are regarded as having special cachet because they're normally associated with early adopters of a new platform. The other sort of username that's interesting to what The Wall Street Journal calls a subculture is, of course, the celebrity username. 

Dave Bittner: But neither "ever so anxious" nor "lol" was the original hacker. The apparent originator of the hack, one "Kirk," contacted "lol" with the message, yoo, bro. I work at Twitter. Don't show this to anyone, seriously. What he shared was a demonstration of his ability to take control of coveted Twitter accounts. He enlisted "lol" and "ever so anxious" as middlemen to sell hijacked accounts. "Kirk" is thought to have obtained access to a Twitter Slack channel, where, Mashable explains, he found credentials posted. The hackers progressed to a celebrity Bitcoin scam. 

Dave Bittner: How he got that far is unclear. Twitter hasn't elaborated beyond saying Saturday, quote, "the attackers successfully manipulated a small number of employees and used their credentials to access Twitter's internal systems, including getting through our two-factor protections," end quote. 

Dave Bittner: So apologies are apparently due. "PlugWalkJoe," whom KrebsOnSecurity identified as the moving intelligence behind last week's Twitter hack - his involvement was tangential. He was a customer. He acquired the Twitter account @6 from one of the hackers, "ever so anxious." But that, The New York Times concluded, was the extent of his involvement. 

Dave Bittner: As we mentioned before, there's been no report of "Kirk's" (ph) being identified as a natural person. "lol" said he eventually came to believe that "Kirk" wasn't, in fact, a Twitter employee on the circumstantial grounds that he seemed more eager to do the company harm than "lol" thought a real employee would. Make of that what you will, since plenty of employees - a minority to be sure, but a nontrivial minority - do seem as a matter of history to have been willing to do their company harm. But in truth, very little is known about "Kirk." He was an unknown on the various chat sites he engaged. He came out of nowhere, and then he vanished back into the virtual beyond. 

Dave Bittner: Researchers at Mississippi State University have shown the relative ease with which devices can be geospatially tracked through common, commercially available databases, The Wall Street Journal reports. The study is interesting because of the devices it chose to track - Russian cellphones in and around Moscow and a missile test site in northern Russia, where there'd been some indications that an accident had occurred. The results indicate, the Journal says, the value such open, commercial marketing tools, really, can have for intelligence collection. 

Dave Bittner: The U.S. government seems to be moving toward serious consideration of banning TikTok as a security risk. An op-ed in The Hill suggests that such a ban would be based more on the generally frosty bilateral relations between the U.S. and China than on specific cases of misconduct on the part of the social platform. But on the other hand, TikTok does collect a great deal of data on its users. 

Dave Bittner: The Washington Post collects expert opinions about Russian and Chinese hacking of COVID-19 vaccine research and finds they differ over how to respond and even whether the hacking represented legitimate intelligence collection or a clear violation of international norms. Norms or no norms, there's a significant amount of bipartisan animus directed toward recent incidents of biomedical research hacking. The BBC reports that the Russian ambassador to London says Russia didn't do it, so there you have it. 

Dave Bittner: Joining me once again is Rick Howard. He is the CyberWire's chief security officer and chief analyst. Rick, you are kicking off a new season of your "CSO Perspectives" show that is over on CyberWire Pro, and you're starting off this season with an exploration of SOCs. 

Rick Howard: Yeah, security operations center. I built many of them, toured millions of them. And, you know, I thought that I knew the history of SOC evolution. And as I was digging into this, I discovered that I was completely wrong. I mean, yeah, it turns out that operations centers - the idea of them, that you might need them - they've been around since, like, 5000 B.C. Can you believe that - 5000 B.C.? 

Dave Bittner: Really? 

Rick Howard: Yeah. And we started to see the basic edges of the security - the modern-day security operations center, you know, in the early 1900s as the telecommunications industry started managing these giant networks of telephones. And then we saw the first real operations center to do it in the early '60s. And so that's pretty exciting. But through the next 30, 40 years, we get this evolutionary change from not only the telecoms, but from the intelligence community, from the government, from the commercial sector. And all these folks, all these groups are sort of taking hits at how do you build these things. 

Dave Bittner: I always think of the communications center in the movie "WarGames." 

Rick Howard: True. That's all - every SOC I've ever been in, we were trying to build that operations center... 

Dave Bittner: Right (laughter). 

Rick Howard: ...To some extent - OK... 

Dave Bittner: Right, right. 

Rick Howard: ...Even though it's two guys and a dog next to the coffee pot, OK? 

Dave Bittner: Right. 

Rick Howard: You know, but I was - as I was looking into this, though, I - we discovered that the evolution of SOCs has really stagnated. Like, since the early 2000s, they haven't really changed that much. And I was talking to Helen Patton - she's the Ohio State University CISO - about this kind of lack of momentum and also about how she is managing the "zero trust" policies from the SOC. And she had this to say. 

Helen Patton: The other challenge about research, which people sort of forget about, in the private sector is depending on where you are in the research cycle, your confidentiality requirements change. 

Rick Howard: Yeah. 

Helen Patton: So for example, in the beginning, when you've just got an idea, you want everyone to know about your idea because you want to crowdsource ideas and you want to get best thinking and you want to attract people to your cause, and so it's all public and it's great right up to the point where you've got a patent, and then you don't want anyone to know. And, you know, now it's locked down tighter than a drum. And then once you publish, now you want it to be all open again because you need people to come in and validate that your research is good and all this kind of stuff, right? And we haven't built zero trust protocols or access and authorization protocols around a changing life cycle requirement. 

Rick Howard: So she's basically saying that our concept of zero trust is not really mature enough to handle dynamic access rights, and this is something I've never even considered. You know, when I think about zero trust, I'm thinking, you know, we want to limit the marketing department from getting to the financial database, and, you know, that's good enough. 

Dave Bittner: Right. 

Rick Howard: But what Helen is talking about is she's got a group of individuals, researchers at her university, doing COVID-19 research that has varying degrees of requirements for access rights depending on where they are in the process. And the zero trust platforms that we all use today just aren't strong enough or mature enough to handle that. 

Dave Bittner: I mean, is it fair to say that there's been sort of this push and pull, this tension between what's needed and what's possible throughout the history of SOCs themselves? 

Rick Howard: Absolutely. All right? Then, you know, we've always wanted more in the SOCs and, by the way, have never gotten it, OK? I think, in my mind, what I would really want in my own security operations center is security operations, network operations, physical security all in one spot, with the authority to make decisions to counteract some bad thing that's happening. Nobody that I know of has a SOC like that, and I really do think it's the way it should be. 

Dave Bittner: Yeah. All right, well, there's much more where that came from. Do check out Rick Howard's "CSO Perspectives" podcast. That is part of CyberWire Pro. You can check it out over on our website, thecyberwire.com.  

Dave Bittner: And I'm pleased to be joined once again by Johannes Ullrich. He is the dean of research at the SANS Technology Institute and also the host of the ISC "StormCast" podcast. Johannes, it's always great to have you back. You have been tracking some stuff that's been going on within Google Cloud and some folks using it for phishing. What's going on here? 

Johannes Ullrich: Yeah, it's something that you have seen sort of pick up beginning of the year. And by now, pretty much all phishing attempts that I am receiving that sort of matter, that make it past my spam filter and such, they use Google Cloud Storage to actually store the phishing page. Now, we have seen this in the past with some of the Microsoft cloud and such, but I guess they have gotten better in preventing this and cleaning this up. Turns out that Google's Storage API, those pages are quite persistent. And so a little bit of what you can do - you can basically just have a static page there, but then they just add some JavaScript that will forward the data that the user submits to whatever actual sort of data collection website the attacker has set up. 

Dave Bittner: So why Google? What's causing them to choose this? 

Johannes Ullrich: Well, I think there are a couple reasons. Now, first of all, Google is - ultimately is a trusted site. The URL - the hostname they're using is storage.googleapis.com. Like with all these cloud providers, there's a lot of necessary good stuff on this hostname, so you can't really blacklist it. 

Johannes Ullrich: On the other hand, I found that Google is quite slow in removing these phishing sites, and that may also sort of contribute a little bit to Google becoming more popular and some of the other providers becoming less popular because, well, now the attacker has more time to collect data. Because the page they have to sort of protect is the page the user gets to first, and that turns out to be here this storage.googleapis.com. If their collection site gets taken down, they can just make a change to the JavaScript on the phishing page, but users that received the email in the past that actually triggers them to go to the phishing site, they'll still end up on that phishing page. So this is sort of the part that attackers usually have to keep up the longest. And if they can keep that up for a week, that's usually all they need to collect all the credentials that they would get out of a particular phishing run. 
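[Editor's note: Because storage.googleapis.com carries so much legitimate traffic, outright blacklisting isn't practical, as Johannes notes. A defensive middle ground is to flag, rather than block, email links pointing at shared storage hosts for closer review. The sketch below is illustrative only; the host list and function names are our own assumptions, not part of any product.]

```python
import re
from urllib.parse import urlparse

# Hostnames that legitimately serve user-uploaded content and are therefore
# attractive for hosting phishing landing pages. Blocking them outright would
# break legitimate mail, so links are only flagged for review.
# This list is an illustrative assumption, not an authoritative blocklist.
SHARED_STORAGE_HOSTS = {"storage.googleapis.com"}

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def flag_storage_links(email_body: str) -> list:
    """Return URLs in an email body that are hosted on shared cloud storage."""
    flagged = []
    for url in URL_RE.findall(email_body):
        host = urlparse(url).hostname or ""
        if host in SHARED_STORAGE_HOSTS:
            flagged.append(url)
    return flagged
```

A mail gateway could route any message with flagged links into a quarantine or banner-warning path instead of rejecting it, which preserves legitimate uses of the same hostname.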

Dave Bittner: And does that initial page - does it - I mean, at first glance, does it seem benign? Is it the sort of thing where you could understand how a surface inspection by Google, for example, would not raise suspicions? 

Johannes Ullrich: Well, actually, it usually is just copy-pasted code from the particular page that they're trying to impersonate. So some simple signature-based matching or so may actually capture a lot of these pages. And once the user is done with the page, they usually will redirect them, like, to the user's domain, kind of trying to fool the user into believing that they just entered the wrong credentials. And, of course, they may then try again. 

Dave Bittner: I see. So what are your recommendations here for folks to protect themselves? 

Johannes Ullrich: This is something very - you probably have to rely on user education. I would still recommend reporting these pages to Google as much as possible. Google Chrome has sort of a little add-on that makes it really easy to report phishing sites. I hope that Google will eventually get better in cleaning up these pages as people report them. 

Dave Bittner: All right. Well, Johannes Ullrich, thanks for joining us. 

Johannes Ullrich: Thank you. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. And for professionals and cybersecurity leaders who want to stay abreast of this rapidly evolving field, sign up for CyberWire Pro. It'll save you time and keep you informed. Listen for us on your Alexa smart speaker, too. 

Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Puru Prakash, Stefan Vaziri, Kelsea Bond, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.