Artificial intelligence behaving badly? Or just tastelessly? Third-party risks. Signs that the advantage may be tilting toward the defender.
Dave Bittner: Social engineering with generative AI. MyloBot and BHProxies. PureCrypter is deployed against government organizations and staged through Discord. Dish Network reports disruption. Third-party app and software as a service risk. Further assessments of the cyber phase of Russia's war so far with warnings to stay alert. Are tough times coming in gangland? Comments on NIST's revisions to its cybersecurity framework are due this Friday. AJ Nash from ZeroFox on mis-, dis- and mal-information. Rick Howard digs into zero trust. And - get this - AI is writing science fiction. From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Monday, February 27, 2023.
Social engineering with generative AI.
Dave Bittner: Researchers at SafeGuard Cyber have observed a social engineering campaign on LinkedIn that uses the DALL-E generative AI model to make images for phony ads designed to gather personal information. The malicious ads purported to offer a link to a white paper that would empower sales teams with next-level insights and strategies, a spicy come-on you've probably seen a time or two. If a user clicks the ad, they'll be asked to enter their personal information, including their email address and phone number, in order to receive the white paper. SafeGuard Cyber's researchers comment that this information would be useful in preparing future targeted phishing attacks.
MyloBot and BHProxies.
Dave Bittner: Commenting on a report earlier this month from BitSight that described the BHProxies residential proxy service and the actor behind it, Krebs on Security wrote Friday that the operation looks like a play in the criminal-to-criminal market. BHProxies appears to be linked to a 6-year-old botnet named MyloBot, whose goal seems to be the transformation of infected systems into proxies. The BHProxies service lets customers rent residential IP addresses to relay their internet communications, providing anonymity and the advantage of being perceived as a residential user surfing the web. It's said to deliver access to over 150,000 devices. The MyloBot threat actor, whose first activity was detected in an October 2017 sample by Deep Instinct, has used sophisticated methods of camouflage, lying dormant for a couple of weeks on an infected system before contacting its command-and-control servers, and running only in the temporary memory of the infected computer. BitSight researchers say they cannot prove that BHProxies is linked to MyloBot, but they have a strong suspicion, since MyloBot and BHProxies used the exact same IP within an interval of 24 hours.
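The kind of overlap BitSight describes, the same IP surfacing in both MyloBot and BHProxies telemetry within 24 hours, can be illustrated as a simple correlation over two sighting logs. This is a minimal sketch with invented data; the feed names, IPs, and timestamps are hypothetical and do not come from BitSight's report:

```python
from datetime import datetime, timedelta

# Hypothetical sightings: (ip, first_seen) pairs from two telemetry feeds.
mylobot_sightings = [
    ("203.0.113.7", datetime(2023, 2, 10, 8, 0)),
    ("198.51.100.4", datetime(2023, 2, 11, 9, 30)),
]
bhproxies_sightings = [
    ("203.0.113.7", datetime(2023, 2, 10, 20, 0)),  # same IP, 12 hours later
    ("192.0.2.55", datetime(2023, 2, 12, 14, 0)),
]

def correlate(feed_a, feed_b, window=timedelta(hours=24)):
    """Return IPs seen in both feeds within the given time window."""
    overlaps = []
    for ip_a, t_a in feed_a:
        for ip_b, t_b in feed_b:
            if ip_a == ip_b and abs(t_a - t_b) <= window:
                overlaps.append((ip_a, abs(t_a - t_b)))
    return overlaps

# A shared IP inside the window is circumstantial evidence, not proof of a link.
print(correlate(mylobot_sightings, bhproxies_sightings))
```

As the researchers themselves note, this kind of overlap supports a strong suspicion rather than attribution: residential IPs are reassigned, so a single match proves nothing on its own.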
PureCrypter deployed against government organizations, staged through Discord.
Dave Bittner: Menlo Security is tracking a campaign that's using the commodity downloader PureCrypter to target government entities. The threat actor uses Discord to host the downloader and employs a compromised domain belonging to a nonprofit organization as a command-and-control server. The attackers are using PureCrypter to deliver a variety of malware strains, including the Redline Stealer, Agent Tesla, Eternity, Blackmoon and Philadelphia Ransomware. The researchers conclude that this threat actor doesn't appear to be a major player in the threat landscape, but the targeting of government entities is surely a reason to watch out for them.
Third-party apps and third-party risk.
Dave Bittner: Adaptive Shield's annual SaaS-to-SaaS Access Report, which discusses this year's organizational security risks posed by connected third-party apps, was released this morning. The researchers report that companies with 10,000 SaaS users of Microsoft 365 have, on average, just over 2,000 applications connected to the productivity software. That number jumps to about 6,700 for Google Workspace. For companies with 10,000 to 20,000 users of Google Workspace, the average number of connected apps increases to just shy of 14,000. High-risk permissions, such as the ability to see, create, edit and delete Google Drive files and M365 data, have been found in 39% of apps connected to Microsoft 365 and 11% of those connected to Google Workspace. The apps most commonly connected to such software have been email applications, followed by file and document management apps. Scheduling, content management and project management apps also earned spots on the top ten list. Organizations are advised to look to their policies and their training.
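One way to act on that advice is to inventory connected apps and flag any holding broad read/write permissions. This is a minimal sketch, not Adaptive Shield's method: the app inventory is invented, though the scope strings mirror real Google OAuth scopes for Drive and Gmail:

```python
# Scopes treated here as high-risk: full read/write/delete access to user data.
HIGH_RISK_SCOPES = {
    "https://www.googleapis.com/auth/drive",         # full Google Drive access
    "https://www.googleapis.com/auth/gmail.modify",  # read/write mail access
}

# Hypothetical inventory of third-party apps connected to a workspace.
connected_apps = [
    {"name": "MailHelper", "scopes": ["https://www.googleapis.com/auth/gmail.modify"]},
    {"name": "SchedulerX", "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
    {"name": "DocSyncer", "scopes": ["https://www.googleapis.com/auth/drive"]},
]

def flag_high_risk(apps):
    """Return the names of apps granted at least one high-risk scope."""
    return [a["name"] for a in apps
            if HIGH_RISK_SCOPES.intersection(a["scopes"])]

print(flag_high_risk(connected_apps))  # ['MailHelper', 'DocSyncer']
```

In practice the inventory would come from the platform's admin API rather than a hand-built list, and the high-risk set would be tuned to the organization's own policy.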
Further assessments of the cyber phase of Russia's war so far, with warnings to stay alert.
Dave Bittner: While Russian offensive cyber action against Ukraine has been heavy and marked by the intelligence services' attempts at disruptive attacks, using wipers, for example, it has fallen far short of prewar expectations. Ukrainian resilience has blunted much of the Russians' cyber offensive effects. ESET offers a history of wiper attacks over the course of the war. CyberScoop draws attention to the success of Ukrainian defensive measures, which have certainly minimized the effects of the wipers and other attempts to influence the outcome of the war in cyberspace. The Canadian Centre for Cybersecurity issued a warning Friday calling for a heightened state of vigilance, especially for those in the critical infrastructure sector, and to bolster their awareness of and protection against malicious cyberthreats.
Tough times in gangland?
Dave Bittner: There's another sign that the advantage may have tilted a bit toward the defenders. State-directed and politically motivated threat actors aren't the only ones finding their tasks harder. The Wall Street Journal reports that cyber gangs' proceeds from their crimes have fallen off, and the individual criminals themselves are facing the equivalent of layoffs. Companies, encouraged by more stringent requirements for obtaining cyber insurance, have improved their defenses. And more aggressive law enforcement activity has also taken a direct toll on the gangs. Again, that's not a reason to get complacent, but it does offer some reassurance that the defenders' task isn't a futile one.
Comments on NIST's revisions to its Cybersecurity Framework are due Friday.
Dave Bittner: Proposed changes to the U.S. National Institute of Standards and Technology's guidance, found in "NIST Cybersecurity Framework 2.0 Concept Paper: Potential Significant Updates to the Cybersecurity Framework," are open for public comment through this Friday, March 3, 2023. Among other goals, the changes are intended to expand the scope of the framework to organizations of all sizes in all sectors. They also reflect an increased emphasis on international cooperation and a more extensive treatment of cybersecurity as an exercise in risk management. Comments on Framework 2.0 can be emailed to NIST.
Algorithms acting badly.
Dave Bittner: What fresh hell is this? That's what poet Dorothy Parker used to say when she walked into a party. We'll say it again now. The Verge says science fiction magazines are getting a lot of AI-written submissions. Apparently, the editors say they can tell. So at least we got that going for us. These submissions aren't fanfiction, but seriously, can AI-written fanfic be far behind? We fear we won't be spared. Somewhere, some algorithm is churning out saucy versions of Fifty Shades of Jean-Luc Picard or Chewbacca Visits the Valley of the Dolls. Are the artificially intelligent progeny of us allegedly naturally intelligent Homo sapiens destined to make all of our mistakes, only in a more robotic way? We knew this would happen. It comes from hanging out with bad data and bad algorithms, the kind of algorithms you find hanging out on street corners, throwing rocks at cars. It's a shame, but there you have it. So stay in school, friends. By friends, we mean algorithms. And choose your role models with care.
Dave Bittner: Coming up after the break, AJ Nash from ZeroFox on mis-, dis- and mal-information. And Rick Howard digs into zero trust. Stick around.
Dave Bittner: In a recent report outlining predictions for 2023, AJ Nash from ZeroFox outlined the prevalence of mis-, dis- and mal-information. He makes the case that they may indeed rank at the top of the list of threats that governments and organizations face in the coming year.
Aj Nash: On the Horizon piece, what we're seeing growth in is mis-, dis- and mal-information. And we've seen a pretty good-sized growth in that area, I would say, over the last six or seven years in social media. We've seen it in regular media at this point. And this has been a huge threat that I think is really a growing problem that people need to look at because this is no longer just newsworthy; this is really impacting the lives of everyone that we run into. And I don't know anybody right now, honestly, that isn't impacted by mis-, dis- or mal-information right now.
Dave Bittner: Can we unpack that a little bit just for - because misinformation, disinformation, I think people probably are clear on, but mal-information, I think there's some nuance there. How do you and your colleagues there separate those three different flavors of information?
Aj Nash: Yeah, that's a good question. People confuse these terms all the time. To be honest, I've confused them regularly.
Dave Bittner: (Laughter).
Aj Nash: I'm often having to go back and check my own references because, listen; mis-, dis-, mal-information, they all sort of blend together. And they're misused in media. They're misreferred to. So to be clear, misinformation is false information, but it's not intended to cause harm. So a good example of that would be my aunt tells me, you know, hey, I heard a story. You know, did you hear about this political figure said this or did this - right? - any random thing. She doesn't mean harm. She read it someplace. She believes it's true, and she's passing it along to me. It's misinformation. It's not accurate, but...
Dave Bittner: OK.
Aj Nash: ...It's being passed along unintended. It will still cause harm, but it's unintended to do so. Frankly, when things go viral, that's when we get into a lot of misinformation. Disinformation is false information, but it's intended to manipulate or cause damage. So the original source, for instance, in using the same scenario, my aunt, who's quoting something, perhaps where she got it from was intentional. Somebody actually intended to cause harm. They said something that wasn't true about, let's say, a politician they disliked or a sports figure they disliked or, you know, a media figure - whatever it might be, right? So they're pushing disinformation, but it's now being pushed around, and it's become misinformation along the way. People aren't intending to push it, but they've made it go viral. So there's that subtle difference there.
Aj Nash: The other issue that goes in this is sometimes misinformation was never intended to be - for harm to begin with. It wasn't necessarily disinformation that turned into misinformation. It's just somebody made a mistake. You know, somebody heard something wrong. They misinterpreted something. They said something. And that just spread like wildfire. And we see that a lot. And then people, especially famous people, have to come back and unwrap those. You know, they were misquoted, or somebody said, well, I saw this person fell down in the streets, and they must have a health issue, and it turns out they just tripped on a, you know, crack in the sidewalk.
Dave Bittner: Right (laughter).
Aj Nash: And you've got video to prove it, and you've got to go back and point that out, right? Because it impacts your reputation. Now, mal-information, that's really the most dangerous one because mal-information starts with a grain of truth. There's something in there that was true, and then it's exaggerated in a way that is misleading or causes harm. So mal-information is really difficult. Much like any form of lying, the most effective lie is a lie that's based on something true because somebody might see the truth in it and then not be as scrutinizing of the rest of the comments or the rest of the commentary that follows, which is, in fact, you know, not - it's dishonest. It's not true.
Aj Nash: So these are subtle differences, and they're hard to distinguish. And to be honest, in terms of their impact, they may not need to be distinguished for you or I. You know, if we're getting information to us, it's either accurate or it's not. You know, it's - professionals will go back and try to figure out why it was out there, why it was inaccurate, why it caught fire, went viral and who's responsible. But for you and I, the key piece, really, is we've got to do the fact-checking. We've got to do the research and determine if what we're reading and what we're hearing is true. And then if it's not, if we have the time and energy, we certainly can go back and try to figure out why we were given this false information. But there's just so much out right now. We're all drowning. You know, I pointed out - 147 minutes a day on the internet, it's the average user right now. We're all living in a space of just massive inputs.
Dave Bittner: What are your recommendations on an organizational level for dealing with this? Is this, you know, I don't know, keeping an eye on the Slack channels to see if some of this stuff starts to take off? Or, you know, what are some practical ways folks can stay on top of this?
Aj Nash: Yeah, that's a good question. So - and it's a really tough piece because what a lot of this starts with is we have to agree on what is considered a reliable source. And I think this is where we're running into a lot of challenges is - societally, is people are discrediting sources that were always considered reliable in the past. And so if you don't have a reliable source people agree on, then you end up with this siloing of information, and people choose their sources, and the sources align with what they want, and it just feeds their bias. So I think it's important to have third parties that you trust, that you rely on, that you say these are sources that we trust and believe in. These are unbiased people trying to do good work. And then that's why a lot of times this ends up being put out to third parties. And it takes a lot of vigilance. There has to be an incredible amount of monitoring and observing to understand what is being said and to see the early stages of a misinformation campaign or a disinformation campaign or a mal-information campaign and to get in front of those things. You know, whether it's a social media campaign - I'm looking for somebody that's maligning your brand - and being in a position to stop that.
Aj Nash: The importance of fact-checking - you know, it's really hard, but we really need to do a better job of helping people understand what is considered a valid source and do some fact-checking and looking at several different sources, sources maybe you believe in and sources that maybe you don't believe in so you get a wider picture. And if there's conflict between them, then how are you going to decide? But it's really important to analyze what we're taking in. Take a moment and say, where did this come from? You know, what's the likelihood it's true? If it sounds outrageous - whether it sounds outrageous because you're offended by the thing you've just read or heard, or outrageous because it just sounds so amazingly incredible it couldn't possibly be true because how awesome is this thing? In either case, you should really take a look at the source and start looking through and saying, what's the likelihood this is true? You know, if the politician that I really, really dislike, this horrible story just came out about them, and it's going to be a massive scandal, but I haven't read it any place else or heard it anywhere else, maybe I should do some research. Maybe it's just not true.
Aj Nash: You know, it's like any other scam. If it's too good to be true or too bad to be true, it's certainly worth taking time to research and seeing what's there. You know, it's a hard thing to do, but if we don't take the time to do the fact-checking, if we don't take the time to do the research, if we just live in our own silos and our own bubbles, we're victims, you know? And we're allowing ourselves to be that. We're actually willing victims of that when we're choosing to be the dupe. We're choosing to be the fool because we like what we're hearing. And that's really the biggest danger to me in these mis-, dis- and mal-information campaigns, is there are people who want to take advantage of those who are willing to be taken advantage of if they're told the things they want to hear.
Dave Bittner: That's AJ Nash from ZeroFox.
Dave Bittner: It is always my pleasure to welcome back to the show, Rick Howard. He is the CyberWire's chief security officer and also our chief analyst. Rick, welcome back.
Rick Howard: Hey, Dave.
Dave Bittner: So in our CyberWire programming meeting earlier this week, we were going over all of the published episodes of "CSO Perspectives" starting back in 2020. And one of the first ones you ever did covered the topic of zero trust as a first principles strategy. It's been over three years, Rick. What else you got?
Rick Howard: (Laughter) Ouch. That is so true. Point well taken, sir. OK.
Dave Bittner: (Laughter).
Rick Howard: But I think it's fair to say that zero trust comes up a lot on the network of CyberWire's podcasts and newsletters and not to mention the training side that we get with our brothers and sisters from the CyberVista merger we did last year.
Dave Bittner: Well, you know, most of the stuff I see regarding zero trust is kind of strategic or philosophical. And my sense is we tend to come up a little short when it comes to practical implementations out there in the field. I know you've got your ear to the ground. You hear a lot about pilot projects and things like that but not necessarily always a lot of success stories.
Rick Howard: Yeah, I think that's true. And I think one of the reasons you don't hear a lot of success stories from the field is the fact that zero trust is a strategy. You know, there are a million things you can do to improve your zero-trust architecture. And more importantly, it's kind of a journey with no obvious endpoint.
Dave Bittner: Right. So it's not like you get to the end of your zero-trust project and say, folks, we've solved it (laughter). We can wrap up - we can move on to something else. Zero trust is complete. We'll check off that box.
Rick Howard: Yeah, we're going to move on to curing cancer now. No, we don't get to that point yet.
Dave Bittner: I mean, is there anybody you can think of out there who is having success?
Rick Howard: Well, that was our question here at "CSO Perspectives." For the past year, the interns down in the underwater sanctum sanctorum have been scouring the landscape to find those stories. And they found a great one. We're going to talk to John McLeod, the CISO at NOV, who has not only moved his organization far down the zero-trust journey, he did it during the pandemic, right? So it's a remarkable story.
Dave Bittner: Well, thank goodness for the interns (laughter). I guess you're going to have to double their meager rations this week as a reward.
Rick Howard: Yeah, indeed. We'll double their bread and double their water rations. They're so happy down there right now.
Dave Bittner: I'll bet. I'll bet. So that's over on the subscription side. What is on the public side of your "CSO Perspectives" podcast?
Rick Howard: We are unvaulting a Rick the Toolman episode from June of 2022 on the current state of intelligence sharing. And we're talking to some pioneers in the field, some of the original members of the FS-ISAC that got the ball rolling back in the early 2000s. We have Denise Anderson. We got Errol Weiss and Byron Collie, just to name three.
Dave Bittner: Well, before I let you go, what is the phrase of the week over on your "Word Notes" podcast?
Rick Howard: This week, we're covering ZTNA. That's Zero Trust Network Access. It seems to be a theme for our little conversation here, right? But these are the technologies that directly support the zero-trust strategy. And we even hear from the father of the concept, John Kindervag.
Dave Bittner: All right. We'll look forward to that. Rick Howard, always a pleasure to speak with you. Thanks so much for joining us.
Rick Howard: Thank you, sir.
Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. The CyberWire podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Tre Hester, with original music by Elliott Peltzman. The show was written by John Petrik. Our executive editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.