CISA would like agencies to look to their management interfaces. Hacktivist auxiliaries and a role for OSINT in Russia’s hybrid war against Ukraine.
Dave Bittner: The Feds are working to secure management interfaces. NoName's DDoS campaign grows and targets Wagner. An update on the unidentified hackers attacking a Russian satellite communications company. The role of OSINT in tracking Russia's war. Rick Howard speaks with Becky Weiss from AWS about the hard math behind security. Our guest is Manoj Sharma of Symantec to discuss the security implications of generative AI. And cyber awareness over a holiday.
Dave Bittner: I'm Dave Bittner with your CyberWire Intel briefing for Friday, June 30th, 2023.
US Federal Government working to secure management interfaces.
Dave Bittner: Earlier this month, the U.S. Cybersecurity and Infrastructure Security Agency issued Binding Operational Directive 23-02, Mitigating the Risk from Internet-Exposed Management Interfaces. Researchers at Censys have discovered hundreds of qualifying devices that will need to be secured in order to comply with the directive. The company's report says Censys researchers conducted analysis of the attack surfaces of more than 50 federal civilian executive branch organizations and suborganizations. Throughout our investigation, we discovered a total of over 13,000 distinct hosts spread across more than 100 autonomous systems associated with these entities. Examining the services running on these hosts, Censys found hundreds of publicly exposed devices within the scope outlined in the directive. The researchers add, in the course of our research, we discovered nearly 250 instances of web interfaces for hosts exposing network appliances, many of which were running remote protocols such as SSH and Telnet. Among these were various Cisco network devices with exposed Adaptive Security Device Manager interfaces, enterprise Cradlepoint router interfaces exposing wireless network details, and many popular firewall solutions, such as Fortinet FortiGate and SonicWall appliances. So, as CISA is well aware, there is a lot of work remaining to be done here.
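The directive's scope covers remote management protocols and web admin interfaces reachable from the internet. That check can be pictured with a minimal sketch like the following, assuming a hypothetical, hardcoded set of management ports; real compliance scanning (and Censys's internet-wide measurement) is far more involved than a simple TCP connect:

```python
import socket

# Illustrative management-plane services of the kind flagged under BOD 23-02.
# This is NOT an official list, just an assumption for the sketch.
MANAGEMENT_PORTS = {
    22: "SSH",
    23: "Telnet",
    443: "HTTPS admin UI",
    8443: "HTTPS admin UI (alt)",
}

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_services(host: str) -> list[str]:
    """List management services reachable on the given host."""
    return [name for port, name in MANAGEMENT_PORTS.items()
            if is_open(host, port)]
```

A real sweep would iterate an agency's known address space and, crucially, distinguish a management interface from an ordinary web service on the same port, which is where the hard work actually lies.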
NoName057(16)’s DDoSia campaign grows, and targets Wagner, post-insurrection.
Dave Bittner: Turning to Russia and its war against Ukraine, the Wagner mutiny is having an effect on the conflict in cyberspace. Much of that conflict is potentially self-defeating. Consider the DDoS network the Russian hacktivist auxiliary NoName057(16) has built up to serve the state. We'll call them NoName for short. Since NoName's distributed denial of service public recruiting campaign, DDoSia, began in early 2022, its participation has grown significantly. BleepingComputer reports 2,400% growth in active users, to 10,000 as of June 29th, and its Telegram channel has swelled to over 45,000 people. Sekoia released a detailed report on the paid DDoS service offered by NoName, and writes, we clearly identify that the pro-Kremlin hacktivist group NoName primarily focuses on Ukraine and NATO countries, including the eastern flank: Lithuania, Poland, the Czech Republic, and Latvia. It is highly likely that this stems from the fact that these countries are the most vocal in public declarations against Russia and for Ukraine, as well as providing military support and capabilities. The group has been noted to reactively target countries as they express their support for Ukraine with arms shipments and anti-Russian sentiments. These pro-Kremlin hacktivists, as Sekoia calls them, began to attack Wagner sites on June 24th, which coincided with the Wagner mutiny and subsequent march on Moscow. Sekoia writes, this is the first observed attack against one single victim, as the NoName group usually targets an average of 15 different victims per day. Another considerable difference can be noted: while they usually do so for other victims, the attackers did not communicate about the attack on their Telegram channel. It's noteworthy that the NoName group was quick and responsive. They were sharper on the uptake than KillMilk, probable leader of Killnet, who was observed partying it up in Rostov during the Wagner Group's brief occupation of that city. Mr. KillMilk has been largely quiet since things fell apart for the Wagnerites Saturday evening. Maybe he backed the wrong horse.
Update: Unidentified hackers attack Russian satellite communications company, claiming to be Wagner.
Dave Bittner: Unidentified hackers claiming to be the Wagner PMC group targeted the Russian satellite communications company Dozor-Teleport and have defaced several websites with the Wagner logo. CyberScoop writes, the group posted a link to a zip file containing 674 files, including PDFs, images, and documents. On Thursday morning, the group also posted three files that appear to show connections between the FSB and Dozor, and the passwords Dozor employees were to use to verify that they were dealing with actual FSB representatives, with one password valid for every two months in 2023, according to a Google translation. Dozor-Teleport was confirmed to be disconnected from the internet on June 29th by Doug Madory, Director of Internet Analysis for Kentik. The Record reports, the hackers claim that they damaged some of the satellite terminals and leaked and destroyed confidential information stored on the company's servers. The group posted 700 files, including documents and images, to a leak site, as well as some to their newly created Telegram channel. One of the documents reveals a purported agreement that grants Russian security services access to subscriber information from Amtel-Svyaz. Recorded Future News was unable to verify the authenticity of these documents. InformNapalm, a hacktivist intelligence organization working in the interest of Ukraine, has also reported on the attack on their Telegram page, but they have refrained from naming Wagner as the group responsible. It should be noted that at the time of writing, no Wagner social media have claimed credit for this attack. A Ukrainian false flag operation remains very much a possibility.
The role of OSINT in tracking Russia's war.
Dave Bittner: One of the lessons taught by Russia's war against Ukraine has been the utility and prominence of open source intelligence in following the action. Observers have learned not to confuse cost with value, and a multitude of new sources, networked and equipped with smartphones, has altered the way in which journalists and even intelligence services follow developments. Sometimes the information posted by a rando taking selfies in front of a railcar with tanks on it can be more valuable than what you're getting from a billion-dollar hyperspectral sensing platform in low Earth orbit. Flashpoint has an overview of how OSINT has enabled the formation of a tolerably accurate picture of even so murky an event as the Wagner Group's mutiny. They draw a lesson from how understanding of recent events has unfolded, stating, in today's dynamic geopolitical climate, staying ahead of the curve necessitates more than just monitoring mainstream media. Open source intelligence collections have emerged as a game-changing tool for keeping abreast of the latest events in Ukraine and Russia, which can help various organizations and sectors sift through vast amounts of information, quickly filter out the noise, and deliver the most salient insights in real time. The recent events in Russia showcased the value of this intelligence resource in offering a multifaceted perspective on ground realities. And indeed, it's striking to see the extent to which even mainstream legacy journalism has come to incorporate information gained from the social media crowd in its reporting. Social media, and especially Telegram, have been a principal source of information about the march on Moscow and its consequences. They've also provided a useful check on official statements. Anyone who's spent any time with social media knows the vast quantity of nonsense in circulation. But in some respects, they do function as a kind of marketplace of ideas, and a market that can function efficiently.
Here's a suggestion for students of the field. When do social media function as a self-correcting source of information, and when do they wander into popular delusion? Some systematic understanding would be welcome.
Enjoy the fireworks, but be cautious around the 4th of July.
Dave Bittner: Holidays are traditionally times of heightened cyber threat. This weekend begins the U.S. Independence Day celebrations, and attacks are to be expected. See our website for some advice from industry experts. And remember, threats aren't really born on the Fourth of July, but they get itchy around the holidays. So, enjoy the fireworks, have fun at the parade, attend the barbecue. But stay safe online. We'll be enjoying the Fourth, and we'll be taking Monday and Tuesday off. We'll be back as usual on Wednesday. In the meantime, enjoy the holiday if you observe it. A British friend of the show does, only he celebrates it as good riddance day. That hurts. But still, the guy's got a point.
Dave Bittner: Coming up after the break, Rick Howard speaks with Becky Weiss from AWS about the hard math behind security. Our guest is Manoj Sharma of Symantec to discuss the security implications of generative AI. Stay with us.
Dave Bittner: Manoj Sharma is Global Head of Security Strategy at Symantec in their Enterprise Division. Like many of us, he and his colleagues watched with great interest the release of ChatGPT and other generative AI based on large language models with an eye on its potential for good or harm.
Manoj Sharma: We looked at this from multiple angles, if you will. I mean, is this tool mature enough to produce something meaningful that can be used in a malicious kind of attack? And as we were contemplating, I get this one call from one of our account managers, of course one of our largest customers. They're like, Manoj, we need to talk. I'm like, sure. And the opening statement the customer made was, oh, Manoj, we have an existential crisis. I'm like, tell me more. And the answer was, well, I'm really afraid that this tool is going to create a lot of problems for us. We are a bunch of experts that serve larger, smaller, all kinds of communities. And I'm afraid that our experts would use this tool to find answers that they should be looking for in the resources that we subscribe to, and so on and so forth, real, attributable sources of knowledge. They would go to this to find an answer to a question, get it wrong, and then use that knowledge in making business decisions, if you will. And if that goes wrong, then I am opening myself to a lot of potential financial penalties in terms of lawsuits, if you will. So, what were you expecting me to do? They're like, I need guardrails. I need guardrails for people when they're going to these tools. Now, I can give you more examples on this front. Imagine a company that processes mortgages, if you will. A part of mortgage processing is letter writing. You write letters, right? And imagine this for a minute. I'm not saying it's going to happen, or happened, but it's a possibility. Say somebody who is processing that loan application wrote a letter for it, didn't like the way it came out, and took all of that text and gave it to ChatGPT to rewrite. It's going to work, it's going to work great. But the problem is that that text may have some PII, very personal information about the individual or the entity you're doing business with. And you don't know how that information will be used by that large language model, if you will, and where it will show up. So, this vector has become a primary concern for our customers for losing important data, either by intent or by mistake.
Dave Bittner: Well, when you talk about guardrails, are we talking about something along the lines of, you know, security training that we do for our employees, or are we talking about technical solutions, or a blend of both?
Manoj Sharma: It has to be a blend of both. And let me put that in perspective. When you think about it, even Broadcom, the company I work for, we have a policy that came out as soon as we discovered these tools, and saw how a few other companies, Samsung among them, got in trouble by, you know, sharing code, if you will, on these tools. Our engineers are not supposed to upload or download code from these tools, right? Attribution and privacy and so on and so forth. And this is the reason why most of the larger banks in America have actually blocked access to these tools already from their environments. So, when you build a policy and you train your users on don't do this, and you have significant programs, that's fine, but then how do you enforce it, and how do you report that it's not happening? So, when we talk about guardrails, then, of course, the coaching and training of the employees is happening, but in addition to it, technology controls and measures need to be put in place, so people don't do these things even accidentally. Right? So, what we did at Symantec, and this is like a very small change in the intelligence that we generate, is enable our customers to clearly identify which users are going to which ones of these tools. There's so many tools out there, Dave. It's not just ChatGPT, it's not just Bard, it's so many others. Salesforce has one, AWS has one. I mean, you name it, there's so many tools there. So, what we have done is clearly identify the traffic going to these tools, and who's going there, so you, as an administrator or an information governance program owner, can sit down and analyze, well, why is engineering going there? They shouldn't be going there. I understand that human resources, recruiting, and marketing would go there, because they do a lot of creative work. So, I will let them go. But maybe engineering and other functions shouldn't use these tools. And if they have to, coach them. Coach them in a way that when the user gets to these tools, a little coaching page shows up and says, hey, I see you're going to this page. Please do so, go to the app, do what you need to do, but please ensure that you read the terms of use and how the information will be used. We recommend, we heavily suggest, the policy states, however it's worded, coach the user on the way. The problem, Dave, is that, look, it doesn't matter where you work, our users are way too smart and way too savvy with internet technologies. Everybody is very curious about these tools, and if you block them from going to these tools from your network, from your corporate devices, they'll find another way to get there.
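The allow-coach-block decision Sharma describes can be sketched as a simple lookup, assuming a hypothetical, hardcoded category feed and department policy; a real product like Symantec's resolves tool categories from vendor-maintained URL intelligence, not a static set:

```python
# Hypothetical set of generative AI destinations (illustrative only).
GENAI_HOSTS = {"chat.openai.com", "bard.google.com", "claude.ai"}

# Hypothetical per-department policy: creative functions allowed,
# engineering blocked, everyone else coached.
POLICY = {
    "marketing": "allow",
    "recruiting": "allow",
    "engineering": "block",
}

def decide(department: str, host: str) -> str:
    """Return the action (allow / coach / block) for a user's web request."""
    if host not in GENAI_HOSTS:
        return "allow"  # not a generative AI tool; out of scope here
    return POLICY.get(department, "coach")  # unknown departments get the coaching page
```

For example, `decide("engineering", "claude.ai")` would come back `"block"`, while a department with no explicit rule would be shown the coaching interstitial.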
Dave Bittner: Right. Well, and that was going to be my next question, which is, like, I can imagine this being an irresistible temptation when it comes to folks making their own shadow IT.
Manoj Sharma: Well, you bet. It's a classic use case of shadow IT. So, the idea here, the best way that we found working with our customers, is guardrails. Coach the users, let them go there, but coach them on the possibility that the data they're uploading will show up somewhere, and that may cause legal liabilities. Don't download code that may be copyrighted. So, coach the users. Tell them. And then put additional controls in place so that when the user is uploading something, or asking a question, you're monitoring what they're uploading. What does that content look like? Our technology allows you to capture that data. And then if we find in that conversation that this is PII, this is sensitive data that shouldn't be going out, we'll block it before it gets there, in real time.
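The real-time blocking Sharma describes can be sketched, very roughly, as a content filter applied to outbound prompts. The patterns below are illustrative stand-ins of my own; Symantec's actual DLP engine is not public, and production detectors use validated checksums, exact-data matching, and machine-learned classifiers rather than a few regexes:

```python
import re

# Toy PII detectors (assumed patterns, for illustration only).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_pii_types) for an outbound AI prompt."""
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
    return (not hits, hits)

allowed, hits = check_prompt(
    "Please rewrite this letter for John, SSN 123-45-6789."
)
# allowed is False here: the prompt contains an SSN-shaped token,
# so an inline proxy would block the upload before it leaves the network.
```

The interesting engineering is in sitting inline on TLS-intercepted traffic so the check happens before the prompt reaches the tool, which is what "block it before it gets there" implies.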
Dave Bittner: That's Manoj Sharma from Symantec.
Dave Bittner: And another episode of our continuing series of interviews that our CyberWire colleague Rick Howard gathered at the recent AWS Reinforce Conference, today Rick speaks with Becky Weiss from AWS about the hard math behind security.
Rick Howard: The CyberWire is an Amazon web services media partner. And in June 2023, Jen Eiben, the CyberWire's Senior Producer and I, traveled to the magic world of Disneyland in Anaheim, California to attend their AWS re:Inforce Conference, and talk with senior leaders about the latest developments in securing the Amazon Cloud. I got to sit down with Becky Weiss, a Senior Principal Engineer at AWS, and one of the keynote speakers at the conference, and we got to talking about the different ways people can learn the craft of cybersecurity. And she had some excellent advice.
Becky Weiss: In my opinion, if someone's trying to learn the cloud, the best vantage point from which to approach that learning journey is to actually start with security. Concretely, I tell people to start with the Identity and Access Management service in AWS, because that's at the center of everything. If you understand what's going on there, you're going to have a much easier path to learning how anything works in AWS.
Rick Howard: I love that advice, right? Because most veterans would not say that. But that's the perfect way to get in, right? Because, like you said, it is the key to the whole security posture, right? And especially if you're trying to adopt some kind of zero trust strategy, right? So, what a great recommendation that is. That's fabulous.
Becky Weiss: Yeah.
Rick Howard: One of the things you said in your keynote was you were talking about being able to mathematically prove things. And that went right over my head, all right? And so I would love you to explain what that means to me. All right?
Becky Weiss: Well, you have to cover so much material so fast. So, we've made a very large investment in AWS into what we call automatic, automate, you can edit that one, automate.
Rick Howard: No, no, we're leaving that in. Totally leaving it in.
Becky Weiss: Okay, automated reasoning.
Rick Howard: Automated reasoning, okay, yeah.
Becky Weiss: So, this made its, as far as I know, this made its first appearance on the AWS stage, I'm going to say, I might have my facts a little bit wrong, but I'm close, in 2018. We launched this feature for S3 called Block Public Access. The problem we were working backwards from was this. You know, S3 is the focal point for where a lot of customers store their data in AWS. Most of it is in the S3 service. And, of course, S3 buckets are secure by default. They're local to your account by default. They're not accessible from outside the account until you take some configuration step to affirmatively say so. And we had a lot of customers who were worried about somebody making a configuration mistake on that policy that allows outside access.
Rick Howard: Because that seems like, I mean, in the early days of S3 buckets, that seemed like the news headline, you know, someone forgot to configure the S3 bucket to do something.
Becky Weiss: Right. And there was understandably a lot of concern about that. And if you even go back to the birth of S3, like I once went and looked it up, it was S3, storage for the internet, right? And one of the use cases in those early days, before the rest of AWS existed, because S3 was either the first or one of the first services, depending on how you look at it, was, let's host a website on this. This is a great place to host website assets so that the world can get to my website. But if you zoom forward a decade and a half or more, that's not really what you'd do. Even if you wanted to put a website on S3, which is a great use case for S3, you would use our CloudFront service, and you'd get better latency, caching behavior, global distribution, and all these things that the CloudFront service is exactly designed to do.
Rick Howard: Well, can I give you a for instance?
Becky Weiss: Yeah.
Rick Howard: I joined the CyberWire about three years ago, right? And I kept talking about a web server, you know, where we distribute all of our content. And I'm thinking in my head, because I'm an old guy, right, that there's a server somewhere, either hardware or software, sitting in Amazon acting like a web server. And it took me a year and a half to realize it was just data in an S3 bucket with Lambda calls. That's how our website works. There's no server, right? And my head blew, all right?
Becky Weiss: And I've actually seen, particularly in the earlier days of AWS, exactly that reaction from customers who were like, wait, how do I put a firewall in front of my DynamoDB table? Like, that's not what's going on here, right? It's an API, right?
Rick Howard: Yeah, that's right.
Becky Weiss: So, you know, we saw a lot of understandable concern over these misconfigurations. And we had been investing in this team that specialized in automated reasoning techniques. These are mathematically provable techniques where you model a system, and it's able to use all those things you learned about in that theoretical computer science class they made you take. I really liked that class, by the way. It's a cool class. But all of those techniques are used to prove, very specifically, that one policy is or isn't more permissive than another policy. And from that, they can deduce, provably, whether the bucket has a policy on it that's allowing public access. And from there, they can block that when they see it happening. And that was a very large step forward that we took, or that our customers were able to take, in just having confidence that the configurations are what they want them to be. And if somebody ever misconfigured a resource with what they didn't want it to be, this thing would step in, block it, and, you know, so they could be a lot more confident.
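What Weiss describes is AWS's automated reasoning work, which proves semantic properties of policies rather than pattern-matching their text. As a crude, string-level approximation of the question it answers, "does this bucket policy allow a wildcard principal?", consider the sketch below. This is not how the real analysis works (it reasons about policy semantics, including conditions and resource ARNs, and handles forms this toy misses), but it shows the shape of the check:

```python
import json

def allows_public_access(policy_json: str) -> bool:
    """Toy check: does any Allow statement grant to a wildcard principal?

    Simplifications (assumptions of this sketch): Statement is always a
    list, principals are a string or a flat dict, and Conditions that
    would restrict access are ignored.
    """
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*":
            return True
        if isinstance(principal, dict) and "*" in principal.values():
            return True
    return False

# A hypothetical public-read bucket policy of the kind Block Public Access exists to catch.
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})
```

The gap between this sketch and the real system is exactly Weiss's point: proving that one policy is no more permissive than another requires reasoning over all possible requests, which is why AWS built a dedicated automated reasoning capability instead of a string matcher.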
Rick Howard: What's the take away from your view from this conference? What should, as I'm leaving, going home tonight, what should I be thinking about?
Becky Weiss: I talked a little bit in the keynote about data perimeters. You know, this is very meaningful, probably both to me and to all of our AWS customers, because, like I said at the beginning, the very first thing that you think about, if you're going to move a workload to the cloud, is not how do I build it, but how do I make sure that it's secure? Just at a coarse-grained level, I have my part of the cloud, I need to keep the outsiders out, I need to keep the data inside it. Right? That's step zero. We've got to figure out how to solve that before we can really do anything else. And our data perimeters effort works backward directly from that. We've made quite a few steps along this journey, meaningful progress over the last couple of years, even before we were using the term data perimeters. This is something that we are very, very attuned to. And we have a great white paper on it. If you're listening to this and any of these ideas resonate with you, look up AWS data perimeters. You're going to find a really actionable white paper with good guidance in it.
Rick Howard: Excellent.
Becky Weiss: And, you know, and we're not done. Like we're doing, we're doing a lot more there. And I'm really excited about that, because it's just so meaningful to anybody, you know, anybody who's, you know, trusting us, giving us the privilege of holding their data.
Rick Howard: I like the way they wrapped that up, because zero trust has given us concrete things we can do right away, it's no longer a theory, we can actually do some things.
Becky Weiss: Yes.
Rick Howard: And then a little bit of homework. Go read the data, the data perimeter paper.
Becky Weiss: Read that paper.
Rick Howard: And see what you can do. That's really good.
Becky Weiss: You'll definitely pick up something you want to do from there.
Rick Howard: Excellent. Well, thanks, Becky. Thanks for coming on the show.
Becky Weiss: Thank you so much for having me.
Dave Bittner: That's the CyberWire's Rick Howard speaking with Becky Weiss from AWS.
Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. Be sure to check out this weekend's Research Saturday and my conversation with Daniel dos Santos, Head of Security Research at Forescout. We're discussing their insights from a recent exercise his team conducted on AI-assisted attacks on OT and unmanaged devices. That's Research Saturday. Check it out. We'd love to know what you think of this podcast. You can email us at email@example.com. Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cybersecurity. We're privileged that N2K and podcasts like the CyberWire are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team, while making your team smarter. Learn more at n2k.com. This episode was produced by Liz Irvin and Senior Producer Jennifer Eiben. Our mixer is Tre Hester, with original music by Elliott Peltzman. The show was written by our editorial staff. Our Executive Editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here next week.