ChatGPT continues to become more human, this time through hallucinations. Following Cl0p. Instagram works against CSAM. And data protection advice from an expert in attacking it.
Dave Bittner: ChatGPT takes an unexpectedly human turn in having its own version of hallucinations. Updates on Cl0p's ransom note, background, and recent promises. Researchers look at Instagram's role in promoting CSAM. A look at KillNet's reboot. Andrea Little Limbago from Interos shares insight on cyber's human element. Our guest is Aleksandr Yampolskiy from SecurityScorecard on how CISOs can effectively communicate cyber risk to their board. And a hacktivist auxiliary's stellar advice for protecting your data.
Dave Bittner: I'm Dave Bittner with your CyberWire intel briefing for Thursday, June 8th, 2023.
ChatGPT “hallucinations.”
Dave Bittner: OpenAI's ChatGPT chatbot is often described as having its own version of hallucinations, and these aren't the kind that medicine can fix. Researchers at Vulcan Cyber warn that attackers can use ChatGPT to trick developers into installing malicious packages. Noting that developers have begun using ChatGPT for coding assistance, the researchers state that they've seen ChatGPT generate URLs, references, and even code libraries and functions that do not actually exist. These fabrications are the hallucinations referred to. Large language model hallucinations, Vulcan says, have been seen in the past, attributable to old training data. If ChatGPT can fabricate nonexistent code libraries, an attacker could create and publish a malicious package under the very name ChatGPT recommends. Victims could then download and use it.
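Vulcan's finding suggests a simple defensive habit: treat any package name an LLM suggests as unvetted until it matches a dependency your team has already reviewed. A minimal sketch of that check (the `VETTED` set and all package names here are illustrative assumptions, not real recommendations):

```python
# Hypothetical guard against "package hallucination": before installing a
# dependency an LLM suggested, check it against names your team has
# already reviewed (e.g., those pinned in your lockfile).
VETTED = {"requests", "numpy", "flask"}  # illustrative allowlist

def check_suggestions(suggested):
    """Split LLM-suggested package names into vetted and needs-review lists."""
    vetted = [p for p in suggested if p.lower() in VETTED]
    needs_review = [p for p in suggested if p.lower() not in VETTED]
    return vetted, needs_review

# A hallucinated name ends up in the needs-review list instead of pip.
ok, review = check_suggestions(["requests", "totally-real-http-lib"])
```

Anything in the needs-review list would then get a human look at the package registry before installation, which is exactly the step the attack depends on developers skipping.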
Cl0p's current ransom note.
Dave Bittner: Continuing its efforts to extort victims affected by its exploitation of a MOVEit vulnerability, Cl0p has reportedly issued ransom negotiation demands to potentially hundreds of victims. The Register reports that the ransomware group, in an uncharacteristic move, gave victims a June 14th deadline to contact the attackers. This change of tactics, as ITPro reports, could be due to the unusually large amount of data stolen by the group: members of the cybersecurity industry have speculated that Cl0p has ingested too much data to identify the companies to which it belongs.
Cl0p may have had this exploit since 2021.
Dave Bittner: According to research from Kroll, Cl0p could have discovered the MOVEit zero-day exploit as long ago as 2021. Kroll's review of Microsoft Internet Information Services logs of impacted clients found evidence of similar activity occurring in multiple client environments last year, in April 2022, and in some cases as early as July 2021. Kroll also advises companies using MOVEit to check their server directories for suspicious .aspx files, such as "human2.aspx," as indicators of compromise (IoCs). Progress, the software developer of MOVEit, has created a web page for the vulnerability that describes mitigation steps and provides situation updates.
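That IoC check amounts to a filename sweep, which is easy to automate. A minimal sketch (the directory layout is hypothetical, and the suspicious-name set should be extended as vendors publish further IoCs; "human2.aspx" is the filename reported in coverage of Kroll's research):

```python
import pathlib

# Filenames reported as MOVEit-related indicators of compromise.
# Illustrative, not exhaustive -- extend from vendor advisories.
SUSPICIOUS_NAMES = {"human2.aspx"}

def find_iocs(root):
    """Return the names of .aspx files under `root` that match known IoCs."""
    root = pathlib.Path(root)
    return sorted(p.name for p in root.rglob("*.aspx")
                  if p.name.lower() in SUSPICIOUS_NAMES)
```

A hit from a sweep like this would be a trigger for a full incident response, not a cue to simply delete the file, since the web shell indicates the server was already compromised.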
Cl0p’s recent promises, and negotiations with ransomware gangs.
Dave Bittner: Can you trust what a ransomware gang says when it's negotiating? Experts say -- probably not. CBC Canada reported yesterday that Cl0p has claimed to have deleted all government data from its site. Emsisoft threat analyst Brett Callow wrote in an email that the claims should be assumed to be false, highlighting the fact that there is no reason for a criminal enterprise to simply delete information that may have value. And even if the data were deleted, he reminds us, the gang still conducted the breach in the first place. Businesses today aren't exactly making it difficult for ransomware attackers either. TechRadar writes that the number of small- and medium-sized businesses in the United Kingdom deciding to cough up the cash when victimized in a ransomware attack has increased significantly over the past year. A Censornet report suggests that the shift toward giving in stems from companies' general inability to manage their cyberthreats. Email attacks were the primary vector against companies in the past year, and the research shows that firms would benefit from better, more widespread threat solutions.
Researchers look at Instagram’s role in promoting CSAM.
Dave Bittner: An investigation by The Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst has found that Instagram's algorithms have promoted a vast network of accounts openly devoted to the commission and purchase of underage sex content. Researchers from the Stanford Internet Observatory discovered many accounts of users claiming to be minors who openly advertise self-generated child sexual abuse material for sale. The researchers uncovered similar networks on Twitter and Telegram, but note that Instagram, between its recommendation algorithms and direct messaging capabilities, is the most important platform. According to the Journal, a Meta spokesperson acknowledged that the company had failed to act on reports of child sex abuse content; the company condemned the behavior and asserted that investigations into these acts are underway.
A look at KillNet's reboot.
Dave Bittner: KillNet's reorganization of the hacktivist auxiliary it leads on behalf of Russian intelligence and security services continues. Radware describes the reboot as a move toward a more professional, better-disciplined organization. KillNet had hitherto been willing to present itself as a grassroots movement, but no more. Radware writes that "the revised KillNet isn't for armchair hackers and DDoSers, nor is it a platform for self-promotion or a ticket to overnight fame. Only the most astute will find their place in the new KillNet auxiliary," says the cybersecurity firm. So, Ivy League grads, there's finally a place that can put that well-earned degree to use. Maybe just not one that you'd want to write home about.
Consumer best practices from a hacktivist auxiliary.
Dave Bittner: And, finally, who better to advise you on how to protect your data than someone who wants to steal it? NoName has been posting interesting IT stories from the Russian perspective and, with them, publishing its own tips for protecting your financial data online by following twelve simple steps. "How can you defend your financial assets on the internet?" they ask, helpfully and rhetorically, and then go on to offer some advice on digital hygiene. NoName seems to be positioning itself as a community leader, offering advice and information to ordinary users. So why are they doing all this? They're positioning themselves as a source of news for the Russian domestic audience. And the stories they're offering are all straight out of the Kremlin's script.
Dave Bittner: Coming up after the break, Andrea Little Limbago from Interos shares insights on cyber's human element. Our guest is Aleksandr Yampolskiy from SecurityScorecard on how CISOs can effectively communicate cyber risk to their board. Stay with us.
Dave Bittner: There's that old saying, that bit of wisdom that everyone has a boss. No matter the title on your business card, there's someone to whom you are accountable. And for the cyber security leadership in many organizations, that means the board of directors. These past few years have undoubtedly seen board members increase their knowledge and understanding of cyber security issues, but bridging the gap between cyber risk and business risk can still be a challenge. Aleksandr Yampolskiy is CEO at SecurityScorecard, and I checked in with him for insights on how CISOs can effectively communicate cyber risk to their board.
Aleksandr Yampolskiy: There's not been a lot of KPIs for cyber security, and I experienced that personally when I was a CISO at Gilt Groupe. I knew what my budget was. I knew how much money I was spending. But I had no idea -- if I spent a million dollars on the latest, greatest endpoint security technology, I had no way of quantifying if I became one percent safer, two percent safer, zero percent safer. And that complete lack of KPIs and the ability to really quantify an ROI is a big issue because what you cannot measure, you cannot improve.
Dave Bittner: Yeah. It seems to me like, for a lot of folks, they're stuck with that frustrating message which is, you know, we spent all this money and nothing happened. Good news!
Aleksandr Yampolskiy: And, furthermore -- yeah, and furthermore, not only that people can't measure things they spent the money on, and they can't quantify the risk, they also lack complete visibility of their business partners because they could be protecting themselves, but then they're spending millions of dollars, for example, to host their solution in a cloud provider, or they're spending millions of dollars with a law firm, but they have no idea if the information is being protected. Still goes back to measurement and quantification. In any other field, we have metrics. You drive a car, you have a speedometer showing to you how fast you drive. You go to a board meeting to review financials, you have gross margin, LTV to CAC, EBITDA. And in cyber security, we have pretty much nothing.
Dave Bittner: Well, what do you propose? I mean, what sort of measurements are available to us that we could turn into meaningful numbers?
Aleksandr Yampolskiy: Well, you know, I think that in any industry there's no one number that magically captures every little nuance of a situation. Right? So a number is not a substitute for human judgment, but that was actually the impetus for starting SecurityScorecard. SecurityScorecard is a security ratings, response, and resilience company where we came up with a way to objectively and in a trusted way measure security for any company in the world from outside, and to give security teams, really, a complete understanding of the risk their business ecosystem poses -- their partners, contractors, third-party and fourth-party vendors. And so we came up with this platform that really provides KPIs and measurements for over twelve million organizations worldwide.
Dave Bittner: Well, can you give us some insights as to exactly how you go about doing this sort of thing?
Aleksandr Yampolskiy: Yeah, of course. So the way that we do it is there are three steps in the process. So, number one, we collect the signals unobtrusively from outside. And the signals are heterogeneous. For example, it could be indications that you have a malware infection within the company, or that you have a set of SSL misconfigurations where you did not configure them properly, or you have a number of patches that are missing. So we collect this data, unobtrusively, from all over the world. Then, for every company in the world, we discover the attack surface for that company. What are the business units? What are the subsidiaries? And then, third, we compare companies to similar companies. For example, if you have a hundred malware infections, we don't know if it's good or bad, so we're going to take a look at other similar-size companies with a similar attack surface and see. If they have two hundred malware infections and you have a hundred, then you're actually twice as good as everybody else. So you can see how many standard deviations you are from the median of the pack, and then we calibrate it based on nine years of historical research and we assign a score representing the likelihood of a company getting hacked. And we actually publish the algorithm at trust.securityscorecard.com -- you can go to our website and we're very transparent about how it works, what we measure, and what ingredients go into it. But it's basically based on comparing yourself to the median in the industry and how you compare to others.
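The peer-comparison step Yampolskiy describes -- how many standard deviations your finding count sits from the median of similar companies -- can be sketched in a few lines. This is only an illustration of the statistical idea, not SecurityScorecard's actual scoring algorithm, and the peer counts are invented figures:

```python
import statistics

def peer_z_score(your_count, peer_counts):
    """Standard deviations between your finding count and the peer median.
    Negative means fewer findings (better) than the typical peer."""
    median = statistics.median(peer_counts)
    spread = statistics.stdev(peer_counts)  # sample standard deviation
    return (your_count - median) / spread

# Your 100 malware infections versus assumed similar-size peers:
z = peer_z_score(100, [200, 180, 220, 210, 190])
```

A strongly negative score here says you have markedly fewer infections than comparable companies; a production rating would then calibrate such deviations against historical breach data, as described above.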
Dave Bittner: Can you give us some insights as to what goes into quantifying the cyber risk?
Aleksandr Yampolskiy: Well, look -- to quantify the cyber risk, you need to have a set of objective outside-in and inside-out indicators. And any type of quantification needs to be objective, not subjective. It needs to be transparent, where you publish the methodology. And it needs to be predictive. So you have outside-in data points. For example: how does your attack surface appear to an attacker? Are you patching your vulnerabilities fast enough? How long do you take to remediate known issues? What are the indicators of poor hygiene from outside? You might look at a website and observe an out-of-date copyright notice. It's not a vulnerability, but it's an indication that the company is not keeping its systems diligently up to date. Right? And then you can also have inside-out components -- for example, a company giving you a SOC 2 attestation, a pentest report, architecture diagrams. So, really, in order to measure security you need to have a 360-degree view -- outside-in, inside-out -- and then you have to be able to plug it into quantification models to really express how much money you could lose if a particular event occurs, like a DDoS attack or a ransomware incident.
Dave Bittner: And so what are your recommendations for, you know, a cyber security professional who goes down this path and then has to translate that for the board of directors, or the higher-ups in the company?
Aleksandr Yampolskiy: Well, yeah, a hundred percent. So CISOs lack a common language for discussing cyber security risk with business executives. Board members are used to communicating in financial terms, and discussing how risks and opportunities translate to organizational results. So my advice for CISOs: you have to speak the language of the board. Talk about what business outcomes you're trying to prevent. For example, you could say, I'm spending $300,000 to mitigate a potential $2 million outage due to a denial-of-service attack. Whenever possible, CISOs should report in financial terms. How do you translate cyber risk into potential financial impact? Scenario planning is also a powerful technique that CISOs can use to create effective cost-benefit analyses. I think also the CISOs should really encourage the board to bring on a cyber expert. A board member with strong cyber security awareness and background can help support the CISO by amplifying the importance of their cyber security investments. Form a cyber security committee at the board level, but start talking about business outcomes. Start doing scenario planning, quantifying the possible risk in financial terms, and the cost-benefit analysis, and create a special cyber committee on the board where you bring in a cyber security expert. That would be some pieces of advice.
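The $300,000-versus-$2-million framing Yampolskiy suggests is a standard expected-loss calculation. A minimal sketch of that arithmetic (the likelihood figures are assumed for illustration; the interview gives only the control cost and the outage impact):

```python
def expected_loss(probability, impact):
    """Annualized expected loss: likelihood of the event times its cost."""
    return probability * impact

def mitigation_net_benefit(control_cost, probability, impact, reduced_probability):
    """Value of a control: the reduction in expected loss, minus its cost."""
    return (expected_loss(probability, impact)
            - expected_loss(reduced_probability, impact)
            - control_cost)

# A $300,000 control against a $2M outage, assuming it cuts the annual
# likelihood of the attack from 30% to 5% (assumed figures):
net = mitigation_net_benefit(300_000, 0.30, 2_000_000, 0.05)
```

Presented this way, the board sees a spend of $300,000 buying a $500,000 reduction in expected loss, which is the kind of cost-benefit statement the interview argues for.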
Dave Bittner: That's Aleksandr Yampolskiy from SecurityScorecard.
Dave Bittner: And joining me once again is Andrea Little Limbago. She is Senior Vice President for Research and Analysis at Interos. Andrea, it's always great to welcome you back. You know, you and I were talking about this year's RSA Conference and the theme of the human element. And you've been part of RSA Conference, of helping to -- with some of the programs and things there. Where do you suppose we stand when it comes to that notion of the human element and cyber security?
Andrea Little Limbago: Yeah. Thanks for having me, Dave. It's interesting. I think, you know, a decade ago it really wasn't discussed all that much, and now it's almost taken for granted. So that -- that alone, to me, is a great transition, acknowledgement. I think we still tend to see quite a bit on, you know, blaming the user. User is stupid. There's nothing you can do. And that -- you know, that defeatism, you know, doesn't necessarily help and it certainly doesn't help in creating technologies that take for granted that humans are going to click on things and may be imperfect. But I'd say that -- that the segment of, you know, the community that still kind of is -- is in that paradigm is decreasing and we're seeing more and more the objective of -- how can we create technology that works, given human flaws? Because we all have them and we're all -- you know, it can be very easy to be tricked into clicking on something that's very targeted at you. So I think we're seeing, you know, really, like a nascent movement in some of the innovation for how we can create technologies that take into account human fallibility, but also then help provide the defenses that take it into account. And so a couple of different areas that we're seeing that. One is you just -- really, the notion of security culture. I think that helps a lot. And that's, sort of, the non-technology. So I think, perhaps, we looked at, you know, people, processes, and technology. The processes is -- is a good part where the security culture -- we saw a lot of interest in that for submissions for RSA this year.
Dave Bittner: Hmm.
Andrea Little Limbago: I think that's great. And I think -- but it's interesting because, on the one hand, it seems like it's been talked about for quite a bit, but it is something that's really hard to do. And -- and I think anyone who's worked in an organization knows that creating a culture is a -- you know, is very, very hard. And destroying one is actually -- it's quite easy.
Dave Bittner: Hmm.
Andrea Little Limbago: And so making sure that you're building a security culture that enables people to feel comfortable saying they may have done something wrong, versus penalizing them for it, you know, can -- can go a long way. And so there was a fair amount of interest and innovative ways for the security culture. And that's -- actually, some of the technology can come into play as far as, you know, your different gaming solutions. Tell people -- you know, make it more the gamification of -- of security to help them understand and learn and make it more interesting than a -- than a click-through PowerPoint might be. And so I think that that sort of a -- that's an interesting way -- scenario that we've seen. I'd say also there's a lot of discussion on a metaverse and how it can think about security before it becomes widespread. And that -- one, I think that that's great that, you know, we're at a point of maturity where we can think about security as the technology is really still being -- being built and growing instead of it being an afterthought.
Dave Bittner: Hmm.
Andrea Little Limbago: So I'm, you know, cautiously optimistic about that, but still I think there isn't enough discussion on it. There -- there will be some discussion on it and how to think about that but, you know, the metaverse introduces all sorts of the very similar kinds of problems that we see currently on the internet.
Dave Bittner: You know, you mentioned maturity and -- and, in my mind, I think that's a big part of it. I -- I wonder about the -- the kind of professionalization of cyber security that we've seen over the last decade or so. You know, it's not -- it's no longer that elite group of, you know, hackers who came up with their soldering irons and, you know [laughing], working hard, you know, throughout the night. It's not so individual based anymore, is what I'm saying. And -- and I think, as you say, that's leading to more diversity in both the types of people, but also in thought. And -- and so it seems to me like that's a big part of what's leading us to better solutions.
Andrea Little Limbago: Yeah. And I think that's absolutely right. The professionalization of the industry which I know some have kind of pushed back against but, for the most part, I mean, it's -- it's here. We need to see -- every company, you know, has -- has security concerns at this point. And so the professionalization of it has helped in that manner and it has -- I think that, coupled with the many diversity efforts that we see and, you know, coupled with, you know, just the way academia has -- has changed and evolved as well to integrate various aspects of security into -- you know, across disciplines in many regards. And there's still a ways to go, but I think all of those together really have helped, you know, increase the maturity of it. Yeah, it's more and more discussion of CISOs being, you know, being in the C-suite now versus being on the side.
Dave Bittner: Right. "C" in name only.
Andrea Little Limbago: Yeah. Yeah, exactly. That's exactly --
Dave Bittner: Right.
Andrea Little Limbago: And so I -- I think that -- I think that helps out a lot. Again, I think there's -- there's a -- there's a ways to go, but the professionalization of it really will also help us provide and learn lessons from others we can then share. And that's -- I think that's also a core component is, you know, the information sharing. We often think about that as, you know -- you know, sharing on IOCs or -- which is important, but sharing lessons learned on -- on how to build a security culture, for instance, or how to deal with insider threats, those kinds of lessons learned, sharing those across the profession, is really, really important. And I think that there's something else that we're seeing is, you know, a real desire to help both acknowledge some of the challenges that you're having and seeing if others are -- have figured out a way to solve those. Or if something is working, sharing that with others so they also can because, at the end of the day, you know, we're trying to raise up all -- all boats. Right? Because the thing about -- with your supply chain -- very often it's the -- the company that you maybe have the partnership with that has the lowest level of security that could, then, be the entryway into your own company. And so it is in everyone's best interests to help raise up the cultural -- you know, this awareness across all of your partners and across the entire industry. And so it's nice seeing a movement in that direction. You know, I think it's a little overdue and it's still not where we need it to be, but it's great seeing a broader awareness and encouragement of -- of collaboration to help everyone build a better security culture.
Dave Bittner: Yeah, for sure. All right. Well, Andrea Little Limbago, thanks for joining us.
Andrea Little Limbago: Okay, thank you, Dave.
Dave Bittner: And that's The CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. We'd love to know what you think of this podcast. You can email us at cyberwire@n2k.com. Your feedback helps us ensure we're delivering the information and insights that help keep you a step ahead in the rapidly changing world of cyber security. We're privileged that N2K and podcasts like The CyberWire are part of the daily intelligence routine of many of the most influential leaders and operators in the public and private sector, as well as the critical security teams supporting the Fortune 500 and many of the world's preeminent intelligence and law enforcement agencies. N2K's strategic workforce intelligence optimizes the value of your biggest investment -- your people. We make you smarter about your team, while making your team smarter. Learn more at N2K.com. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Tre Hester, with original music by Elliott Peltzman. The show was written by Rachel Gelfand. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.