The CyberWire Daily Podcast 7.27.22
Ep 1628 | 7.27.22

The cost of a data breach as an economic drag. Personal apps as a potential business risk. Why so little ransomware in Ukraine? Employee engagement study reaches predictably glum conclusions.

Transcript

Dave Bittner: IBM reports on the cost of a data breach. Personal apps as a potential business risk. Over on the dark side, there is help wanted in the C2C labor market. An employee engagement study reaches predictably glum conclusions. Betsy Carmelite from Booz Allen Hamilton on reducing software supply chain risks with SBOMs. Our guest is Elaine Lee from Mimecast, discussing the pros and cons of AI in cybersecurity. And why so much attempted DDoS, but not so much ransomware?

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Wednesday, July 27, 2022. 

IBM on the cost of a data breach: automation pays, and so does incident response planning. 

Dave Bittner: IBM Security has released its 17th annual Cost of a Data Breach Report. The research, conducted by the Ponemon Institute and sponsored, analyzed and published by IBM Security, studied 550 organizations that fell victim to a data breach between March of 2021 and March of 2022. Researchers found that 83% of those organizations had more than one data breach. Sixty percent of the breaches led to increases in customer prices, with the cost of a data breach averaging $4.35 million. The critical infrastructure sector was disproportionately impacted financially by breaches, with affected organizations averaging costs of $4.82 million. It pays, however, to have protection in place. Companies with fully deployed security AI and automation saved just over $3 million on average, and companies with an incident response team and plan saved $2.66 million. 

Dave Bittner: IBM thinks data breaches are having an effect on economic conditions in general. The company said the findings suggest these incidents may also be contributing to the rising costs of goods and services. In fact, 60% of studied organizations raised their product or service prices due to the breach, at a time when the cost of goods is already soaring worldwide amid inflation and supply chain issues. The toll breaches exact amounts to an invisible cyber tax. 

Personal apps as a potential business risk. 

Dave Bittner: Netskope has released a report detailing the common use of personal apps in business. Cloud app use has increased 35% just since the beginning of 2022, with the average mid-sized business of between 500 and 2,000 employees using 138 different apps. Personal app and personal instance usage increases in the 30 days before employees leave an organization, with 20% of users uploading unusually large amounts of data before their departure. This might be innocent, but it does inevitably raise suspicions. Netskope explains the distinction between a personal app and a personal instance. They say a personal app such as WhatsApp is an app that only sees personal usage from personal accounts. A personal instance is a personal account of an app that is also managed by the organization. For example, someone's personal Gmail account in an organization that uses Google Workspace is a personal instance. 
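That distinction is easy to make concrete in code. Here is a minimal sketch of the classification logic, assuming a hypothetical event that carries an app name and the account's email domain - the apps, domains and logic are illustrative assumptions, not Netskope's implementation:

```python
# Classify cloud app activity as corporate, personal instance, or personal app.
# A minimal sketch; the apps, domains, and rules are illustrative assumptions.

MANAGED_APPS = {"gmail", "google-drive", "slack"}  # apps the org also manages
CORPORATE_DOMAINS = {"example.com"}                # the org's email domains

def classify(app: str, account_domain: str) -> str:
    if app in MANAGED_APPS:
        # The organization manages this app, so a non-corporate account is a
        # "personal instance" (e.g., personal Gmail at a Google Workspace shop).
        if account_domain in CORPORATE_DOMAINS:
            return "corporate"
        return "personal instance"
    # An app the organization doesn't manage at all is a "personal app"
    # (e.g., WhatsApp).
    return "personal app"

print(classify("gmail", "example.com"))   # corporate
print(classify("gmail", "gmail.com"))     # personal instance
print(classify("whatsapp", "gmail.com"))  # personal app
```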

Dave Bittner: The current level of personal app use represents an increase of 33% over the same time last year. Personal app usage is most prevalent in the retail sector, with nearly 4 in 10 employees using them, and least prevalent in the financial sector, where fewer than 1 in 10 employees were found to be uploading, creating, sharing and storing data. Interestingly, many organizations use apps with overlapping functionality. Mid-sized companies on average use four webmail apps, seven cloud storage apps and 17 collaboration apps. This obviously suggests an unnecessary expansion of an organization's attack surface. 

Why so much attempted DDoS, but not so much ransomware?

Dave Bittner: The Council on Foreign Relations looks at the recent record of Russian cyber operations, particularly from the country's privateers, and asks why ransomware attacks against Ukrainian targets seem to have fallen off after an initial wave of pseudo-ransomware wiper attacks. After all, it's not as if gangland isn't connected to the Organs. Conti is, or at least was, tight with the FSB, and EvilCorp danced with both the FSB and the SVR. So it's not as if there's a lack of either juice or direction. The Council suggests a range of reasons but comes down in the end to the privateers' profit motive. Ukrainian victims are unlikely to have much incentive to pay a ransom and may have little ability to do so, even in the unlikely event that they wish to. None of this minimizes the ransomware gangs' connections to the Russian security services, nor should it be taken as a counsel of complacency - rather the opposite. If you look like you could pay, you can expect to be regarded as a potential target. 

Employee engagement study reaches predictably glum conclusions.

Dave Bittner: Tessian has shared the results of an employee engagement study detailing that nearly 1 in 3 employees, on average, do not believe that they play a part in the cybersecurity of their company. Reportedly, only about 39% of employees surveyed say that they're very likely to report a security incident, with 42% of respondents reasoning that they wouldn't know if they caused a security incident and 25% saying that they just don't care enough about cybersecurity. About three-quarters of organizations have experienced a security incident in the last year, despite IT and security leaders ranking their security posture as 8 out of 10 on average. 

Dave Bittner: Nearly half of all security leaders say training is one of the most important parts of the cybersecurity puzzle, but only 28% of employees in the United Kingdom and United States report that they find the training engaging. And alarmingly, only 36% pay full attention to the training. We don't want to throw the first stone here. After all, we all remember our high school careers, and 36% of our full attention would have made our teachers proud. But maybe an hour of PowerPoint once a year in the break room isn't the royal road to practical wisdom in these matters, even if donuts and coffee are provided. 

Help wanted (C2C edition).

Dave Bittner: And finally, maybe this great resignation we keep hearing about is a problem for the criminal market as well. Huntress contacted us yesterday with a note about the way they're seeing threat actors target managed service providers in their supply chain attacks. They said Huntress researchers discovered a Breached thread from July 18, 2022, looking for a partner to help process stolen data from over 50 American MSPs, 100 ESXi and more than 1,000 servers. The hacker boasted a high profit share with only a little left to do before exploiting the data. Huntress reminds us that this also seems to corroborate the threat to MSPs the Five Eyes warned of on May 11 of this year, the Five Eyes being Australia, Canada, New Zealand, the United Kingdom and the United States. Their observations also confirm something about the C2C market - its criminal players suffer from the same human resources challenges the rest of us do. 

Dave Bittner: Here's the text of what amounts to a criminal's help wanted ad. Looking for a partner for MSP processing. I have access to the MSP panel of 50-plus companies, over 100 ESXi, 1,000-plus servers. All companies are American and approximately in the same time zone. I want to work qualitatively, but I do not have enough people. In terms of preparation, only little things are left. So my profit share will be high. Please send me a message for details and suggestions. 

Dave Bittner: Well, friend, here's a suggestion. Your profit share might be high, but why would your prospective employees care about putting Dogecoin in your wallet - cold, virtual or otherwise? I mean, they have expenses and obligations, too. What about their profit share? Well, things are tough all over. Here's a thought. Promise the goons you hire that you'll never make them sit through a quarterly PowerPoint in the break room. People hate that, or so we hear. We don't do break room training at the CyberWire headquarters. But if we did, we'd certainly provide donuts and coffee. 

Dave Bittner: Elaine Lee is a principal data scientist on Mimecast's CyberGraph team, and I caught up with her for her take on artificial intelligence for cybersecurity - where it works well and where it's still got a ways to go. 

Elaine Lee: AI does very well at picking up on anomalies. So basically, AI systems have a lot of computing power and capacity at their disposal, and they're able to leverage all of that to be super vigilant - hypervigilant. An average human can only pay attention to so much at the same time, can only incorporate and make use of so much data about their environment at once. 

Elaine Lee: Theoretically, an AI system doesn't have limitations as strict as a human's, so it's able to pay attention to much more. And as a result, that makes AI systems very well suited for anomaly detection - basically just looking for anything that's out of the ordinary. That's what it's very good at. And I think a lot of adaptive AI and ML systems are built around this sort of idea - you know, just look for something that's a little odd and then alert a human about it. 
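That look-for-the-odd-thing-then-alert-a-human pattern can be sketched in a few lines. The example below uses scikit-learn's IsolationForest on invented login features - the features, numbers and threshold are assumptions for illustration, not a description of any vendor's model:

```python
# "Look for something that's a little odd and then alert a human about it":
# a minimal anomaly-detection sketch. The login features here are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline behavior: [login hour, megabytes uploaded] for ordinary sessions.
normal = np.column_stack([rng.normal(10, 2, 500), rng.normal(20, 5, 500)])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New sessions to score; the last logs in at 3 a.m. and uploads far more data.
sessions = np.array([[9.5, 22.0], [11.0, 18.0], [3.0, 950.0]])
for features, verdict in zip(sessions, model.predict(sessions)):
    if verdict == -1:  # scikit-learn's convention: -1 means anomalous
        print(f"alert a human: unusual session {features}")
```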

Dave Bittner: And where does it come up short these days? 

Elaine Lee: So kind of going back to something I said earlier about generative systems - basically, new stuff entering the scene. I don't think we're quite there yet, but AI systems might not be very adept at identifying new and emerging threats. I think a lot of existing AI systems are built around recognizing known attack vectors - basically, things that have been seen before, attack types that have been seen before. So if you tried to do something new, it would take a while for an AI system to pick up on it, if it ever picks up on it. So oftentimes you still need that human in the loop to train the AI system to recognize the new threat type. Long story short, it's not very good at detecting very, very novel threats. 

Dave Bittner: So where do you suppose we're headed then? I mean, as you look at some of the things that are on the horizon or where the technology is headed, what does the future hold? 

Elaine Lee: I think AI/ML systems will get better at detecting novel threats. Kind of touching on something I said earlier about generative types of models - if the quality ends up being good enough, the outputs from those generative models could be used to inform new AI/ML systems for defense. So that could be where we're headed. 

Dave Bittner: In terms of people integrating this into their security defenses, how should they best calibrate the part that artificial intelligence plays? 

Elaine Lee: It varies by organization, by team, by company, by culture. So the best advice I can give is to just make sure the human is in the loop. Make sure you have a human that's involved in the calibration. Also, understand the organization that you're trying to protect. The IT admin is definitely the best person for that. That would then be their job - to understand the population, the group of users that they're trying to protect, and therefore utilize the tools that are available to them, be they AI/ML-enabled or just your ordinary cybersecurity tools. Just utilize them effectively to protect your team. It's not one-size-fits-all advice, but maybe the only general advice I can give is to make sure the humans remain in the loop and can calibrate and adapt their security solutions to meet the needs of their teams. 

Dave Bittner: That's Elaine Lee from Mimecast. 

Dave Bittner: And joining me once again is Betsy Carmelite. She is a principal at Booz Allen Hamilton, as well as being the federal attack surface reduction lead. I want to touch base today on software bills of materials, also known by the catchy name SBOMs. I wanted to get your take on where we stand with them and what are some of the things they're potentially going to do for us? 

Betsy Carmelite: Yeah. Thanks, Dave. Glad to be back. Following cyber events like SolarWinds and Log4Shell, SBOMs have gained massive attention as a solution for warding off supply chain attacks. And so just for a definition of terms and understanding, an SBOM is, like, an itemized receipt for software. It gives software producers, buyers and operators a greater understanding of the supply chain so they can better track down vulnerabilities and risks, enable security by design, and make informed choices about software supply chain logistics and acquisition issues. 

Dave Bittner: So in terms of integration, I mean, what do organizations have to do to be in compliance here? 

Betsy Carmelite: Yeah. To envision how SBOMs fit into an effort to counter software supply chain risks in a more integrated way, we recommend starting with a well-known military symbol. We've created a framework around this - the trident. It represents the cross-functional effort needed to counter software supply chain threats. If you look at the trident, the longest prong in the center is a set of techniques used for hunting advanced persistent threats. And on each side is another prong - SBOM implementation on the left and augmented data risk management on the right. Tips for wielding this framework include, first, ensuring your organizational policies and procedures allow for fast-moving implementation of the framework across software that touches all segments of your organization, and applying it consistently. Second, engage your employees regularly around ways to detect malicious activity through cyber awareness programs - this is really your basic cyber hygiene approach; always educate and inform. Third, use APT-hunting techniques to discern vulnerabilities in tandem with your SBOM analyses, and then take the data collected from the SBOM analysis process and incorporate it directly into your risk management processes. Finally - we can't forget; we talk about it all the time - adopt the SBOM concept in concert with a zero trust security mindset. 

Dave Bittner: So is the notion here with SBOMs that, you know, when the next Log4j comes out - something that's deep within the code of things I might be using - I can just look through that software bill of materials and see whether or not I've got a problem? 

Betsy Carmelite: Well, this is going to be one of the challenges. Cataloguing and understanding all of the information that SBOMs contain is really going to be one of the first things agencies will need to do. To find that software, you're really going to have to make sure your cataloguing is accurate and valid. So we see that as one of the challenges. There are a couple of other challenges around implementing this, especially on the regulatory front. Just to back up a little, there is an entire section of the executive order devoted to software supply chain security. And we also expect forthcoming OMB guidance on secure development practices to make SBOMs the standard for vendor self-attestation. Basically, the OMB guidance will be put into contractual terms by agencies. Depending on the release date of that guidance, we may see some proofs of concept appear before the end of fiscal year 2022. But that guidance is likely not to be overly prescriptive. 
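Once that cataloguing is accurate, the next-Log4j lookup Dave describes can be as simple as walking the SBOM's component list. Here is a minimal sketch against a CycloneDX-style JSON SBOM - the file name and the simplified affected-version range are assumptions for illustration:

```python
# Answer "do I have a problem?" by walking an SBOM's component list.
# A minimal sketch against a CycloneDX-style JSON file; the file name and the
# simplified affected range (CVE-2021-44228) are illustrative assumptions.
import json

VULNERABLE_NAME = "log4j-core"
AFFECTED_LOW, AFFECTED_HIGH = "2.0", "2.14.1"  # simplified affected range

def version_tuple(v: str) -> tuple:
    # Turn "2.14.1" into (2, 14, 1) for ordered comparison.
    return tuple(int(p) for p in v.split(".") if p.isdigit())

with open("sbom.cyclonedx.json") as f:  # hypothetical SBOM file
    sbom = json.load(f)

for comp in sbom.get("components", []):
    if comp.get("name") == VULNERABLE_NAME:
        v = comp.get("version", "")
        if version_tuple(AFFECTED_LOW) <= version_tuple(v) <= version_tuple(AFFECTED_HIGH):
            print(f"problem: {VULNERABLE_NAME} {v} is in the affected range")
        else:
            print(f"ok: {VULNERABLE_NAME} {v} is outside the affected range")
```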

Betsy Carmelite: Going back to some of the challenges there - right now, agencies don't have the staff to meet OMB's guidance and requirements. There's the cataloguing, again, and there's cyber fatigue - there are constantly new developments rolling out and new guidelines to follow, and it can be difficult to keep up with every one. We've already seen OMB require agencies to comply with NIST's secure software development framework, which was precipitated by SolarWinds in 2020. Software vendors will eventually be expected to prove their compliance with that NIST framework, and vendors prefer self-attestation rather than third-party verification. So for the SBOM to be standardized and practically used, it's going to require deadlines for vendors and a concrete process that can be reapplied over and over again for using the information those SBOMs contain. 

Dave Bittner: And what sort of timeline do you suppose that we're on here? 

Betsy Carmelite: So we've seen the Biden administration at least put a clear timeline in place for complying with the adoption of zero trust and other measures. We expect that whatever timeline is set will be practically applied, and it's going to be helpful for federal contractors to have enough time to budget for further changes. So we're expecting that guidance to come out at some point within this fiscal year. 

Dave Bittner: All right. Well, Betsy Carmelite, thanks for joining us. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our daily briefing at thecyberwire.com. The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Tre Hester, Brandon Karpf, Eliana White, Liz Irvin, Puru Prakash, Justin Sabie, Rachel Gelfand, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.