Terms of service and GDPR. LastPass breach update. GhostWriter resurfaces in action against Poland and its neighbors. Cellphones, opsec, and rocket strikes.
Dave Bittner: Ad practices draw a large EU fine and may set precedents for online advertising. Updates on the LastPass breach and on Russian cyber activity against Poland. Malek Ben Salem from Accenture explains smart deepfakes. Our guest is Leslie Wiggins, program director for data security at IBM Security, on the role of the security specialist. And cell phones, opsec and the Makiivka strike.
Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Wednesday, January 4, 2023.
Ad practices draw a large EU fine (and may set precedents for online advertising).
Dave Bittner: Meta's advertising practices have drawn fines of roughly $414 million (390 million euros) from European authorities. Meta is the corporate parent of Facebook, Instagram and WhatsApp. The Wall Street Journal reports that at issue were Meta's behavioral ads, which pitched specific ads to users based upon Meta's tracking of the users' online activity. Ireland's Data Protection Commission, which oversees activities of U.S. companies on behalf of the larger European Union, announced the conclusion of its two investigations and the fines. In summary, the Data Protection Commission found that Meta Ireland violated transparency obligations under the General Data Protection Regulation by not clearly outlining to users the legal basis for processing their personal data. The DPC also found that Meta Ireland did not rely on consent as a lawful basis for processing personal data, instead relying on contract as the legal basis for processing personal data in connection with the delivery of its personalized services. The DPC imposed substantial fines on Meta Ireland and directed the company to bring its processing operations into compliance within a short period of time.
Dave Bittner: The New York Times reports that Meta disputes the findings and intends to appeal the fines. It maintains that its targeted advertising properly respects GDPR and that the terms of service it asks its users to accept constitute proper consent to tracking. Litigation obviously isn't over, but online platforms should look to their terms of service. The large print giveth and the small print taketh away, as Mr. Tom Waits has taught us and as most fair-minded people understand. But still, there are going to be limits on what that long document is going to cover. You know that document - the one you impatiently clicked through, saying you'd read it when you actually hadn't. That long document may not be enough to constitute a contract anymore.
Updates on the LastPass breach.
Dave Bittner: You've likely heard that password manager LastPass was victimized in a data breach that included customer data, including password vaults. SecurityWeek reports that the breach occurred in August of last year, when hackers got into the LastPass network and returned later to steal customer information. The threat actor is said to have copied a backup of customer vault data, which reportedly contains both unencrypted data, such as website URLs, and fully encrypted sensitive fields, such as website usernames and passwords, secure notes and form-filled data. HackRead reports that the threat actor also stole technical data and source code from the development environment.
Dave Bittner: Almost Secure discusses the LastPass breach and disclosure, speculating that the near-holiday timing of the disclosure was not coincidental but may have been intended to keep news coverage of the incident low. The disclosure, Almost Secure says, seems like LastPass' attempt to minimize potential litigation risk while avoiding drawing attention to itself and causing a public outcry.
Dave Bittner: The British consumer site Which? says that LastPass customers should ensure that their master password isn't used elsewhere and is more complex than the passwords they customarily use. LastPass doesn't store or maintain master passwords, so only brute force would give threat actors access to them; if you're a LastPass user, only you know your master password. The company describes this as its zero-knowledge architecture. It also recommends changing the passwords on websites whose credentials were stored in the manager.
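Dave Bittner: For the technically curious, the zero-knowledge idea works like this: the key that encrypts the vault is derived on the client from the master password, so an attacker holding a stolen vault backup can only guess passwords offline, one candidate at a time. Here's a minimal sketch of that pattern using Python's standard library - the parameters, salt choice and function names are illustrative, not LastPass' actual implementation.

```python
import hashlib

def derive_vault_key(master_password: str, email: str, iterations: int = 600_000) -> bytes:
    # Key derivation happens entirely on the client; the server never sees
    # the master password or the derived key (the "zero-knowledge" property).
    # Iteration count and salt choice are illustrative assumptions.
    return hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        email.lower().encode("utf-8"),  # per-user salt; illustrative choice
        iterations,
    )

# The derived key would then encrypt the vault's sensitive fields with AES.
# An attacker with the ciphertext must rerun this slow derivation for every
# password guess, which is why a long, unique master password matters.
key = derive_vault_key("correct horse battery staple", "user@example.com")
assert len(key) == 32  # 256-bit key
```

A stronger, unique master password raises the cost of each offline guess, which is exactly the advice above.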
Update on Russian cyber activity against Poland.
Dave Bittner: The threat group Ghostwriter has resurfaced in phishing campaigns against Polish targets, according to authorities in Warsaw. BleepingComputer reports that the Russian hackers set up websites that impersonate the gov.pl government domain, promoting fake financial compensation for Polish residents allegedly backed by European funds. The goals of the campaign are believed to be intelligence collection and disinformation. The EU has linked Ghostwriter to Russia's GRU military intelligence service. Mandiant has also discerned a connection to Belarusian services. Ghostwriter has long specialized in impersonation, especially impersonation of NATO members located along the Atlantic Alliance's Eastern front, an area in which Russia takes a proprietary interest. The countries there are either former Soviet republics, like the Baltic states, former members of the Warsaw Pact, like Poland, or former provinces of the old Russian empire, like Finland. A very long historical memory is informing the Russian outlook on the special military operation.
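Dave Bittner: Impersonation campaigns like Ghostwriter's typically register domains a character or two away from the legitimate one. One simple defensive idea - a sketch, not a description of any actual Polish government control - is to flag domains that sit within a small edit distance of a trusted domain.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical allowlist of legitimate domains to protect.
TRUSTED = {"gov.pl"}

def looks_like_impersonation(domain: str, max_distance: int = 2) -> bool:
    # Flag domains that are close to, but not exactly, a trusted domain.
    return any(0 < edit_distance(domain.lower(), t) <= max_distance for t in TRUSTED)

assert not looks_like_impersonation("gov.pl")  # the real domain
assert looks_like_impersonation("g0v.pl")      # one-character lookalike
```

Real-world detection would also account for homoglyphs, subdomain tricks and newly registered certificates, but edit distance catches the crudest lookalikes.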
Cellphones, opsec, and the Makiivka strike.
Dave Bittner: And finally, the Wall Street Journal reviews the mistakes that led to the Russian disaster in Makiivka, among them concentrated administrative troop billeting, storage of ammunition adjacent to the billets and generally poor operations security, manifested in undisciplined use of cell phones and failure to camouflage. The Journal quotes retired U.S. Army Lieutenant General Ben Hodges, a former commander of U.S. Army forces in Europe, as saying, the Russian military is not a learning organization. To learn, first you have to acknowledge that you were wrong, and that's not the culture. Phones collect a lot, whether the users think about it or not. Put down that smartphone, troop - or not. When officers don't care about their troops, the troops cease to care about the rules. And that's been the story of the Russian army in its war against Ukraine.
Dave Bittner: Coming up after the break, Malek Ben Salem from Accenture explains smart deepfakes. Our guest is Leslie Wiggins, program director for data security at IBM Security, on the role of the security specialist. Stay with us.
Dave Bittner: Leslie Wiggins is program director for the data security product management team as part of IBM Security. We spoke about the evolving role of the data security specialist and how people in that position collaborate with others in their organization.
Leslie Wiggins: So what led us to this point is the maniacal focus on sensitive data and the value that it has - whether that's regulated data or sensitive data like intellectual property - and the fact that attackers, whether they are external actors who are up to no good or malicious insiders, go after that data because it is so valuable. So a data security program gives you the ability to understand risk to that data, to see how privileged users who should have access to that data - or other people who shouldn't - are trying to access it, and to take action to protect it so that it isn't accidentally viewed by somebody who shouldn't see it, copied and removed from the organization or, you know, breached in any other kind of way.
Dave Bittner: And how does their role work within an organization? Are they generally collaborative, or do they end up being adversarial with certain groups? You know, where do they stand within the organization?
Leslie Wiggins: So I wouldn't say they're adversarial with certain groups. But historically, data security teams have been quite siloed and sort of focused within on their own activities and being able to produce that compliance report for an audit or being able to make sure that they have that real-time view of what's happening to sensitive data and automatically take action to protect it. But the piece that's been missing for a long time has been - for example, there is likely - almost always - a SOC within an organization, where security analysts are looking at the bigger picture of security across an organization. And for a long time, either data security pieces were not shared with that SOC or everything was shared in a language that those security analysts and the SOC did not understand.
Leslie Wiggins: So stuff might have been shared. And at that point, it was all being shared - too much sharing - and causing, you know, a bit of chaos in the SOC. So it would either tend to be one scenario or the other scenario. So what things have evolved to today is a much smarter and much more integrated sharing of data so that only things from the data security program that are of the highest risk - that are the most useful for that SOC and for that security analyst to have - now can get shared over with that environment. And it's shared in a simple way these days. They should, you know, be talking about the who, the what, the where, the when. It's like reporting - right? - or running the podcast. So the data is now - or the insight is now shared with the SOC in a language that the security analysts can understand as well.
Dave Bittner: What makes an ideal data security specialist - in terms of their background, their knowledge, their mindset, their disposition?
Leslie Wiggins: A patient disposition is something that they would need. But they tend - or they have tended in the past - to be very focused on compliance because that has been the thing that will cause an organization potentially to be fined if they fail an audit, for example, or struggle to meet a deadline for an audit. And so they have been very focused in the past on the bits and the connections and the environment and making sure they can get access to that data that's been stored for a year or two and put it together in an auditor-friendly way. And that takes a lot of patience. And historically, it's taken a lot of time because data security tools haven't been built in the past to retain data over long periods of time so that it's sort of hot and readily available to produce an audit report, for example.
Leslie Wiggins: So as the technology has changed so that, you know, we've all gotten, you know, more elastic and able to retain data for longer - now that that is a reality for a data security team, they don't have to spend their time so much on those back-end - hooking things up, finding the data, getting it together. They're able to now focus more on the data security side of the house, understanding, where is my data exposed? How exposed is it? Where should I be investigating an anomaly? Because there was something that happened within a data source that had - I can see it had a lot of classified data in it. It had a lot of database vulnerabilities. That was a really significant anomaly that occurred. It was, you know, maybe a SQL injection that's showing up. I should prioritize investigating that thing and making sure that data is protected and hasn't been breached or leaked somehow, rather than trying to cobble things together. So the - that role that we were talking about a minute ago of the data security specialist is changing to one where they can add more value and demonstrate even more value to the business.
Dave Bittner: And when you see organizations doing this right - doing it well - who's taking the lead here for the collaboration, typically?
Leslie Wiggins: Where that scenario works best is where you have savvy leadership that is bringing together the data security side and the SOC side to make sure that they are cross-pollinating and cross-sharing and being as efficient as possible across both of those pieces to better enable the whole and to better protect the business.
Dave Bittner: That's Leslie Wiggins from IBM Security.
Dave Bittner: And joining me once again is Malek Ben Salem. She's the managing director for security and emerging technology at Accenture. Malek, it is always great to welcome you back to the show. I want to touch base with you today on deepfakes and particularly this, I guess, is it fair to say a subset called smart deepfakes?
Malek Ben Salem: Yeah, absolutely. We've seen some new developments in AI in general, with the advent of large language models and the advances in computer vision models, that are driving, basically, a new category of deepfakes that we can call smart deepfakes. If you think about the advancements in chatbots and how interactive they became and how very, you know, plausible - the very real, authentic conversations you can have with them - you can think of ways of creating deepfakes that look much more real - right? - that you interact with over time.
Malek Ben Salem: So it's not just, you know, a video that you watch passively, but you can think of a deepfake that you interact with - you know, say, an avatar on the metaverse or whatever, right? But this is a persona that looks real - right? - with the right face, etc., that talks to you and that you interact with, but it's all completely fake. So what's driving that is, No. 1, these, you know, chatbots or models that are built on large language models that are becoming very, very good. And the other advance is the ability to create videos - fake videos, obviously - just through a prompt, right?
Malek Ben Salem: You can - there are models out there today where you can type in what you want to see in the video, and it will create the video for you completely. So you can say, I want a persona, or I want, you know, to see Dave doing this and that, right? And the video will be created for me. And you can make that video as long as you want it just by feeding it some more information about what you want to see in the video. So those two advances or those two trends, I think, will generate a sort of deepfakes that are very believable - that look very real and that are interactive in nature.
Dave Bittner: Yeah. So really, the convergence of those two things make this possible. Yeah, I could see this being used for some sort of advanced chatbot - you know, customer support, those sorts of things. But also, I suppose this could take phishing to the next level. You could get a - you know, a FaceTime call or a video call from your boss, and it might not be your boss.
Malek Ben Salem: Exactly. And that's the big threat. These deepfakes will be very believable. I mean, people - it will be very hard for people to recognize them as deepfakes. One of the things we've been training people on, you know, as deepfakes started, is to look at the face and the features of the face, etc. Then we started, you know, training people on looking at the context where the deepfake is located. But now, if these deepfakes are interactive - right? - if you can talk to them, that basically creates and takes things to the next level. It's hard to tell whether this is a deepfake video or not. And if you think about them being used in campaigns, you know, over time - so it may not be just one interaction with a conversation, but it could be, you know, multiple interactions over time to build that trust with the victim. Then, that becomes really, really hard to detect.
Dave Bittner: Yeah.
Malek Ben Salem: So it's - yeah, it's one more challenge we have to deal with.
Dave Bittner: Now, I've seen some demos of output detectors for some of these AI models - the chatbot models - where folks have spun up a way you can feed in the output and it'll tell you - it kind of gives you a little sliding scale of whether it thinks it's real or fake. Do we expect that sort of thing to be applied to this as well? Is that a possibility?
Malek Ben Salem: I think we'd have to build that, but that's not going to be enough. I think relying on technical tools to detect these deepfakes - detective ways - right? - is not going to be enough. I think we need to focus more on, you know, building watermarking, let's say, these videos up front - watermarking the real and authentic content that we are creating. We have not done much of that. I think we need to increase media literacy amongst the population. We need to focus more on authentic journalism and reporting to have - to counterbalance this amount of misinformation and disinformation that we may be served in the future.
Dave Bittner: Wow. All right. Well, it's a lot to think about - a lot to ponder. I'm glad we have folks like you out there working on it. Malek Ben Salem, thanks so much for joining us.
Malek Ben Salem: My pleasure, Dave.
Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at thecyberwire.com. The CyberWire podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Elliott Peltzman, Tre Hester, Brandon Karpf, Eliana White, Puru Prakash, Liz Irvin, Rachel Gelfand, Tim Nodar, Joe Carrigan, Carole Theriault, Maria Varmazis, Ben Yelin, Nick Veliky, Milly Lardy, Gina Johnson, Bennett Moe, Catherine Murphy, Janene Daly, Jim Hoscheit, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, Simone Petrella, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.