The CyberWire Daily Podcast 1.9.23
Ep 1735 | 1.9.23

Social engineering shenanigans, by both crooks and spies. Suing social media over alleged mental health damages. And how to earn an “F.”

Transcript

Dave Bittner: A Telegram impersonation affects a cryptocurrency firm. Phishing with Facebook termination notices. Russian phishing continues to target Moldova. The IEEE on the impact of technology in 2023. Glass ceilings in tech leadership. Seattle schools sue social media platforms. Malek Ben Salem from Accenture explains coding models. Our guest is Julie Smith, identity security leader and executive director at IDSA, with insights on identity and security strategies. And dealing with the implications of ChatGPT.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Monday, January 9, 2023. 

Telegram impersonation affects cryptocurrency firm.

Dave Bittner: Happy Monday, everyone. Great to have you all join us here today. SafeGuard Cyber this morning released a report detailing an observed instance of impersonation of a cryptocurrency firm on Telegram that may have been the activity of threat actor DEV-0139. In December 2022, Microsoft released research around a threat actor they've tracked as DEV-0139. The malicious actor is said to have joined Telegram groups used to facilitate communication between VIP clients and cryptocurrency exchange platforms and to have identified their target from among the members. The threat actor posed as representatives of another cryptocurrency investment company and in October 2022 invited the target to a different chat group, pretending to ask for feedback on the fee structure used by cryptocurrency exchange platforms. An Excel file sent by the actor contained malicious macros.

Phishing with Facebook termination notices.

Dave Bittner: Avanan released a report this morning detailing a phishing campaign impersonating Facebook for credential harvesting. The attack begins with an email appearing to be from Facebook saying that the victim's account has been suspended for violations of community standards. The victim is told they can appeal the decision within 24 hours or face permanent account deletion. The threat actor provides a link that appears to be from Meta but in actuality leads to a credential harvesting page. The attacker made the credential harvesting link believable, and the name of the victim's actual page was included in the email contents. Playing on urgency, the attacker hopes the victim sees a quick appeal as a reasonable way to prevent the impending loss of their account. The sender's email address, however, did not come from Facebook, but rather from a Gmail account. Alas, the wicked fleeth when no man pursueth. If someone wants you to do something now, now, now, well, maybe it's better to do it never, never, never and bong that bozo to the spam folder.

Russian phishing continues to target Moldova.

Dave Bittner: Since Russia's invasion of Ukraine, Moldova has felt more uneasy than any other country in the near abroad except Ukraine itself. There are too many parallels to Ukraine's situation for comfort. Like Ukraine, Moldova has received hostile Russian attention in cyberspace. Ukraine has seen factitious liberation movements seeking to detach Donetsk and Luhansk; Moldova has an even longer history of Russian-sponsored secession in Transnistria. The Record reports that Moldova's government has over the past week seen a surge in phishing attempts seeking to compromise official and corporate networks. These efforts have been accompanied by impersonation campaigns that misrepresent themselves as communications originating with senior Moldovan officials.

Glass ceilings in tech leadership.

Dave Bittner: A couple of items of selected reading for your consideration today. Connie Stack is CEO at security firm Next DLP, and she recently shared her thoughts in our monthly women in cybersecurity newsletter, Creating Connections. You can find a link to the newsletter and her article "Breaking the glass ceiling: My journey to close the leadership gap" in today's selected reading section of the show notes.

IEEE on the impact of technology in 2023.

Dave Bittner: Also in the show notes, we have a link to the IEEE Impact of Technology in 2023 and Beyond study. We hope you'll check them out. 

Seattle Schools sue social media platforms.

Dave Bittner: Seattle Public Schools has filed a lawsuit against the parent companies of TikTok, Instagram, Facebook, YouTube and Snapchat, claiming that the social media platforms have driven a rise in mental and emotional health issues among youth. The Seattle school district said in a statement that excessive social media use is harmful to young people, and social media companies have intentionally crafted their products to be addictive. 

Dave Bittner: Quoting the statement, "most youth primarily use five platforms - YouTube, TikTok, Snapchat, Instagram and Facebook - on which they spend many hours a day. Research tells us that excessive and problematic use of social media is harmful to the mental, behavioral and emotional health of youth and is associated with increased rates of depression, anxiety, low self-esteem, eating disorders and suicide. The evidence is equally clear that social media companies have designed their platforms to maximize the time youth spend using them, and addict youth to their platforms, as alleged in the complaint. These companies have been wildly successful in attracting young users. As of last year, almost 50% of teenagers in the state spent between one and three hours a day on social media, and 30% averaged more than three hours a day." 

Dave Bittner: The statement added that school districts lack the resources to keep up with the demand for mental health care, stating, school districts like Seattle Public Schools have been significantly impacted by the resulting crisis. Like school districts across the country, Seattle Public Schools' schools and school-based clinics are one of the main providers of mental health services for school-aged children in the community. But the school counselors, social workers, psychologists and nurses need greater resources to meet the high demand for services. 

Dave Bittner: Naturally, social media outfits don't think they're the villains here, and in truth, it is a tough problem. According to the AP, Snapchat's parent company, Snap, responded in a statement outlining the measures it's taken to provide mental health resources to users, stating, we will continue working to make sure our platform is safe and to give Snapchatters dealing with mental health issues resources to help them deal with the challenges facing young people today. Jose Castaneda, a spokesman for Google, YouTube's parent company, pointed to various parental controls available on YouTube, stating, we have invested heavily in creating safe experiences for children across our platforms and have introduced strong protections and dedicated features to prioritize their well-being. 

Dealing with the implications of ChatGPT. 

Dave Bittner: And finally, there have been all sorts of reports of the misuse, both actual and potential, of ChatGPT by various miscreants. Social engineering at scale, more alluring catphishing, even the automation of malware coding are all being reported. But we'll concentrate on what ChatGPT seems to mean in the ongoing range war between academic integrity and technological advance. The New York City Department of Education has banned ChatGPT on school devices due to concerns about plagiarism. Vox notes that the chatbot is able to write decent essays that can pass popular anti-plagiarism tools. The Daily Beast reports that students are already using the AI to complete writing assignments. Even if the service is technically banned by schools, it's difficult to see how such a ban could be enforced. 

Dave Bittner: Princeton student Edward Tian attempted to offer a solution to this dilemma by creating an app called GPTZero, designed to detect if an essay was written by a human or an AI. The Daily Beast explains that GPTZero uses perplexity and burstiness as metrics. Perplexity is a measurement of randomness in a sentence, and burstiness is the quality of overall randomness for all the sentences in a text. Human-written sentences generally vary in complexity, while bots usually create sentences that are consistently low-complexity. Edward Tian has already been approached by major venture capital firms interested in his product, and he acknowledges the usefulness of artificial intelligence in the right situations. But he notes that there are beautiful qualities of human-written prose that computers can and should never co-opt. 
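
For the technically curious, here is a minimal sketch of the two metrics described above, assuming an open causal language model from the Hugging Face transformers library (GPT-2, purely for illustration). GPTZero's actual implementation isn't public, so the model choice, the naive sentence splitting and the scoring below are assumptions for demonstration only, not the real tool.

# Illustrative perplexity/burstiness scoring sketch. Not GPTZero's code;
# the model and the sentence splitting are assumptions for demonstration.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    # Perplexity of a single sentence under the model; lower means the
    # text is more predictable (less "random") to the model.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

def score_text(text: str) -> dict:
    # Burstiness here is the spread of per-sentence perplexities: human
    # prose tends to mix simple and complex sentences, while generated
    # prose is often uniformly low-perplexity.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    perplexities = [sentence_perplexity(s) for s in sentences]
    mean = sum(perplexities) / len(perplexities)
    variance = sum((p - mean) ** 2 for p in perplexities) / len(perplexities)
    return {"mean_perplexity": mean, "burstiness": math.sqrt(variance)}

print(score_text("The cat sat on the mat. Quantum entanglement defies local realism."))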

Dave Bittner: Go Tigers, we say, and go Goedel, who proved that any deductive system at least as complex as the arithmetic of the natural numbers is either incomplete or inconsistent. That is, there are true statements that can't be derived from any finite set of premises. If you can derive them all, then your deductive system is either trivial or just freaking wrong, or so our logicians' desk tells us. Why don't you ask ChatGPT - extra credit if you ask for an answer in the style of Yogi Berra or Don King or The Dude from "The Big Lebowski." And, in the meantime, it occurs to us that Kurt Goedel was also at Princeton. So go Tigers, again. It's not for nothing you've got those cannonballs stuck in the walls of Nassau Hall. 

Dave Bittner: After the break, Malek Ben Salem from Accenture explains coding models. Our guest is Julie Smith, identity security leader and executive director at IDSA, with insights on identity and security strategies. Stay with us. 

Dave Bittner: Julie Smith is identity security leader and executive director at IDSA, the Identity Defined Security Alliance, which is a nonprofit founded by a group of identity and security vendors, solution providers and practitioners that, in their words, acts as an independent source of thought leadership, expertise and practical guidance on identity-centric approaches to security for technology professionals. They recently published a report, Tracking Trends in Securing Identity, and that's where my conversation with Julie Smith began. 

Julie Smith: According to our research - and this is the second year that we've published a trends report - 84% of organizations have experienced an identity-related breach. And that's sort of an astounding number, really. And in most cases, it has resulted in, you know, disruption or loss of revenue or costs associated with remediation. And, you know, at the same time, we're finding that 96% of organizations, you know, look back and say, well, those could have been prevented had we really put a focus on identity management, identity and access management and things - just basic things - and especially if you look in the headlines - like putting in place multifactor authentication. Yes, you know, there have been some high-profile breaches lately that have exploited that. But it does put a barrier up in front of the bad guys. 

Julie Smith: Another key area that organizations haven't focused on but need to is just the deprovisioning of accounts. So when an employee leaves the organization, what we found is that about half the organizations out there are deprovisioning those accounts on the day that employee leaves. But only 26% of them are doing it regularly. So, you know, just these accounts that may have extended privileges are floating around. And if someone gets a hold of that account, they've got valid credentials. They can log in, and they can do bad things. 

Dave Bittner: When folks are coming up short with this, what are the typical explanations for it? Is it a lack of funding or attention? Or why are we not hitting where we should be here? 

Julie Smith: I think it is a bit of a lack of attention. Organizations are now prioritizing it - again, back to the research, 64% have identified identity within their top three security priorities. But I think that's relatively new. In the past, identity management has been more about granting access and getting employees or even potentially partners access to resources so that they can be productive. And it's been considered an operational function in the past. And just now within the last couple of years, I would say that there is the cybersecurity focus on it and even to the point where it's becoming a board-level topic. 

Julie Smith: And individuals have so many different logins, usernames and passwords that they deal with, not just on a personal side but on a professional side as well. And we found that people are not taking care of and not protecting those credentials - whether it's sharing usernames and passwords, whether it's reusing passwords across both their personal and professional accounts. There are just some basic things that I think we as individuals can do not just to protect our personal identity but also our employer identity and employer infrastructure as well. So we kind of think of it as identity is everyone's responsibility. 

Dave Bittner: What are some of those things that folks can do that are easy to implement? 

Julie Smith: Yeah. I think from an organization perspective, you know, I mentioned MFA. That's top of mind. And MFA for all user types - so not just your employees; we're certainly seeing a lot of organizations starting to implement it for their customer-facing applications. You certainly need to do that for third-party access as well, and staying on top of privileged access. You know, as individuals move around the organization, privileges can creep, if you will. And, you know, thinking about it from a least-privilege perspective - so only give people the level of access that they absolutely need to do their job and then staying on top of those changes to access as individuals move around the organization. 

Julie Smith: If you experience an anomaly or believe that something suspicious is going on, you know, revoke that access immediately. Now, there are always challenges - if it's the CEO, for example. But in some cases, it's better to remove access for an identity that maybe is not behaving the way you would expect it to. A lot of organizations are now looking at the characteristics of a device as well to determine whether that device has been compromised or not. 

Dave Bittner: When you look towards the future, toward the horizon, where do you suppose we're headed here? Are we someday going to shed usernames and passwords and move on to something more secure? Where do you suppose we're heading? 

Julie Smith: I think we are definitely headed in the right direction from a passwordless perspective. You know, there's standards being put forth by the FIDO Alliance, for example, that helps with passwordless strategies. I think there is a tremendous amount of infrastructure, however, that, you know, organizations have built up over time. And there's an awful lot of technical debt and things that they need to be able to provide access to. You know, unfortunately, not everything is in the cloud at this point in time. So I think we're definitely headed in the right direction with passwordless, and hopefully we can get rid of usernames and passwords sometime in the near future. 

Dave Bittner: That's Julie Smith. She's executive director at IDSA, the Identity Defined Security Alliance. 

Dave Bittner: And I am pleased to be joined once again by Malek Ben Salem. She is the managing director for security and emerging technology at Accenture. Malek, it is always great to welcome you back to the show. I want to touch base with you today on some of the things that's going on with coding and some of the coding models that folks are taking advantage of. What can you share with us today? 

Malek Ben Salem: Yeah. So, Dave, with the advent of GPT-3 two years ago, you know, that was the one model that was used - a large language model that we used to generate language and - you know, English language, and then it was expanded to other languages, etc. We've seen similar models being trained to write code - the same approach. It could be, you know, Java code, JavaScript code, etc. A number of coding models have been created for various programming languages. And so these models are being used to help developers write code. They can be deployed in - made available to developers to help them basically predict the next word in their code and help them complete, you know, the function or the code line. Or in some cases, they have been at least tested to write code completely autonomously. 

Malek Ben Salem: And I think the reason I want to talk about this topic is for clients who are considering using these coding models, I think, you know, they have proven that they can bring efficiencies when they are being used as programming pairs, if you will, with the developers. But they are not as effective if they work autonomously. And we've seen, you know, several deployments or several studies that have demonstrated that they can bring these efficiencies if we let the human in the loop stay in the loop and if we let the human review the code before it gets deployed. 
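
As a concrete illustration of the "programming pair" pattern Ben Salem describes, here is a minimal sketch that asks an open coding model to propose a completion for a developer to review. The model name (Salesforce/codegen-350M-mono) and the prompt are illustrative assumptions, not a recommendation of any particular product or a description of any specific client deployment.

# Sketch of LLM-assisted code completion with a human in the loop.
# The model and prompt are illustrative assumptions only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = 'def is_valid_email(address: str) -> bool:\n    """Return True if address looks like an email."""\n'
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=64,            # only complete the function body
        do_sample=False,              # deterministic suggestion for review
        pad_token_id=tokenizer.eos_token_id,
    )

suggestion = tokenizer.decode(output[0], skip_special_tokens=True)

# Human in the loop: show the suggestion for review rather than committing it
# automatically - the usage pattern the studies mentioned above found effective.
print("--- proposed completion (review before use) ---")
print(suggestion)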

Dave Bittner: Yeah. It makes me think about, you know, organizations that are required to include SBOMs - you know, software bill of materials. And how does this play into that reality? 

Malek Ben Salem: Yeah - very good question. I think for the companies who are thinking about building their own coding models - and, you know, some clients or some organizations are thinking about that - it's very important to think about the quality of the code that is being used to train these models. If that code is not written in a secure manner, if that code is being used to train the model and we don't know the quality of the code and we don't know that it's safe, then you end up with a model that is compromised, that may write and spit out code that has security vulnerabilities. It's not the software, as with an SBOM - now the data basically is your input. The coding data that is being used to train the model carries the vulnerability inherently within it. And another way is if these models - you know, if people start using these coding models that are available to them without knowing enough about how they have been trained, then that presents another type of risk. And it's not just a security risk. That exists, obviously. But also, there are potential legal risks about how to use this model. Many of these models have been trained with open-source code or code in open-source repositories. And, you know, you can think about - the question is, is that a fair use of the coding model... 

Dave Bittner: Right. 

Malek Ben Salem: ...Once - you know, once it was used with code that is open to the public? Is it a fair use, or is there not enough in the coding model to justify labeling the model as a fair-use model? I think there are legal risks that have not been clarified - that do exist - that organizations have to be aware of before they start adopting these types of coding models. 

Dave Bittner: Yeah. This is fascinating to me because I wonder to what degree can the AI copy someone else's code or to what degree can it be inspired by someone's code. Is it capable of a creative act? 

Malek Ben Salem: Exactly, exactly. And I don't think we know enough about the - how these types of models have been trained to know or to make an assessment whether it's a creative act or whether it's a copying act, if you will. 

Dave Bittner: Yeah. 

Malek Ben Salem: And we don't have any court cases that we can use as a reference, you know, to even guide us in that assessment. 

Dave Bittner: So what's your advice for folks who are thinking of wading into these waters - any words of wisdom here? 

Malek Ben Salem: Generally, it's probably more cautious to train your own model using your own code so you know that you own the code and you know the derivative model out of that code. But also, make sure that it is trained using code that is secure, that does not - obviously does not have any bugs but also does not have any security vulnerabilities. Otherwise, you'll keep recreating those security code vulnerabilities in the code generated by these models. 

Dave Bittner: Yeah. Every time we push something out to production, it's full of back doors. I can't understand it. 

(LAUGHTER) 

Malek Ben Salem: Oh, yeah. It's the - yeah - same issue we have to deal with. So... 

Dave Bittner: (Laughter). 

Malek Ben Salem: The more you invest in security up front, the better off you are. 

Dave Bittner: Yeah. All right. Well, Malek Ben Salem, thanks so much for joining us. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at thecyberwire.com. Don't forget to check out the "Grumpy Old Geeks" podcast, where I contribute to a regular segment called Security, HAH. I join Jason and Brian on their show for a lively discussion of the latest security news every week. You can find "Grumpy Old Geeks" where all the fine podcasts are listed. 

Dave Bittner: The CyberWire podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Tre Hester with original music by Elliott Peltzman. The show was written by John Petrik. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.