Data breach at the US Marshals Service. Blind Eagle phishes in the service of espionage. Dish investigates its outages. Qakbot delivered via OneNote files. Memory-safe coding.
Dave Bittner: The U.S. Marshals Service sustains a data breach. Blind Eagle is a phish hawk. Dish continues to work toward recovery. OneNote attachments are used to distribute Qakbot. Ben Yelin has analysis on the Supreme Court's hearing on a Section 230 case. Mr. Security Answer Person John Pescatore has thoughts on ChatGPT. And CISA Director Easterly urges vendors to make software secure by design.
Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, February 28, 2023.
US Marshals Service sustains a data breach.
Dave Bittner: A data breach has been reported at the U.S. Marshals Service. NBC News correspondent Tom Winter broke the news in a tweet thread yesterday evening. Drew Wade, a Marshals Service spokesperson, said the affected system contains "law enforcement sensitive information, including returns from legal process, administrative information and personally identifiable information pertaining to subjects of USMS investigations, third parties, and certain USMS employees." The February 17 discovery of what Wade calls "a ransomware and data exfiltration event" affecting a standalone USMS system led to the disconnection of the affected system from the network. The USMS is actively investigating the attack as a major incident, BleepingComputer writes. Justice Department officials were briefed last Wednesday. The breach is said to have left the Witness Security Program, better known as the Witness Protection Program, untouched, USA Today reported in an update this morning.
Blind Eagle is a phish hawk.
Dave Bittner: BlackBerry has published a report on a threat actor, Blind Eagle, also known as APT-C-36. It's a South American cyber-espionage operation that has been working against targets in Ecuador, Chile, Spain, and Colombia since at least 2019. Its most recent activity has been directed primarily at Colombian organizations in the health, financial, law enforcement, and immigration sectors, as well as an agency in charge of peace negotiations in the country. The come-on in Blind Eagle's phishing emails depends upon fear and urgency. Recipients are told they have "obligaciones pendientes" - that is, outstanding obligations - with some of the messages warning that their tax payments are 45 days in arrears. The emails' phish hooks are usually malicious links. The phishing is conceptually simple, and Blind Eagle has persisted with it simply because it works.
Dish continues to work toward recovery.
Dave Bittner: Dish continues to grapple with what it characterizes as an internal system error. The Record notes that no specific information has so far come to light that would support early speculation that the incident arose from a cyberattack. TechCrunch has been in touch with the company, which said that Dish TV, Sling TV, and wireless service were all back up, and that investigation and remediation are in progress. "However, some of our corporate communications systems, customer care functions and websites were affected," company spokesman Wietecha said. "Our teams are working hard to restore affected systems as quickly as possible and are making steady progress." Dish's website this morning was still displaying the notice it has had up since the weekend: "We are experiencing a system issue that our teams are working hard to resolve."
OneNote attachments used to distribute Qakbot.
Dave Bittner: Armorblox describes a phishing campaign that's using OneNote file attachments to distribute the Qakbot banking Trojan. The phishing emails purport to come from a trusted vendor and ask the recipient to open a OneNote attachment that appears to be an invoice. Armorblox says victims are presented with a simple-bodied email designed to look like a follow-up to a previous discussion, and as they read it, they are prompted to open the attachment to review the details of an order that seems to have already been completed. Opening the file then executes VBScript code, which results in the installation of Qakbot.
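Since the delivery chain turns on a script payload smuggled inside the OneNote file, one straightforward defensive angle is to inspect attachments for script markers before they reach users. Here's a minimal sketch of that idea in Rust; the marker strings and the quarantine logic are our assumptions for illustration, not Armorblox's detection method or anything like a complete signature set.

```rust
// Sketch: flag OneNote attachments that appear to carry embedded script
// payloads. The substrings below are illustrative markers only.

use std::env;
use std::fs;

fn script_markers(bytes: &[u8]) -> Vec<&'static str> {
    // Lowercase the whole byte buffer for a case-insensitive search.
    let haystack = bytes.to_ascii_lowercase();
    ["wscript", "cscript", ".vbs", "powershell", "cmd /c"]
        .into_iter()
        .filter(|needle| {
            haystack
                .windows(needle.len())
                .any(|w| w == needle.as_bytes())
        })
        .collect()
}

fn main() {
    let path = env::args().nth(1).expect("usage: onescan <file.one>");
    let bytes = fs::read(&path).expect("could not read file");
    let hits = script_markers(&bytes);
    if hits.is_empty() {
        println!("{path}: no script markers found");
    } else {
        // In a real mail gateway this would route the file to quarantine.
        println!("{path}: quarantine candidate, markers: {hits:?}");
    }
}
```

A gateway check like this is crude (legitimate files can mention script names, and payloads can be obfuscated), which is why it would complement, not replace, user caution about unexpected invoice attachments.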
More perspective on Russia's hybrid war.
Dave Bittner: The "SpyCast" podcast has an interview with The Washington Post's Shane Harris, who encapsulates how conventional wisdom about Russia's hybrid war went astray. He said, at the outset, I believe that what we were looking at was probably a pretty swift Russian victory. They would come in. They would decapitate the central government in Kyiv in the first 72 hours. And it would be bloody, and it would be violent, but that Russia would prevail because they were deemed to have the superior military in terms of technology experience numbers. Turns out all those things were spectacularly wrong. The same goes for cyberspace. Check out "SpyCast" on the CyberWire network and hear more about the conduct and prospects of Russia's war.
CISA Director Easterly urges vendors to make software secure-by-design.
Dave Bittner: CISA Director Jen Easterly spoke yesterday at Carnegie Mellon University and outlined steps she urged vendors to take to build more inherent security into their products. One of her conclusions was that the burden of security shouldn't fall on the consumer. Since she was speaking at a university, she framed the issues in terms that suggest how advanced students might shape their studies and research to contribute. In particular, she offered four questions that are worthy of more general consideration. First, she asked, could you move university coursework to memory-safe languages? As an industry, we need to start containing and eventually rolling back the prevalence of C and C++ in key systems and putting a real emphasis on safety. Second, could you weave security through all computer software coursework? Third, how can you help the open-source community? And finally, could you find a way to help all developers and all business leaders make the switch? Memory-safe coding, then, is a technical, practical, and business issue, and it will take a push across all those areas to make software safer and more secure.
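To make the memory-safety point concrete, here's a small illustration of our own (not from Easterly's remarks). In C or C++, writing past the end of a buffer is undefined behavior and a perennial source of exploitable memory corruption. In a memory-safe language like Rust, the same mistake is caught by a runtime bounds check, and the program stops instead of silently corrupting adjacent memory.

```rust
fn main() {
    let mut buf = [0u8; 8];

    // Simulate an index that arrives from untrusted input at runtime.
    let index: usize = std::env::args()
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(12);

    // Every slice access in safe Rust is bounds-checked. With index 12,
    // this line panics ("index out of bounds: the len is 8 but the index
    // is 12") and the process stops, instead of overwriting whatever
    // happens to live next to `buf`, as the equivalent C write could.
    buf[index] = 0x41;

    println!("{buf:?}");
}
```

The point isn't that crashes are good; it's that a deterministic, contained failure is far preferable to the silent memory corruption that attackers turn into remote code execution.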
Dave Bittner: Coming up after the break, Ben Yelin has analysis on the Supreme Court's hearing on a Section 230 case. Mr. Security Answer Person John Pescatore has thoughts on ChatGPT. Stay with us.
Unidentified Person #1: Mister.
Unidentified Person #2: Security.
Unidentified Person #3: Answer.
Unidentified Person #4: Person.
Unidentified Person #1: Mister.
Unidentified Person #2: Security.
Unidentified Person #3: Answer.
Unidentified Person #4: Person.
John Pescatore: Hi. I'm John Pescatore, Mr. Security Answer Person. Our question for today's episode - you spent a lot of time as a Gartner analyst. If you were doing a Gartner Cybersecurity Hype Cycle today, where would you put the OpenAI ChatGPT chatbot that is getting so much press? Well, that's a timely question. I actually just used the ChatGPT chatbot, via The New York Times, to write my wife a romantic Valentine's Day card in the style of a pirate. She was not impressed. Next year I will go back to buying her roses. OK, let me do some 'splaining first.
John Pescatore: Unless you've been totally off the grid, you've probably heard some level of hype about OpenAI and ChatGPT. If not, Google it for detailed information. But it is essentially an example of what is called generative AI. Here's the one-line explanation the consulting firm McKinsey published for corporate executives: generative AI describes the algorithms, such as ChatGPT, that can be used to create new content, including audio, code, images, text, simulations, and videos. One more short definition for those not familiar with Gartner Hype Cycles, which Gartner started in 1995 and which were among the more fun research notes I did over my 14 years there. A Gartner Hype Cycle tracks and predicts technology issues from inception, or trigger point, to the peak of overinflated expectations, into the trough of disillusionment, then up the slope of enlightenment and, for some but not all, to the plateau of productivity. In August 2022, the Gartner Emerging Technologies Hype Cycle had generative AI at that initial trigger point.
John Pescatore: Over the years, AI has mostly been trapped in the trough of disillusionment. But ChatGPT actually passed the Turing test, fooling human readers into thinking they were chatting with another human. The public release of a demonstration website last November has led to an explosion of hype. From a cybersecurity perspective, there are two major things to think about: one, how will it be used against us? And two, how can we use it against the bad guys? First, a telling point to internalize. The workflow of AI is always this: one, human experts enter constraints and requirements; two, AI - lines of code, mostly written by humans - creates a bunch of stuff; and three, humans evaluate and select the useful stuff.
John Pescatore: Already, you can see how ChatGPT can be used to make it much easier to craft more real-sounding phishing messages and even simple malicious executables. This is much the way cloud computing made it easier for bad guys to launch distributed denial-of-service attacks. But cloud-based DDoS also made it easier to block the DDoS, and generative AI is going to follow that same trend, because in the hands of skilled cybersecurity folks it will be useful for faster generation of IOCs that are more than just glorified signatures, and for more useful tools for recognizing phishing text and malware created by generative AI. Imagine if, on the good-guy side, software development and pipeline platforms used generative AI to make sure code contained none of the OWASP top code or API vulnerabilities before allowing check-in of that software. That would be some real movement up the slope of enlightenment.
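As a thought experiment, the check-in gate Pescatore imagines might look something like the sketch below. Everything here is hypothetical: `ai_review` stands in for whatever generative-AI reviewer a pipeline would actually call (no such service is assumed to exist), and the finding format is invented for illustration.

```rust
// Hypothetical sketch of a generative-AI check-in gate. `ai_review` is a
// stand-in for a real model call; here it just flags one obvious pattern.

struct Finding {
    rule: String,   // e.g., "A03:2021 Injection"
    detail: String,
}

fn ai_review(diff: &str) -> Vec<Finding> {
    // A real gate would send the diff to an AI service and parse its
    // findings; this stub only spots naive SQL string concatenation.
    let mut findings = Vec::new();
    if diff.contains("\"SELECT") && diff.contains("+ user_input") {
        findings.push(Finding {
            rule: "A03:2021 Injection".to_string(),
            detail: "SQL query built by concatenating user input".to_string(),
        });
    }
    findings
}

fn main() {
    // A toy diff that concatenates user input into a SQL string.
    let diff = r#"+ let q = "SELECT * FROM users WHERE name = '" + user_input;"#;

    let findings = ai_review(diff);
    if findings.is_empty() {
        println!("no findings; check-in allowed");
    } else {
        for f in &findings {
            eprintln!("blocked: {} - {}", f.rule, f.detail);
        }
        // Nonzero exit fails the pipeline stage and blocks the merge.
        std::process::exit(1);
    }
}
```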
John Pescatore: So, to finally and directly answer your question: today I put generative AI as used by the bad guys at the peak of overinflated expectations, and as used by the good guys just starting off from the trigger point. As in chess, the bad guys have the white pieces and usually get to go first. But the first mover in chess does not always win. It is really only a slight advantage, and the difference in skill between players is the more accurate determinant of who will most likely win. The bottom line: like all technology, generative AI can be a force multiplier when skilled experts put it to use, or it can simply be a noise generator when unskilled users are at the controls. Use the hype over OpenAI's ChatGPT to make sure your management understands the need for machine learning understanding and skills in your security staff. Also, update your security awareness materials for users to emphasize that caution in clicking should be based on the consequences of the action, not just the believability of the email. My prediction is that even as fast as things seem to be moving, in February 2024 we will probably not be using generative AI to send our significant others Valentine's Day messages, or they will not be our significant others in...
Unidentified Person #1: Mister.
Unidentified Person #2: Security.
Unidentified Person #3: Answer.
Unidentified Person #4: Person.
John Pescatore: Thanks for listening. I'm John Pescatore, Mr. Security Answer Person.
Unidentified Person #1: Mister.
Unidentified Person #2: Security.
Unidentified Person #3: Answer.
Unidentified Person #4: Person.
Dave Bittner: Mr. Security Answer Person with John Pescatore airs the last Tuesday of each month right here on the CyberWire. Send your questions for Mr. Security Answer Person to questions@thecyberwire.com.
Dave Bittner: And joining me once again is Ben Yelin. He is from the University of Maryland Center for Health and Homeland Security and also my co-host over on the "Caveat" podcast. Hello, Ben.
Ben Yelin: Hello, Dave.
Dave Bittner: I know you recently spent just a scintillating afternoon listening to Supreme Court oral arguments in the Gonzalez v. Google case, which has to do with Section 230 here. Before we jump into what the Supreme Court had to say, just a quick overview. What is at stake here, Ben?
Ben Yelin: So there are actually two cases here, Gonzalez and Taamneh v. Twitter. For legal purposes, the cases are identical: victims or the families of victims of terrorist attacks suing online platforms for aiding and abetting terrorism through their use of algorithms. The Twitter case turns more on the specific definition of aiding and abetting, which is not as relevant for our purposes, so that's why we're focusing on Gonzalez v. Google, which is really about how far immunity under Section 230 extends to the activities of these Big Tech platforms. The allegation on behalf of Gonzalez's family - Gonzalez was a young lady who was killed in the 2015 terrorist attacks in Paris - is that YouTube and its parent company Google bear some responsibility for these acts of terrorism because of their algorithm that recommends videos. When you search ISIS videos on YouTube and you watch one of them, YouTube will actively recommend - at least this is the allegation - the next video based on what you've already watched, and in that respect, they are aiding and abetting terrorists. Now, Section 230 provides immunity to these companies for third-party content posted on the website. So both parties agree that you can't sue YouTube, or Google as its parent company, for the fact that ISIS videos exist on YouTube.
Dave Bittner: OK.
Ben Yelin: But the argument here is, can you sue them for the sort of recommendation scheme? And that turns on the question as to whether in recommending these videos, YouTube is acting simply as a publisher and is just organizing the videos in kind of a content-neutral way, or if this is an act of creative content, this is something that YouTube itself has created. The counsel for Gonzalez argued that the specific thumbnails that are created for these recommended videos are a mixed creation. It's the third party that has created the video, but it is Google and YouTube that have created the thumbnail and that they should be liable or they should not have immunity under Section 230 because they created that thumbnail.
Dave Bittner: And put it in front of the viewer.
Ben Yelin: And put it in front of the viewer through their algorithm.
Dave Bittner: OK.
Ben Yelin: The justices were very skeptical of that argument, I think, for both legal and practical reasons. The practical reason is that all of these tech companies would then panic about any algorithmic decision they make, including ones that seem completely innocuous. So one of the examples they gave is, what if a search engine like Google simply organized its results not by any algorithm, but alphabetically? If there were no immunity shield, people like me with the last name Yelin could sue for economic damages because my name always turned up last in the search results. And they think that would be a bad result for these internet companies - it would stifle creative content, et cetera. So they are very wary about cutting against Section 230 immunity for something that, at least to a layperson, doesn't seem like content that Google itself created.
Ben Yelin: The counsel for Google made an argument that was similarly poorly received by many of the justices. Their argument is that not only should that content-neutral algorithm - where you're simply making recommendations based on the videos somebody has watched - still confer immunity on the company, but that even in an extreme example where YouTube designed an algorithm specifically to promote terrorism, to promote ISIS videos, even in that extreme circumstance, there should be a liability shield because it's still just third-party content. Even if you are designing an algorithm that promotes ISIS videos, it's ISIS itself that created the videos, and therefore Google shouldn't face any sort of legal consequences, even in that extreme circumstance. And the justices were pretty skeptical of that argument as well. I think they were trying, sometimes through really probing questions, to determine where that line is.
Dave Bittner: Yeah. What did we hear from any of the individual justices here?
Ben Yelin: So Justices Gorsuch and Kavanaugh, I think, were particularly concerned about the practical effects of ending this immunity and what it would do to the industry. If I had to guess, they're going to come down more on the side of broad immunity for these Big Tech platforms. And that's really the status quo: lower-court cases have held that immunity under Section 230 extends to a lot of the organizing activities these platforms engage in when they're deciding which videos to put at the top of the list, right? Justice Jackson, who is the newest justice and one of the more liberal justices, I think is going to go in the other direction. She was taking a very textualist approach and looking at the original purpose of Section 230, which concerns decisions about whether to take down third-party content. And since this case, which is about algorithms and recommendations, doesn't relate to a direct decision about removing third-party content, I think she would not have Section 230 immunity apply to these types of activities. So I think she would be one vote in favor of Gonzalez.
Ben Yelin: The remaining justices are kind of in the murky middle, where, through really interesting and, I think, intelligent questions, they were trying to engage in a line-drawing exercise, and they did it through a bunch of different hypotheticals. So with the attorney for Gonzalez, they were talking about a scenario in which somebody goes into a bookstore and asks for a book related to sports, and they're directed, based on that question, to a table full of books about sports.
Dave Bittner: Yeah.
Ben Yelin: If this were an internet transaction, would that confer immunity on the equivalent of the bookstore here? And this really goes back to some of the original algorithms we saw in the '90s, like Amazon's - you bought this; will you like that? So I think the justices were skeptical of not extending immunity to those very basic publishing functions: here's something we think you want to see, based not on our own ideological desire of what we want you to see, but on what you have previously searched for. But there is a kind of parade of hypotheticals on the other side, too. The main one, which I already discussed, is, what if Google created an algorithm that specifically promoted terrorism? Justice Sotomayor came up with, I think, a really good hypothetical that was very difficult for the attorneys to answer: what if there were a dating site that created a discriminatory algorithm, so it wouldn't match Black users with white users, for example? Would that dating service have immunity? Because, ultimately, it's the third parties - the people who created the profiles - who have submitted the content.
Dave Bittner: Right.
Ben Yelin: The dating service would just be engaging in that kind of organizational publishing function. So I think the justices in the middle were having a really hard time figuring out the exact line between acting as a publisher and acting as the creator of content, and that makes it really difficult to handicap where they're going to come down in this case.
Dave Bittner: Well, time will tell, and we certainly will keep an eye on it. Ben Yelin, thanks for joining us.
Ben Yelin: Thank you.
Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at thecyberwire.com. The CyberWire podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Tre Hester, with original music by Elliott Peltzman. The show was written by John Petrik. Our executive editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.