The CyberWire Daily Podcast 1.31.23
Ep 1750 | 1.31.23

The cybercriminal labor market and the campaigns it’s supporting. Russia’s Killnet is running DDoS attacks against US hospitals, but Russia says, hey, it’s the real victim here.


Dave Bittner: Some perspective on the cybercriminal labor market. DocuSign is impersonated in a credential-harvesting campaign. Social engineering pursues financial advisers. Killnet is active against the U.S. health care sector. Mr. Security Answer Person, John Pescatore, has thoughts on cryptocurrency. Ben Yelin and I debate the limits of Section 230. And hey, who's the real victim in cyberspace? Here's a hint - it's probably not Mr. Putin.

Dave Bittner: From the CyberWire studios at DataTribe, I'm Dave Bittner with your CyberWire summary for Tuesday, January 31, 2023. 

Perspective on the cybercriminal labor market.

Dave Bittner: So imagine you're a lowlife looking for a career in the go-go world of cybercrime. Not that you are, of course, or that you would be, but pretend for a minute that you were. Where are you going to go? Where are the want ads? A study by Kaspersky describes the criminal labor market. Think of it as a kind of Indeed for the cybercriminal class. Kaspersky analyzed long-term and full-time job listings on 155 dark web forums from January 2020 through June 2022. They found the highest density of posted ads in March of 2020, likely because of the pandemic and the upheaval it brought to labor markets. Hackers and APT groups were the key employers, most often looking for developers, who accounted for 61% of the total job listings. 

Dave Bittner: The highest salary shown for a developer was $20,000 a month, though pay in the listings averaged between $1,300 and $4,000 a month for most IT professionals, with the highest pay going to reverse-engineering positions. That's the careerist stuff. The money mules and the others who do the grunt work for the bosses - that's more like the gig economy. But steer clear of the underworld, friends. Stay in school and stay out of trouble. And if you're in the U.S., well, NSA is hiring, and they like to bring you in young. Why is that? Well, for one thing, you're easier to clear - before you've acquired the crust of bad judgment and erratic behavior your elders are all schlepping around. 

Study: rapid technology implementation accompanied by application security risk.

Dave Bittner: Cisco AppDynamics has published a report looking at the increase in application security risks over the past several years. A survey they conducted found that 89% of technologists report that their organization has experienced an expansion in its attack surface over the last two years, and 46% state that this is already presenting increasing challenges. Most respondents believe the main reason for this increase is the rapid adoption of IoT devices, migration to the cloud and the dramatic increase in hybrid workplaces as remote work became more normal during the pandemic. Additionally, 92% of respondents admit that the rush to rapidly innovate and respond to the changing needs of customers and users during the pandemic has come at the expense of robust application security during software development. So make haste, innovators, but do it with deliberate speed and not in a mad rush. 

DocuSign impersonated in credential-harvesting campaign.

Dave Bittner: Cybersecurity firm Armorblox this morning detailed a new phishing campaign in which the hackers purport to be from DocuSign in an attempt to harvest credentials. The campaign begins with an email appearing to originate from DocuSign, with the subject line reading, "Please DocuSign: Approve Document 2023-01-11." The sender's name simply reads DocuSign, although neither the sender's email address nor its domain shows any connection with the legitimate DocuSign service. That mismatch, by the way, is one of the typical signs that betray a phishing attempt. The phish requests the review and signature of a document. If clicked, the view-completed-document button redirects to a malicious webpage. The page appears to be a Proofpoint login screen, though, in actuality, if you're incautious enough to enter your login credentials, they'll be harvested. The language in the subject line instills a sense of urgency in the victim. The attackers leverage the legitimacy of both DocuSign and Proofpoint to instill trust in those targeted. The accurate emulation of a DocuSign workflow also increases trust and the likelihood of a successful interaction, and the urgency of the request is intended to cloud your mind enough to swallow the bait. 
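The sender mismatch described above - a display name claiming a brand while the address domain belongs to someone else - can be sketched as a simple mail-filter check. This is a minimal illustration, not Armorblox's actual detection logic; the brand-to-domain mapping below is an assumption for the example.

```python
from email.utils import parseaddr

# Illustrative mapping of claimed brand names to domains they legitimately
# use. A real filter would draw on a much larger, maintained list.
TRUSTED_DOMAINS = {"docusign": {"docusign.com", "docusign.net"}}

def looks_spoofed(from_header: str) -> bool:
    """Flag a From: header whose display name claims a brand but whose
    sending domain doesn't belong to that brand."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    for brand, domains in TRUSTED_DOMAINS.items():
        if brand in display_name.lower():
            # Accept exact domain matches and legitimate subdomains only.
            if not any(domain == d or domain.endswith("." + d) for d in domains):
                return True
    return False

print(looks_spoofed("DocuSign <noreply@docusign.com>"))     # matching domain
print(looks_spoofed("DocuSign <billing@mail-review.xyz>"))  # mismatch, as in the campaign
```

A check like this catches only the crudest impersonation; it's one layer among many, since attackers can also compromise or look-alike legitimate domains.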

Social engineering pursues financial advisers.

Dave Bittner: Researchers at DomainTools describe another instance of the fraud technique known as pig butchering, in which a threat actor poses, in this case, as a financial adviser in order to build trust with the victim. Eventually, the scammer convinces the victim to invest in a phony cryptocurrency or other fraudulent venture. The researchers outline one of these scam campaigns based in West Africa that's targeted several hundred financial advisers. The attackers use LinkedIn and other professional networking services to research and contact their targets. They also advertise their services on TikTok, Instagram and other social media platforms. 

Dave Bittner: The scammers set up professional-looking websites, which are often modified versions of legitimate financial adviser pages. They use bulletproof hosting providers so their sites won't be taken down during the course of these lengthy scams. The attackers use live chat widgets on the sites to talk to their victims, then move the conversation to email or WhatsApp. They generally try to avoid talking to the victim over the phone, probably because the imposture is more obvious when there's a voice on the other end of the line. 

Killnet is active against the US healthcare sector.

Dave Bittner: At least 14 U.S. medical centers - among them, Duke University Hospital in North Carolina, Stanford Health Care and Cedars-Sinai in California, University of Pittsburgh Medical Center and Jefferson Philadelphia in Pennsylvania - were hit by distributed denial of service attacks yesterday, according to the Carolina Journal. The incidents are being attributed to the Russian cyber-auxiliary Killnet. The American Hospital Association warned its members yesterday that the hacktivist group Killnet has targeted the U.S. health care industry in the past and is actively targeting the health and public health sector. The group is known to launch DDoS attacks and operates multiple public channels aimed at recruitment and garnering attention from these attacks. This week's DDoS attacks seem to have been quickly contained and mitigated, which has normally been the case with earlier Killnet actions. 

Dave Bittner: An alert issued by the U.S. Department of Health and Human Services Health Sector Cybersecurity Coordination Center assessed the implications of the threat, stating, "Killnet has been using publicly available DDoS scripts and IP stressers for most of its operations." These tools have been on offer for some time in the criminal-to-criminal underground markets. Law enforcement organizations have been able to take down some of those services and indict some of the operators. But HC3 cautions that the threat's far from over, stating, "Despite this success, it remains unknown if and how this law enforcement action might impact Killnet, which turned its DDoS-for-hire service into a hacktivist operation earlier this year. Furthermore, it is likely that pro-Russian ransomware groups or operators, such as those from the defunct Conti Group, will heed Killnet's call and provide support. This likely will result in entities Killnet targeted also being hit with ransomware or DDoS attacks as a means of extortion, a tactic several ransomware groups have used." 

Russia insists that it's the real victim here.

Dave Bittner: And finally, we hear a lot about the virtual mayhem Russian criminals and intelligence services work around the world. We just went over some of Killnet's works. In fairness, there's another side to the story. It's not a very plausible side, but it is another side. 

Dave Bittner: TASS presents a very different picture of the cyber phases of Russia's hybrid war. Russia's deputy foreign minister says that the real victim is Russia; that what the Kremlin has taken to calling "the collective West" is behind it; and that Ukraine has lost its independence - which, presumably, Russia's aggressive war is out to restore - and has become nothing more than a jumping-off point for cyber and other attacks that the collective West is running against a beleaguered Russia. That's one way of looking at it. 

Dave Bittner: Coming up after the break, Mr. Security Answer Person John Pescatore has thoughts on cryptocurrency. And Ben Yelin and I debate the limits of Section 230. Stick around. 

Computer-generated Voice #1: Mr. 

Computer-generated Voice #2: Security. 

Computer-generated Voice #3: Answer. 

Computer-generated Voice #4: Person. 


John Pescatore: Hi, I'm John Pescatore, Mr. Security Answer Person. Our question for today's episode - I'm about to insert myself into trying to explain to two different management levels the recent news about cryptocurrencies collapsing. The CEO is asking because they are looking at a possible innovative use of blockchain to demonstrate fair trade in our supply chain. The CEO also wants to be prepared if the board asks about our exposure to something like the recent bankruptcy of the FTX exchange. Can you give me a starting point? 

John Pescatore: Timely question, given all the news photos of the FTX CEO doing the perp walk down in the Bahamas. I'll suggest some words for you eventually, but let me go deep for a bit before I do. First off, I never use the term cryptocurrency. I'll generally say virtual currency, with air quotes around currency. Others use digital currency, as you'll find in the Oxford Languages definition of cryptocurrency that you'll get if you ask Google for a definition - a digital currency in which transactions are verified and records maintained by a decentralized system using cryptography rather than by a centralized authority. 

John Pescatore: The reason I don't use the term cryptocurrency is that I have a big problem with that simplistic using-cryptography part of the definition. If you ask Oxford Languages to define cryptography, it comes back with the art of writing or solving codes. Yuck. If you look at the NIST glossary, you'll find cryptography defined as the discipline that embodies the principles, means and methods for the transformation of data in order to hide their semantic content, prevent their unauthorized use or prevent their undetected modification. Notice the Oxford definition called cryptography an art, while NIST said it takes discipline, principles, means and methods. Would you really want to base the liquidity of your business on transactions that trust a Cap'n Crunch cereal box decoder ring approach for codes? 

John Pescatore: One last point - here's the actual example that Oxford Languages uses for cryptocurrencies: "Decentralized cryptocurrencies such as bitcoin now provide an outlet for personal wealth that is beyond restriction and confiscation." Personal wealth that is beyond restriction and confiscation is not what CXOs and boards of directors should be spending investor resources on. I'm having so much fun with definitions. Let me throw in two more important ones. First, ledger - a ledger is a book or database in which double-entry accounting transactions are stored and summarized. This ledger is the central repository of information needed to construct the financial statements of an organization. It is also a key source of information for auditors. 

John Pescatore: And finally, a definition for blockchain - blockchain is a distributed digital ledger of transactions, digitally signed using verified cryptography, that are grouped into blocks. Each block is cryptographically linked to the previous one, making it tamper-evident after validation and undergoing a consensus decision. OK, thanks for bearing with me. I think you get the idea. 
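The "cryptographically linked" part of that definition can be made concrete with a toy hash chain: each block records the hash of its predecessor, so changing any earlier transaction breaks every later link. This is a minimal sketch of the linking idea only; a real blockchain adds the digital signatures and consensus mechanism the definition mentions.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def chain_is_valid(chain: list) -> bool:
    """Re-derive each link; any edit to history makes a link mismatch."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False  # the chain is tamper-evident: the break is visible
    return True

ledger = []
append_block(ledger, [{"from": "A", "to": "B", "amount": 5}])
append_block(ledger, [{"from": "B", "to": "C", "amount": 2}])
print(chain_is_valid(ledger))                  # intact chain validates
ledger[0]["transactions"][0]["amount"] = 500   # tamper with history
print(chain_is_valid(ledger))                  # later link no longer matches
```

Note that this tamper evidence is exactly the property that makes blockchains useful for trustable records of business transactions, independent of any currency attached to them.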

John Pescatore: Based on all that, here's a paragraph for you to use with management. Blockchain technology using verified cryptography is a very effective and efficient means of delivering trustable records of business transactions. Virtual currencies can - but don't always - use blockchain technologies to attempt to enable new payment systems. Virtual currencies also rarely have any actual backing to establish or maintain the value of a unit of their currency, and they rarely result in a cost per transaction that is meaningfully lower than traditional payment systems. The major attraction of so-called cryptocurrencies has been to individuals and groups interested in evading legal government visibility into their transactions. Let's hope your CEO or board decides they are not in the business of evading legitimate government visibility. 

Computer-generated Voice #1: Mr. 

Computer-generated Voice #2: Security. 

Computer-generated Voice #3: Answer. 

Computer-generated Voice #4: Person. 

John Pescatore: Thanks for listening. I'm John Pescatore, Mr. Security Answer Person. 

Computer-generated Voice #1: Mr. 

Computer-generated Voice #2: Security. 

Computer-generated Voice #3: Answer. 

Computer-generated Voice #4: Person. 

Dave Bittner: Mr. Security Answer Person with John Pescatore airs the last Tuesday of each month right here on the CyberWire. Send in your questions for Mr. Security Answer Person to 

Dave Bittner: And joining me once again is Ben Yelin. He is from the University of Maryland Center for Health and Homeland Security and also my co-host over on the "Caveat" podcast. Hello, Ben. 

Ben Yelin: Hello, Dave. 

Dave Bittner: Interesting article from the folks over at The Daily Dot - this is written by Jacob Seitz, and it's titled "Tech experts ask Supreme Court to rule that Section 230 protections apply to algorithmic recommendations." What's going on here, Ben? 

Ben Yelin: So they wrote this amicus brief, friend of the court brief, for a Supreme Court case that's going to be heard sometime in the fall of 2023. 

Dave Bittner: And who's they? 

Ben Yelin: This is from a group called the Center for Democracy and Technology, and that is a tech advocacy nonprofit. And six other tech experts joined their amicus brief... 

Dave Bittner: OK. 

Ben Yelin: ...In this case. So the case is called Gonzalez v. Google. And it comes from the family of a woman who was killed in the 2015 Paris terrorist attacks. The family in that case is arguing that through recommended videos on YouTube - and Google is the parent company over YouTube, which is why they're the named defendant in this case - users were being shown ISIS recruitment videos, and therefore, the company is partially responsible for the death of their daughter. Now, with our current understanding of Section 230 of the Communications Decency Act, companies are shielded from liability based on the content that is posted on their platform, and therefore, this suit should be dismissed. What the plaintiffs are arguing here is that Section 230 should not cover what they refer to as recommended content. So based on the videos we watch on YouTube, they recommend videos according to their algorithms, generally things that are similar to the interest that we've expressed through our searching of their video databases. 

Dave Bittner: Right. 

Ben Yelin: So this group is writing to argue that Section 230, the liability shield, should be extended even to recommended videos. They say that the court should rule in Google's favor for the disposition of this case. And they lay out a couple of reasons. One is that the case is treating Google as a publisher and not a provider. So I'm quoting here from the brief. "At issue in Gonzalez is whether Section 230 shields Google from liability for allegedly recommending ISIS content posted on YouTube to other users. Petitioners in this case argue that Section 230, which shields intermediaries from liability for publishing third-party content, applies only to claims based on the display of content, not the recommendation of content." 

Ben Yelin: But that distinction is unworkable. If liability is based on recommendations versus targeted recommendations, then crucial tools for content moderation might become legally discouraged. Companies would have to shut down some of the algorithms they use to generate recommended content for users, which would not only hurt the user experience, but might cause them to overregulate and try and knock out certain categories of perhaps politically protected speech from ever appearing on lists of recommended content. And that could be an overbroad inhibition on First Amendment rights. So I think this could potentially be a persuasive amicus brief for Supreme Court justices. It remains to be seen whether justices will agree with them, that there is kind of this false distinction between recommended content and just standard content that's original content that's posted on these platforms. 

Dave Bittner: Can I play the other side here real quick? 

Ben Yelin: Absolutely. Give us the devil's advocate's perspective. 

Dave Bittner: Well, I'm just trying to imagine the difference - and these are always imperfect, but I'm imagining if I'm someone who posts or supplies a bulletin board for people to post their information on versus I'm the editor of a newspaper who decides what goes on the editorial page and what you will see - with one of them, I have no real control or interest or influence on what gets pinned to that bulletin board. But in the other, I'm making the decision as to what rises to the top, what gets put in front of you first and foremost. Seems to me like that's what these recommendation algorithms are doing. They're making these decisions. Now, in this case, it's for engagement, right? They want to sell you more - they want to keep you there... 

Ben Yelin: Right. 

Dave Bittner: ...And sell you more ads. But do you think that argument holds any water? 

Ben Yelin: Problem is, in the example that you're citing, you have one instance where the platform is exercising no editorial control - that's the bulletin board... 

Dave Bittner: Right. 

Ben Yelin: ...And a situation where the platform is exercising full editorial control. That would be the newspaper. 

Dave Bittner: Yeah. 

Ben Yelin: We're kind of in an in-between area here, because there's no human being who's saying let's, you know, sit around a table and think about what people want to watch in their YouTube videos and have a conscious conversation, weighing of pluses and minuses and see, you know, what type of Muppets video we're going to recommend next for Dave Bittner. That's a conversation that's just not happening. 

Dave Bittner: Right. 

Ben Yelin: It's all being done via algorithm. 

Dave Bittner: But humans wrote the algorithm. 

Ben Yelin: Humans wrote the algorithm, sure. But humans are not playing sort of the same hand that they are in your hypothetical... 

Dave Bittner: OK. 

Ben Yelin: ...'Cause they're not exerting that full - the full extent of editorial control. 

Dave Bittner: Yeah. 

Ben Yelin: It's certainly a valid - I mean, it's certainly a valid argument. I don't think there's a right answer one way or another, but I don't think that metaphor is perfect since - simply because of the involvement of this automated system where recommended content is generated without human eyeballs, except as, you know, it relates to the algorithm being created in the first place. 

Dave Bittner: I guess what I'm trying to imagine is could we see a future where, in order for a platform like YouTube to enjoy the immunity they do through Section 230, they would have to eliminate algorithmic recommendations? In other words, host the videos, make them searchable, but don't put your thumb on the scale. 

Ben Yelin: So that's, I think, what these platforms are trying to avoid, is the situation where because of the threat of liability, they're not able to have recommended content. 

Dave Bittner: Right. 

Ben Yelin: I think there are people with very good-faith policy reasons for coming down one way or another on this issue, but I think the industry is frankly freaked out because recommended content is the engine of growth. 

Dave Bittner: Right. 

Ben Yelin: People go to YouTube and stay on YouTube because of recommended videos. Believe me... 

Dave Bittner: Yeah (laughter). 

Ben Yelin: ...There are a lot of things that I would have never searched myself... 

Dave Bittner: Oh, heck yeah. 

Ben Yelin: ...That I am learning a lot about... 

Dave Bittner: Yeah. 

Ben Yelin: ...Because of these algorithms. 

Dave Bittner: Right. 

Ben Yelin: So it'd be a major hit to their business model. 

Dave Bittner: Right. 

Ben Yelin: So I'm not accusing the Center for Democracy and Technology here of doing this - writing this amicus brief for parochial reasons to protect the industry. I'm just saying that, obviously, the industry's interests here are extremely large... 

Dave Bittner: Yeah, yeah. 

Ben Yelin: ...To say the least. 

Dave Bittner: I guess I'm saying, in my mind, perhaps it doesn't have to be all or nothing, that, you know, they could still enjoy some 230 protection, but give a little, also, and, you know, maybe not have the wheelbarrows full of money rolling in, right (laughter)? 

Ben Yelin: Yeah. 

Dave Bittner: Not that there's anything wrong with that. I mean, you know, you provide a good service for people, and they enjoy it. There's nothing wrong with getting paid for that. But I think - I don't know, I guess a lot of people - there's a strong case to be made that algorithmic recommendations are a bit out of whack. 

Ben Yelin: I think they are out of whack. I mean, if I were to take off my legal hat and just look at this as what are the societal effects of algorithms, I think they're pretty destructive. 

Dave Bittner: Yeah. 

Ben Yelin: And this is something we've talked about on this podcast and on our "Caveat" podcast before, but I've seen it happen just with acquaintances in my own life. They're interested in video games, so they start searching YouTube videos of people playing video games, and then a lot of other people who were interested in these same video games also have dabbled in white supremacy and the Proud Boys. 

Dave Bittner: Right. 

Ben Yelin: And so the algorithm directs them to those types of videos... 

Dave Bittner: Yeah. 

Ben Yelin: ...Which can be dangerous. And it's bad for all of us. But, you know, I think the argument here is twofold. One is that the user experience on these sites would never be the same if companies were forced to limit recommendations because of Section 230 liabilities. It would significantly hurt the user experience. And if there was some sort of middle ground, it would be really hard to distinguish between original content and recommendations. There's just - there's kind of practical difficulties in getting that to work. And which types of recommendations would be shielded with Section 230, and which would not, based on how the algorithm was structured, etc.? That's just a problem that would be pretty unworkable and, I think, not something that the Supreme Court would want to supervise in any meaningful way. 

Dave Bittner: Yeah. 

Ben Yelin: So that's - those are kind of the parameters here. 

Dave Bittner: OK. All right. Well, Ben Yelin, thanks for joining us. 

Ben Yelin: Thank you. 

Dave Bittner: And that's the CyberWire. For links to all of today's stories, check out our Daily Briefing at The CyberWire podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Tre Hester, with original music by Elliott Peltzman. The show was written by John Petrik. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here tomorrow.