Hacking Humans | Ep 286 | 4.18.24

Is change presenting a window of opportunity for attackers?


Trevin Edgeworth: You wouldn't believe the number of people that have local administrator rights, full administrator privileges of their own machines, and they just don't need it.

Dave Bittner: Hello, everyone, and welcome to N2K CyberWire's "Hacking Humans" podcast, where each week we look behind the social engineering scams, phishing schemes, and criminal exploits that are making headlines and taking a heavy toll on organizations around the world. I'm Dave Bittner and joining me is Joe Carrigan from the Johns Hopkins University Information Security Institute. Hey, Joe.

Joe Carrigan: Hi, Dave, how are you?

Dave Bittner: We've got some good stories to share this week. Later in the show, my interview with Trevin Edgeworth, a red team practice director at Bishop Fox. We're talking about account compromises at X. We'll be right back after this message from our show sponsor. Okay, Joe. So, before we get going here, we have some follow-up. What do we got?

Joe Carrigan: Yeah, first Erin writes in to say hi from Northern Ireland. Well, hello, Erin.

Dave Bittner: Oh, that's nice.

Joe Carrigan: I love the podcast and have been listening for over a year now. I wanted to share something interesting that I found recently. A high-profile politician in Northern Ireland was arrested and charged with a sexual offense, and is awaiting trial right now.

Dave Bittner: A politician?

Joe Carrigan: A politician?

Dave Bittner: I'm shocked.

Joe Carrigan: A sexual offense and a politician? Say it ain't so. However, under UK law, it is contempt of court to share the details of the case before it goes to trial. But naturally, everyone is curious and furiously googling to find this information. Some of the top search results are incredibly suspicious URLs made up of strings of letters and numbers with bold titles promising to give every detail of the case. I find it interesting that scammers are now keeping up with the news, working out what people are searching for, and flooding Google with malicious results. Or perhaps it's automated somehow and they can just tag onto whatever is trending on Google. That's a good observation because Google has all those analytics that are out there and they tell you what's trending.

Dave Bittner: Right.

Joe Carrigan: So it would be minimal effort to just start using search engine optimization to start injecting malicious sites into the search results or whatever is trending on Google.

Dave Bittner: Yeah.

Joe Carrigan: I have not clicked on any of the links for obvious reasons, but I imagine that they are some sort of phishing or malware. Keep up the good work, Erin. Yeah, this is -- it doesn't surprise me, but it is novel and interesting, I think. I don't -- you know, I'm not shocked by this, but I hadn't thought of this before.

Dave Bittner: Yeah. Yeah, I mean, I would -- I think Erin is dead right here, dead on right that this is what's going on and I'm sure it's automated. Somebody's just taking headlines and maybe even being regional about it, and just pumping Google full of fake websites. And we see this all over, people complaining about how the quality of Google search results has just gone down the drain.

Joe Carrigan: It has really. You know, today, Dave, I actually had a question that Google would not answer for me.

Dave Bittner: Really?

Joe Carrigan: Yeah, so they couldn't tell me what it was. I couldn't find a way to ask it. So, I actually wound up asking ChatGPT instead. And I got the answer.

Dave Bittner: And you got -- well, you got an answer.

Joe Carrigan: I got the right answer, because then I went back to Google and googled the answer that I got.

Dave Bittner: I see, so ChatGPT set you on the right path, it gave you the right terms with which to search Google.

Joe Carrigan: Yes.

Dave Bittner: Okay.

Joe Carrigan: Yeah. I mean, what a circular horrible way this is to do things.

Dave Bittner: Right. But, yeah, it's just unreal to me how bad Google has gotten. It's an ad delivery network that occasionally, you know, helps you with search results these days.

Joe Carrigan: Yeah, it's become less and less helpful over time.

Dave Bittner: It seems like they've lost their way when it comes to that. But I'm not sure how you would fix that given the way that it can be gamed with, you know, these automated systems taking AI engines and, just like we described, taking popular things, news items, and having AI regurgitate them and then pasting them on websites, you know, probably millions of slightly compromised WordPress sites on phantom pages that just work for Google SEO. And here we are.

Joe Carrigan: Yeah, that's what happens.

Dave Bittner: Yeah. Well, thank you, Erin, for writing in, do appreciate it. That's an interesting observation and thank you for sharing that with us. We've got another letter here from a reader -- or a listener, rather, I should say -- named Jonathan who wrote in. I'll summarize some of the things that Jonathan highlighted. He was writing in about Apple's non-rate-limiting multi-factor authentication.

Joe Carrigan: Right.

Dave Bittner: And I believe you would refer to it as a vulnerability.

Joe Carrigan: Yes.

Dave Bittner: And Jonathan took issue with that or not?

Joe Carrigan: He said he did, but eventually he did not.

Dave Bittner: Okay.

Joe Carrigan: Initially he did, but he --

Dave Bittner: So he came around on his own.

Joe Carrigan: Right. He looked it up. Yeah, vulnerability doesn't have to be something in software.

Dave Bittner: Yeah.

Joe Carrigan: It doesn't have to be -- it can be in a system. It's something in a system that can be, or potentially could be, exploited. Like, for example, if you have a process for writing checks where one person both writes the check and signs the check, well, that's a vulnerability in that process.

Dave Bittner: Like if I was a bank and I kept stacks of $100 bills right by the front door.

Joe Carrigan: Right.

Dave Bittner: That would be a vulnerability.

Joe Carrigan: That would be a vulnerability. Right.

Dave Bittner: Okay.

Joe Carrigan: A profoundly bad idea to begin with, nobody would say otherwise. But yes, technically I think that would be a vulnerability.

Dave Bittner: Here at Joe's Savings & Loan, we keep stacks of $100 bills right by the front door for your convenience.

Joe Carrigan: Take one as you walk out.

Dave Bittner: What could possibly go wrong? That's funny. Well, thank you, Jonathan, for writing in. Jonathan shared a lot of different things with us, and unfortunately, we don't have time to share all of them, but we do appreciate you writing in. We had another comment from someone named Anders from Sweden who wrote in and said, "Hi there, just listened to the chat with Robert Blumofe. This article is worthy of a read on the same and related topics if you haven't already devoured it, of course." And the article that Anders is talking about is actually a research paper titled "Theory Is All You Need: AI, Human Cognition, and Decision Making." I have the abstract here. And basically, what this comes down to -- I'm going to way over-simplify this, which is what I do best. So, this is saying that human cognition is a process of experimentation, it's a process of strategic decision-making, and that AI uses a data-based approach, using prediction, while humans use theory-based causal logic. We're already down in the weeds. But it's an interesting read. I actually did go and check it out. And if you're into this kind of thing, if you're someone who's pondering how AI works and wants a thoughtful discussion of what some of the differences may be between how artificial intelligence thinks -- I'm putting "thinks" in air quotes -- and how humans do, it's an interesting write-up. It's definitely worth your time if that's something that you're into.

Joe Carrigan: Is this specifically talking about LLMs or is it --?

Dave Bittner: Yes.

Joe Carrigan: Okay.

Dave Bittner: Yes.

Joe Carrigan: So yeah, those are more -- there have been attempts in the past; artificial intelligence is a very broad field.

Dave Bittner: Right.

Joe Carrigan: Right now, we're focusing on all these generative things, but the field has been around for decades.

Dave Bittner: Yeah.

Joe Carrigan: So, I mean, my research paper for my Master's degree back in the early 2000s was an artificial intelligence paper.

Dave Bittner: Oh, interesting.

Joe Carrigan: But it was a -- I mean, now it's not even considered -- the problem I was researching is not even considered an artificial intelligence problem anymore. It's an algorithmic problem now.

Dave Bittner: I'm thinking right. I'm just imagining your Master's thesis paper saying, "Why Eliza is wrong about everything she's told me and I hate her very much."

Joe Carrigan: Yes.

Dave Bittner: What was your Master's thesis on?

Joe Carrigan: It was on lightweight artificial intelligence route-planning algorithms for low-cost microcontrollers used in robotics.

Dave Bittner: Wow.

Joe Carrigan: So --

Dave Bittner: Can't believe I haven't stumbled across that one, Joe.

Joe Carrigan: No, it never got published. I mean, I can send you a copy of it.

Dave Bittner: No, but I think like most of those things, it's very specific, right?

Joe Carrigan: Right, the more -- the longer your title, the better your research area is.

Dave Bittner: Interesting.

Joe Carrigan: But basically I wound up implementing a way to plan a route through a known environment very quickly on a 20-megahertz microcontroller with limited memory.

Dave Bittner: Wow. Well, that's cool.

Joe Carrigan: Yeah, it was. I had a lot of fun doing it.

Dave Bittner: I'll bet, I'll bet. These are the days before things like Waze, right?

Joe Carrigan: This is the day before. These are in the days before the system on a chip, right? You get a Raspberry Pi now and there's no such thing as low cost anymore. I mean, a low-cost microcontroller at the time was like $8.

Dave Bittner: Okay.

Joe Carrigan: Right. Now you get a Raspberry Pi Zero for $5 that has so much more capability.

Dave Bittner: Right. Right. This is why your electric toothbrush can play Doom.

Joe Carrigan: Right. Exactly. That kind of thing. It's, you know, the processing power that's available cheaply now is so much more than -- I mean, I was really -- I don't know, I probably should have been more forward-thinking about my problem, but it was still an interesting problem, and I still came up with a pretty good solution, I think.

Dave Bittner: Yeah, well, good for you. You're more educated than me, that's for sure. I never went past a Bachelor's degree.

Joe Carrigan: I got two of those, Dave.

Dave Bittner: Well, there you go. See, now you're just showing off. All right. Well, thank you, Anders, for sending in this interesting article, we do appreciate it. And we will have a link to that in the show notes. All right. Let's move on to our stories here. And, Joe, why don't you kick things off for us?

Joe Carrigan: Dave, I saw a story from Sebastian Herrera at the Wall Street Journal. And I want to bring this up because we don't talk enough -- maybe we do. But Amazon has this capability out there for small businesses and individuals to sell products on their platform. And people can make money on that. I don't know that -- I guess people do make money on it because people do it. You wouldn't be doing it if you were losing money. But this story starts off talking about a woman named Nicole Barton who had to rebuild her business after being swamped with fraudulent returns. See, she switched from clothing to consumable items like pet food.

Dave Bittner: Okay.

Joe Carrigan: Right. Things that you can't necessarily return.

Dave Bittner: Okay.

Joe Carrigan: What she would get -- she would send people things like Coach wallets, and she would get no-name knockoffs in the return envelope. So she'd send out a real product and get a counterfeit product back. Like not even a good counterfeit product.

Dave Bittner: So she was selling authentic items.

Joe Carrigan: Right.

Dave Bittner: And what was coming back to her were knockoffs.

Joe Carrigan: Right. Knockoffs or just -- well, there was one case, she sold some Nike football cleats and got returned a pair of flip flops in the box.

Dave Bittner: Wow.

Joe Carrigan: Another person, Barbara, sells household items but has received cable TV boxes and used soap bars. Now, I don't know about you, Dave, but if I sent something out and got a used soap bar back, I'd be, first off, mad that somebody stole something from me. But I'd be grossed out as well.

Dave Bittner: That's true.

Joe Carrigan: Right?

Dave Bittner: Yeah.

Joe Carrigan: Another person named Jess sells outdoor coffee products, and they've gotten back Christmas ornaments and toy planes. One person said they sent out human nail clippers and received used dog nail trimmers in return.

Dave Bittner: So what's going -- all right. So, what's -- I mean, I have an idea of what's going on here but unpack it for us.

Joe Carrigan: So, what's happening here is Amazon is letting these sellers sell things, and then Amazon lets them take returns, lets the sellers -- the buyers return items, and they don't do a lot of work on the return side.

Dave Bittner: I see.

Joe Carrigan: And there are either networks of thieves out there or individuals who do this; there are even online forums that coach people on how to make fraudulent returns on Amazon. And there was an executive meeting back in 2004, somewhere around that timeframe.

Dave Bittner: Almost 20 years ago.

Joe Carrigan: Yes, 20 years ago. Where someone asked Bezos how the company would handle bad actors in the return process, right? And Bezos said that above all, the company would have to put customers first when it came to returns. And that it should be smart enough to use its systems and data to catch bad actors. So, here's my thinking on this, Dave. If you are Amazon or Jeff Bezos, I don't know how involved he is anymore, right? You're Amazon and you are sending out products to somebody and getting bogus things returned to you. That's going to be much more important to you than if you're also offering your platform as a service for selling things for people. And then they get a -- this is actually a variation of the old retail scam called "rocks in a box," right? Where people would put rocks in a box and return it -- they'd go out and buy a VCR -- this is how long ago I worked in retail, Dave. I worked at Best Buy. And one of the things that they would say is, watch out for the rocks in the box at the return center. You know, if you get a sealed-up box, even if it looks real, you have to open that box up and take a look inside to see if the item is still in there. Because these people do this. But it seems to me that Amazon is not really focusing on the requirements of their platform customers, right, these sellers. They're only focusing on the people that buy things. And I refer to the platform customers as customers. They are customers. Amazon is just brokering at this point when they're setting up this deal between these two people. Here's a great quote in the article from an Amazon spokeswoman.

Dave Bittner: Okay.

Joe Carrigan: Ready?

Dave Bittner: Yep.

Joe Carrigan: The company has, "no tolerance for fraudulent returns on Amazon."

Dave Bittner: Okay.

Joe Carrigan: She said the company invests significantly in detecting and preventing fraud, including employing teams devoted to the issue. And that it provides resources to sellers who report abuse so they can receive reimbursements. This sounds almost exactly like last week when I was talking about Facebook and the comment from Facebook. Again, I think someone from this big tech company just copied a section out of a list of excuses that legal has given them and pasted it in an email to the Wall Street Journal.

Dave Bittner: We take your privacy seriously.

Joe Carrigan: Right, yeah.

Dave Bittner: Okay.

Joe Carrigan: But the problem is that that process by which sellers can report these losses, report the abuse, is long and takes more time than sellers may have; it's not worth it to them. Additionally, even if Amazon does accept it, the seller may not get back the full amount of what the item cost them. And many times what sellers wind up doing is just taking the loss.

Dave Bittner: Yeah.

Joe Carrigan: There is a National Retail Federation statistic in here, a number. They say that return fraud has become a major issue for our industry. About 13.7% of returns in 2023 were fraudulent.

Dave Bittner: Wow.

Joe Carrigan: 13.7%. What is that, a seventh?

Dave Bittner: Yeah.

Joe Carrigan: Of returns to retail outlets are fraudulent. So I don't know. I'm not so terribly concerned about Amazon, I think they can take care of themselves on this. But it doesn't look to me like they're taking care of their sellers who are providing the goods that are then sold. And then Amazon's making a ton of money on the sale, right?

Dave Bittner: Right. Amazon has little to lose. Amazon has virtually nothing to lose on this.

Joe Carrigan: Virtually nothing to lose. Right. Yeah, because if -- you know, they -- I guess if you're doing this with a -- maybe if you're taking advantage of Amazon's Amazon Prime shipping. But I mean, that's going to be like noise, you know, low-level noise losses to them.

Dave Bittner: I wonder -- so, okay, a couple things. First of all, I'm trying to puzzle through the entire process of this scam. So, first of all, I'm wondering is what -- let's make this interesting and just say I'm buying a laptop from you, Joe. Okay. So, I buy a laptop, brand new, you know, lovely, let's say it's a high-end Chromebook. All right. And I buy that from you, where I guess we think chances are I'm buying this with a stolen credit card.

Joe Carrigan: I don't think that's part of the issue.

Dave Bittner: Well, let's puzzle through this. Stick with me here.

Joe Carrigan: Okay.

Dave Bittner: So let's say I buy that with a stolen credit card, right? And then I go to return it and I send back the box of rocks to Amazon. Amazon credits me with the return. Now I've got the stolen laptop or the purchased laptop and I've got the money from the return, so I got a free laptop.

Joe Carrigan: Right.

Dave Bittner: You get a box of rocks. Amazon isn't out anything.

Joe Carrigan: Right.

Dave Bittner: I guess what would be -- if they were using legit credentials to buy this stuff, the profit comes from the refund on the return, they get the stuff and --

Joe Carrigan: Here's a more likely scenario. Somebody opens an Amazon account, they get some gift cards, probably through scamming.

Dave Bittner: Right.

Joe Carrigan: And they get a gift card balance of $1000 on their account.

Dave Bittner: Okay.

Joe Carrigan: They buy the laptop from me.

Dave Bittner: Yeah.

Joe Carrigan: Right? I send them the laptop, I get $700 credited to my account. They then "return" the laptop, with air quotes. And instead of putting in the laptop, maybe they put in, like you said, rocks, or perhaps maybe they go out and buy one of those little VTech kid's laptops for 50 bucks, put that in there, and send it back to me. And then I have something back. So now I have to go through the return process for Amazon. Meanwhile, they're taking the $700 laptop and selling it for $1000 or $500 in cash, right? So they're pocketing $500. Their account balance still says $1000 on Amazon.

Dave Bittner: They've laundered the gift card money.

Joe Carrigan: They've laundered the gift card money, turned it into cash.

Dave Bittner: And I'm also thinking of the poor person at Amazon who's handling returns, who if we know anything about Amazon, it's that they work their people at a frantic pace.

Joe Carrigan: Right, they do.

Dave Bittner: So, that poor person is on some kind of a quota of just, you know, churn, churn, churn. I imagine them opening the box, just glancing inside and seeing it looks like a laptop to me, you know, credit, okay, and off it goes.

Joe Carrigan: I've done Amazon returns over at Whole Foods here in Columbia. I've just walked in with a box and said, "Here's my return receipt, you know, my little barcode." They scan it, they take the box, they throw it in a big bin. They say, "Have a nice day."

Dave Bittner: Yeah.

Joe Carrigan: And that's it. They don't even look in the box.

Dave Bittner: Right.

Joe Carrigan: They -- Yeah, I think that's -- I don't even think anybody's inspecting it. I think the only problem comes when you have a seller who gets the box back and they open it up. And this is a small business person. Maybe they just do this in their spare time to make a little extra cash on the side.

Dave Bittner: Well, you know what? I think it's worse than that because I think we're at the point now where certainly if you want to be an online seller, you pretty much have to be on Amazon.

Joe Carrigan: Yeah.

Dave Bittner: It's very -- I mean, it's so hard. I guess Walmart would be an alternative. I'm thinking of big online platforms, right? I think Walmart would be a distant second, but they are a big online platform.

Joe Carrigan: Right.

Dave Bittner: But I don't know that you have a choice to not be on Amazon.

Joe Carrigan: Well, I mean, you got things like Shopify out there that let you build your own storefront. But then you have to drive traffic to your website.

Dave Bittner: Yeah. I buy a ton of things on Amazon.

Joe Carrigan: Right.

Dave Bittner: And I mean, those poor Amazon delivery people are at my house pretty much every day.

Joe Carrigan: Yeah, I know the guy's name that comes to my house. It's Ernesto.

Dave Bittner: And the reason is it's so easy and I can get anything. You can get anything on Amazon. And as much as I disdain some of their business practices, it's so easy. So, it's hard to -- it's hard to reconcile.

Joe Carrigan: I don't -- you know, I like the convenience and I like the attitude here that you have a customer-focused business, I get that. But when you start offering your platform out to other people, you have more than just you and the customer now. You have other stakeholders here, and it looks to me like Amazon is ignoring those stakeholders.

Dave Bittner: The other thing this makes me wonder about is that quite often I have ordered something from Amazon and it gets delivered. And along with the item, there will be a postcard that says -- and we've seen different postcards; there's the postcard that asks you to review the item, those sorts of things. But I've also seen a postcard that says, "If you have any issue with this, please do not return it to Amazon. Please let us know. We will do whatever it takes to make this right." And I've been through that process, and many times they'll send you the replacement, but they do not want you to return it to Amazon because they don't want to be dinged by Amazon -- there's some kind of analytic that tracks that. They don't want to be a provider that's causing a lot of returns. So, there's just so much gaming going on, it's just -- it's a dirty system.

Joe Carrigan: It is. It is.

Dave Bittner: And it's a shame.

Joe Carrigan: A lot of retailers are asking -- or, you know, resellers, Amazon resellers, they have a name for this program, I don't know what it is -- they're asking Amazon to put some friction in the system for the sake of their businesses. Like there should be a delay when the customer sends something back, where the customer doesn't get the refund until the seller approves the refund. Now, there are opportunities for that to be exploited by bad sellers too.

Dave Bittner: Right.

Joe Carrigan: But if 13% of your returns are fraudulent, I think that merits this kind of attention, this kind of action.

Dave Bittner: Well, and I think what we're seeing is that Amazon puts the needs of the customer above the needs of the partner reseller.

Joe Carrigan: Right.

Dave Bittner: And you can understand why when you describe that, that sounds like a good impulse, but it opens it up to exactly this kind of fraud.

Joe Carrigan: Exactly.

Dave Bittner: Interesting. All right. Well, we will have a link to that story in the show notes. I guess, well, before we move on here, is there anything in terms of people looking out for this? I mean, because this is real -- it's really the resellers who are -- who have the problem here. It's not the folks -- it's not consumers like you and me who are just buying stuff and then legitimately returning stuff.

Joe Carrigan: Yeah, the article talks about an FTC lawsuit, an antitrust lawsuit that's already dinging Amazon pretty hard, and a lot of these things.

Dave Bittner: Okay.

Joe Carrigan: So, like, for example, in order for you to have your items show up in searches, you have to buy Amazon ads. So Amazon is favoring resellers who buy ads over resellers who don't buy ads.

Dave Bittner: Right.

Joe Carrigan: Right. And they're essentially making you do this.

Dave Bittner: Right. Yeah. Yeah, it'd be interesting to see how that plays out over the next several years.

Joe Carrigan: Yes. Yeah, because it's going to be a while.

Dave Bittner: Right, right. All right. Well, like I said, we will have a link to that story in the show notes. My story this week is a comparatively quick one, I guess. This is an article from Wired. This was written by Reece Rogers, and it's titled, "How to Protect Yourself and Your Loved Ones from AI Scam Calls." So this is right up our alley.

Joe Carrigan: Yep.

Dave Bittner: And this article talks about how generative AI is advancing and, of course, that allows scammers to create convincing clones of people over the phone using, you know, little audio clips, and they can even do it in multiple languages. And it's really hard to detect these things because the quality is just getting better and better. And it's happening faster and faster. So, they have a number of recommendations here for folks to best protect themselves from these sorts of AI scam calls. Number one on the list, which you and I have talked about many times: hang up and call back.

Joe Carrigan: Right.

Dave Bittner: Right?

Joe Carrigan: Yep.

Dave Bittner: And use a known number. Don't use the number that they give you, or the number that they emailed, or the number that shows up on your caller ID; use a number that you look up on the internet. Although --

Joe Carrigan: Don't use Google to do that.

Dave Bittner: Make sure it's not an ad.

Joe Carrigan: Right. Right.

Dave Bittner: Make sure the number you look up--

Joe Carrigan: Go to the company's website.

Dave Bittner: There you go. Go to the company's website and get the phone number from there. It's just -- it's disheartening, isn't it?

Joe Carrigan: It is. Dave, there's just so much of it. I mean, I want to go live in the woods, Dave.

Dave Bittner: Yeah?

Joe Carrigan: Yeah.

Dave Bittner: Yeah, a friend of mine and I in high school, we used to say that if things got too bad, we would go herd sheep in New Zealand. That was our exit plan. And it's getting more and more, you know, attractive.

Joe Carrigan: New Zealand looks great. It's beautiful.

Dave Bittner: They're very particular about who they let in though, I understand, like you can't just go there. You got to --

Joe Carrigan: You can't just go there and herd sheep.

Dave Bittner: No. You can visit. But if you want to stay, you got to earn it.

Joe Carrigan: Okay.

Dave Bittner: One of the other things they talk about here is establishing a secret safe word with loved ones to provide additional security. That makes sense.

Joe Carrigan: It does.

Dave Bittner: I'd say, make it a silly word, like abalone, or, you know.

Joe Carrigan: Right.

Dave Bittner: Because a silly word is easy to remember.

Joe Carrigan: Yes.

Dave Bittner: You know, have you ever been -- have you ever been on the phone with some poor customer support person who's trying to ask you what your secret word is and you go through a list of them? You're like, well, it might be baseball. Let's see, it could be Star Wars, it might be -- and then like, no, no, no, you're getting warmer.

Joe Carrigan: Yeah, I keep those in my password manager in the notes. So I know --

Dave Bittner: Oh, for each site.

Joe Carrigan: Right.

Dave Bittner: Yeah, that's smart, that's smart.

Joe Carrigan: Yeah, for like my banks and any financial institutions.

Dave Bittner: Right, right. They say that you can ask personal questions that only the real person would know. They talk about details about recent meals, you know, things like that, things that are timely and personal that the scammers wouldn't know. I mean, I guess a clever scammer could get around that by saying, "I don't have time for those kind of details."

Joe Carrigan: I don't know that you're you, so I don't have time for this either, goodbye.

Dave Bittner: Right, right. And they talk about awareness, just being aware that anybody's voice can be cloned now.

Joe Carrigan: That is such a huge thing. And you and I, Dave, we're actually at pretty high risk for this.

Dave Bittner: Yeah.

Joe Carrigan: Because there's just hours and hours and hours of us talking.

Dave Bittner: Yeah, thousands of hours.

Joe Carrigan: Right.

Dave Bittner: But I think the most recent thing from OpenAI, they have an audio generation tool that they're promoting -- I don't think they've turned it loose on the world yet, but they've been running demos of it -- and it can do a convincing job with only 15 seconds of sample audio, which for a lot of people would be their outgoing voicemail message.

Joe Carrigan: Yes.

Dave Bittner: Right?

Joe Carrigan: Yeah.

Dave Bittner: So imagine that. All I have to do is call you. You're not going to answer the phone because it's from an unknown number. Your voicemail message plays and I use that to clone your voice.

Joe Carrigan: Right.

Dave Bittner: Yeah. So awareness is crucial. So make sure you share that with your friends and family. And then last but certainly not least is this notion of emotional manipulation.

Joe Carrigan: Right.

Dave Bittner: And it's important to maintain your skepticism and avoid impulsive decisions because these folks are going to do everything they can to mess with your mind and put you in a state where you're not thinking rationally.

Joe Carrigan: Right. Now I have gone through and talked to all my family members.

Dave Bittner: Yeah.

Joe Carrigan: And I've said this before, but I've told them, I'm never going to call them and ask them for money.

Dave Bittner: Mm-hmm.

Joe Carrigan: That's not --

Dave Bittner: They say, "But, Joe, you always call and ask us for money."

Joe Carrigan: Right.

Dave Bittner: Say, no, I haven't been in college for a long time, Mom.

Joe Carrigan: Right, come on, dude.

Dave Bittner: Right.

Joe Carrigan: Yeah, so no, I've actually gone through the trouble and said, you know, my voice is out there a lot. There's tons of availability. If somebody wanted to clone my voice, they would have no trouble doing it. They could probably even use this piece of data right here. Although I would like to say that my voice should never be cloned.

Dave Bittner: Oh.

Joe Carrigan: Yes. I'm just going to say that.

Dave Bittner: Oh, I see. You're trying to mess up the training data?

Joe Carrigan: Right. I'm trying to -- I'm trying to maybe -- I don't know. It's not going to work.

Dave Bittner: It's good.

Joe Carrigan: What am I doing here? I'm just wasting -- wasting the listeners' time.

Dave Bittner: Well, this article from Wired is a good one. It is particularly a good one to send around to your friends and family, you know, share it on the -- on your groups online and, you know, work email -- work newsletters, things like that. It's just real good general awareness for protecting yourself against these AI scam calls. So, highly recommended, and we will have a link to that in the show notes. All right. Joe, it is time to move on to our "Catch of the Day." [ Soundbite of Reeling in Fishing Line ] [ Music ]

Joe Carrigan: Dave, our catch of the day comes from Jay, who sent us a LinkedIn post from Chris Stones, which is about an in-person scam. We've never actually had an in-person scam catch of the day. In fact, when Jay sent this to us, he said, "I don't think this will be a good catch of the day, but it might be a good story." But I think it'll be a great catch of the day.

Dave Bittner: Okay.

Joe Carrigan: So, it is a story, so I'm just going to ask you to read Chris's story.

Dave Bittner: All right. Chris writes, "This morning at Piccadilly train station, I was getting some cash out of one of the machines when a well-dressed 50-something professional-looking lady approached me. She said, 'I wonder if you can help me. I'm en route to London and have just realized I've left my purse at home. If I transfer 50 pounds to your account now, would you be able to give me the cash?' Keep in mind this was around 6:20 and pre-first coffee, so I wasn't at my sharpest. But before I'd even digested the question, she thrust her phone at me, preloaded with her HSBC app open, showing that she had around 79,000 pounds in her current account, and clearly was good for it. My spidey sense kicked in, and whilst ordinarily I do enjoy winding scammers up and dragging it out until they realize I'm toying with them, in this instance, with a Costa to queue for and a train to catch, I went with a straight no. Her reaction was superb: fake and loud indignation. 'Do I look like a scammer to you? Do I really look like a scammer? I'm so offended.'"

Joe Carrigan: You didn't look like a scammer until just now. Now, yes, you look like a scammer.

Dave Bittner: "Then proceeded to stomp off to the nearest exit, not platform six to Euston as previously suggested. Before exiting, however, she returned and shouted, 'Leave your pants in the '90s,' which I thought was hilarious as chinos never go out of fashion." All right.

Joe Carrigan: What exactly are chinos?

Dave Bittner: Chinos? They're like -- well, when I think of chinos, I think of like a beige semi-casual sort of pant.

Joe Carrigan: Like a khaki?

Dave Bittner: Yeah, like a khaki. That's what I would describe as a chino. Chris goes on and says, "I reported her to a station employee whilst waiting for my commuter coffee and left it at that, but it has niggled me on the train. What's the point of this post? General awareness, I suppose. This isn't a new or innovative scam for sure, but the presentation was, the woman had a nice travel bag, a laptop bag, was well-dressed, and this initially made me doubt my gut feeling. Trust your gut, folks. If it looks like a duck, swims like a duck, and quacks like a duck, it usually is a duck."

Joe Carrigan: Indeed.

Dave Bittner: So what's the scam here, Joe?

Joe Carrigan: Oh, the scam is that he's going to give her 50 bucks, and she's going to fake transfer 50 bucks to him from her 75,000- or 79,000-pound balance, right? And so she can take the train somewhere, but he's never going to get the 50 bucks.

Dave Bittner: Yeah, so it's probably -- it could be a scam app on her phone that looks like a banking app, but it's not.

Joe Carrigan: Right, it's not a real banking app. It's just another app. We talked about this last week as well, with the guy that stole the laptop from those people. This seems to be a common thing in the UK, where they have these fake apps and they pretend to send money to you and get something in return. And you get nothing. It's the perfect crime, Dave. By the time you realize what's up, they're far away.

Dave Bittner: Right, right. All right. Well, thank you, Jay, for sending that in. And we will have a link to that post from Chris Stones over on LinkedIn. We'll have that link in our show notes. [ Music ] Joe, I recently had the pleasure of speaking with Trevin Edgeworth, the Red Team Practice Director for Bishop Fox, which is a cybersecurity organization. And we were talking about account compromises at X, the company formerly known as Twitter. Here's my conversation with Trevin Edgeworth.

Trevin Edgeworth: I think it just shows that, you know, the bigger you are, the more you can potentially be targeted. If you're in the line of fire, if you have some kind of business service or a functionality that an attacker needs in order to achieve their attack objectives, then there's confidence, there's no fear about going after even large targets like the SEC.

Dave Bittner: I think a lot of folks were surprised to see, you know, a security company like Mandiant fall victim to this sort of thing. I suppose one lesson to be taken from that is that this could happen to anyone. But at the same time, I think folks are left scratching their heads that an organization that prominent wouldn't have had more security in place.

Trevin Edgeworth: Yeah, absolutely. I think it goes to show, in my profession, a lot of times, it's like we're approaching a giant thick wall. And from the outset, it looks like this walled fortress. And then there's a tiny little crack in that wall and so you pull out your scalpel and you're kind of analyzing that crack. And before you know it, that entire wall has crumbled because small things can ultimately turn into very significant exposures. And in this case, it was a case of, you know, multiple situations that were chained together to pull off that heist and one of which being that they were able to switch the phone number associated with the account to a different device. So, that was a key issue and then they were able to get a credential changed on that. So just a couple of small things and suddenly they can take over an account that can be used for their purposes.

Dave Bittner: And for folks who aren't familiar with red teaming and what goes into that, can you give us a little overview of the kinds of things that you and your team there at Bishop Fox do?

Trevin Edgeworth: Yeah, absolutely. So again, I'm the Red Team Practice Director for Bishop Fox and I lead a very extraordinary and talented team. We do all things related to what we call adversary emulation. So external and assumed-breach red team engagements, collaborative purple teaming work. We do incident response tabletops. We do ransomware simulations, physical breaches, social engineering, and we even help companies build their own internal red team programs. As for red teaming in general, red teaming is a concept that has a lot of different meanings. It can mean just applying adversarial thought. And in that sense, you can red team an idea through debate or analysis to pressure test it. In fact, you on your podcast are often debating ideas and things. So essentially, you're red-teaming an idea there. But with respect to the industry, it's one of many different forms of offensive security testing that's used to find security gaps and to test an organization's resilience to actual breach scenarios. In other words, if we do things just as an adversary would, would that organization, first of all, detect those things and properly respond to them? And secondly, are they even possible to do within the environment, or are there protections in place to prevent that? So, in its most simple form, I refer to it as objective-based adversary emulation. Objective-based in the sense that a lot of offensive security forms and methodologies start from the standpoint of a scope. It might be a network range or an IP address or even a web application or something like that. Whereas red teams actually start with attacker trophies and objectives. And those objectives could be to perform an unauthorized wire transfer, or to get an ATM to spit out cash, or to delete a mortgage, or to spy on an executive boardroom, or even to stop a train. In fact, all of those are actual trophies that my different teams through the years have had in past red team engagements. 
And by starting with that objective, you detach from a very specifically defined scope. There might be ways to perform that wire transfer outside of the traditional SWIFT system that we're looking at. In other words, we could social engineer wire operators. We could get access to a wire room and those types of things. So red teaming allows you to kind of open up the avenues of creativity on that. And adversary emulation just means we're doing things just as an adversary would, as closely as possible to their playbooks, using their same tactics, techniques, and procedures. And that way a customer can see how they do against the real thing, except that, unlike that nefarious attacker, we're actually sharing all of our notes to help them to actually get better and to get stronger. So in a way, we're kind of our customer's sparring partner to help them prepare for the real fight.

Dave Bittner: Can you give us some examples of the kind of things that happen as you gear up for an engagement here? I mean, let's say I'm a medium-sized business, maybe a few dozen employees, something like that. Where do you begin? How do you decide what sort of things you're going to do?

Trevin Edgeworth: That's a really great question. A lot of organizations will come to us and say, "We know we want a red team. We just -- we could use some help kind of understanding how to craft that scenario." And that's where I start to lick my chops, because I come from more of a threat intel background in red teaming. And I can ask a lot of questions around what's their business and -- you know, how are you structured and how do you engage your customers and interface with other outsiders? And I can kind of put the hat of a nefarious attacker on, and also try to understand the available strategic threat intelligence around those particular industry verticals. Say it's somebody coming from manufacturing: I can go and check and see what attackers are actually targeting manufacturing, and from what countries, and what cybercrime groups and things, and what their TTPs are. And then I can kind of translate that into: here's what we recommend, here's what you are most likely susceptible to or most likely going to be in the crosshairs against. And so, we will kind of talk through that scenario, and the customer will give us a thumbs up, or maybe we'll think this through a little further. So, it really is customized for every individual customer. But one of the key ways that we arrive at a really strong operation idea is through a process called architecture and attack graphing. So we'll kind of understand with the customer all of the core technologies that they have in use: are they on-prem or are they in the cloud? What do they use for their email gateway? What are the technologies that they use on their endpoints, and for authentication, for MFA? And then we translate that into attack graphing, which is: these are the top five, six, seven, 10, 20 scenarios where an outside attacker or a malicious insider is likely to target your unique business. And it's highly tailored to every customer. 
And through that, we can really kind of zero in on scenarios that are most effective for them to test out.

Dave Bittner: What are some of the things that you would consider to be low-hanging fruit? You know, the things you run into time and time again, you know, if only most companies did this, then we wouldn't have some of the issues that we have.

Trevin Edgeworth: Oh boy, this is a wonderful question, and I'm glad you asked it. Surprisingly, even though specific findings often change from one operation to another, from one customer to another, after having done this for many, many years, you wouldn't believe how often they fall under the same five or six different concepts. And one of them is the lack of micro-segmentation within organizations. If you get access, an entry point into any part of the network, it's just too easy to move from one point to the other. Number two would be the lack of secrets management. So, often customers and developers and others will leave credentials in insecure locations like Confluence or GitHub or different places where they can easily be compromised, or even in Excel spreadsheets on a machine. Another one of those is elevated user access rights. You would think if you don't need access to do a certain thing, the company would not want to actually give you that. But you wouldn't believe the number of people that have local administrator rights, full administrator privileges on their own machines, and they just don't need it. There are things that they can do to kind of bump that down. And then I would also say lack of detections: security incidents, things that we really should be caught on, often go unnoticed because we're able to kind of stay under the radar through things like living off the land, looking like a normal user, and doing those types of things. So those are, I would say, the four key areas that organizations seem to struggle with, and they enable legitimate attacks as well as red team attacks to happen and succeed.

Dave Bittner: And what about the human side of things, you know, when we talk about things like security awareness training and, you know, that kind of stuff, what part does that play from your point of view?

Trevin Edgeworth: Oh, wow. You know, it's actually one of my absolute favorites. If I have a favorite area of red teaming, it is that human element. Throughout my career, I've had an opportunity to analyze a lot of breaches on the CTI side, the cyber threat intelligence side, and to actually perform breaches legally and ethically on the red team side. And so, I've gotten to see a lot of real-world attack chains and breach scenarios play out. And I've always been particularly fascinated by attacks that, number one, involve a social element, and number two, involve some form of timing, capitalizing on a certain window of opportunity based on what's happening either at the target organization or in the world around them. Because one of the most essential skill sets for red teaming is having situational awareness, knowing that target environment, and using what each environment is giving to you in terms of the attack surface and potential pathways. So, you want to understand: what is this business? How do they do business? How do they interact with customers and third parties? And that is important. But sometimes what that environment is giving you is not so much technical in nature -- what operating systems they're using or what, you know, tools and things they're using -- but rather more event-based and situational, based on changes or key things that are happening in the world or at the target environment, from a business, IT, or other standpoint. For example, maybe the timing of a major press release or an earnings report, or maybe it's a change to a new work-from-home or return-to-work strategy in that company that might be advertised publicly. It could even be something that's focused on kind of a peak busy period or a particularly quiet time, like a major holiday at the organization, or even world events. 
Events and change can often create windows of opportunity for attackers, sometimes even more so than a target's technology or people or processes do.

Dave Bittner: How often does it come to pass, if at all, where, you know, you engage with an organization -- and I'm imagining, you know, a security person somewhere thinking, oh, you know, our regulatory regime says that we have to have these tests, but this is a checkbox thing and, you know, have at it, good luck to you, we're buttoned up tight? And then they're in for a little rude awakening.

Trevin Edgeworth: Yeah, exactly. Yeah, you're exactly right. I call that bottom-up security as opposed to kind of top-down security. Bottom-up security -- and maybe for a CISO listening to this podcast, this may resonate a little too closely -- is they're going up and down the vendor aisles of the conference and they're saying, "Hmm, yeah, I want this. I want this." They're like a kid in a candy shop, thinking, I need all these tools, right? Or, well, that's something we haven't thought of, let's do that. Or they're starting from a regulatory standpoint: we need to be secure in the ways that PCI says we're secure. And PCI is not evil. It does actually help us to check certain boxes, but it does not necessarily equate to security. Now, with top-down security, that CISO would have a strong understanding of, number one, their business, how it operates, and what their crown jewels are for each of their specific lines of business. For a hospitality company, that might be their reservation management system, which maintains PII and PCI data across all of their customers as well as the reservation data. It might be different types of systems, like your Active Directory and things. And then together you would actually think through, based on that information, as well as what threat actors are targeting you -- again, you can do that through the process of operation planning, like with the red team -- you can then understand these are the most likely threat scenarios in our risk management process that are likely to hit us and to impact us. And therefore, you can figure out what controls are needed in order to stop each and every one of those. That is proper top-down security leadership. And those are usually the organizations that stand up particularly well against the real thing as well as red team operations.

Dave Bittner: What sort of advice do you have for our listeners? You know, folks who are just your regular everyday person going about their day, minding their business, doing the things they do online. The experience that you have as a red teamer, how do you protect yourself? How do you advise members of your family, your loved ones, for the types of things that they can do that aren't overwhelming?

Trevin Edgeworth: Sure. Yeah, you know, when it comes to social engineering attacks specifically, we all have this internal gauge. And I don't know about you, but I used to love gaming. I still do a little bit of gaming; I fit it in where I can. But I used to play a lot of that original Far Cry. And in Far Cry, there's this concept -- I think they call it a stealthometer, or a stealth meter. And it's this meter on the screen that, as you get closer and closer to an enemy, or maybe you're exposed out in the open in the sunlight, or you make a noise, well, that stealthometer changes from green to yellow to red, and then before you know it, your enemy is kind of calling for backup and now it's 5 against 1. And so, I've often, you know, had a sort of similar image to that in my head when I'm doing social engineering: your targets all have a "something's wrong" meter. And the attacker's pretext either assuages that meter or nudges it more towards the red. And so, for an attacker truly looking to evade detection, it's about subtly including key bits of information that might suggest that you're safe, that you're an insider, that you can be trusted, and that you're not a threat. And those little bits of information really can be any number of things, like knowing specific terms that your organization or, you know, the communities that you're involved in use. It might be knowledge of the internal environment or your home situation or, you know, what your family's up to and those types of things. And if you can subtly weave in two or three of those kinds of things, people initially will start with a stealthometer high in the red or yellow, and they eventually lower their guard. So my advice for people is really to just constantly be vigilant and thoughtful around why is this person contacting me. 
And just to realize that things can seem very, very real, but you still need to continue to ask the question as to why did they contact me, and not me contacting them. For example, a bank reaching out regarding a weird transaction or something. And so I think it's just heeding that stealthometer in general, because generally it's going to be spot on, and it's going to help us understand something's not right: stop, pause, ask questions, and reach out to somebody who may be more well versed in some of these types of scenarios to help you out. [ Music ]

Dave Bittner: Joe, what do you think?

Joe Carrigan: Dave, big organizations are always going to be targets for malicious actors. Small organizations are also always going to be targets for malicious actors. The only difference with the small organization is you actually stand less of a chance to get targeted. But when they do target you, you may not be prepared for them. So, think about that. If you're a small organization, think about it. It's interesting that the SEC and Mandiant both had been -- had their Twitter account or X accounts -- we just say X now.

Dave Bittner: I say X/Twitter because to me that's clearer.

Joe Carrigan: Because you're an X/Twitter user, right?

Dave Bittner: I am an X/Twitter user, yes. I say X/Twitter because it's X and Twitter and it's ex-Twitter because it's no longer Twitter. And just using the word X, it is horrible, especially in the spoken word. So I think X/Twitter provides more clarity for our listeners, so that's what I've decided to go with.

Joe Carrigan: Well, I'm going to go with that too, Dave.

Dave Bittner: All right.

Joe Carrigan: I'll keep up with the journalism standards.

Dave Bittner: There you go.

Joe Carrigan: The SEC takeover was done with a SIM swap. And X/Twitter blamed the SEC for this. And I kind of have mixed feelings about that. Maybe, you know, the SEC should not have been using a cell phone for multi-factor authentication here. I would think that if I were in charge at the SEC, I would have used some kind of universal second factor (U2F) method of multi-factor authentication. I think that's warranted for the SEC. I think it's warranted for any Fortune 1000 company, any large company, any publicly traded company. It's such a low-friction thing; just put it on all your accounts. I remember this. So, this was a very effective attack, because what they did was they went out and announced that the SEC was going to allow a Bitcoin ETF. And that drove the price of Bitcoin up a lot, very quickly. So, somebody made a ton of money here. And that was the plan all along: they bought some Bitcoin, made the announcement through the SEC's X/Twitter page, and then sold the Bitcoin. Mandiant had their account taken over, and they ran some other kind of crypto scam where they impersonated a crypto wallet. They probably didn't get as much; the SEC attack was almost certainly way more profitable.

Dave Bittner: Yeah, although I'd say you can't discount the amount of reputational harm that Mandiant got for having their --

Joe Carrigan: Yes, because Mandiant is a cybersecurity company.

Dave Bittner: Right, right.

Joe Carrigan: So, yeah, I mean --

Dave Bittner: Somebody had a very bad day at Mandiant when that happened.

Joe Carrigan: Yes, the very bad day scenario.

Dave Bittner: Yeah.

Joe Carrigan: I like the term that Trevin uses, "objective-based adversary emulation." And objective-based is a good way to think about threat modeling. This reminds me of Schneier's attack trees, in which you think about what's the goal, right? And then what would be necessary to accomplish that goal, all the way down to the individual actions. You just build the tree out to everything you'd have to do in order to reach the end goal. So, you know, you start with the objective and you think about how to get there, essentially. This is what bad actors are going to do. And then there's the adversary emulation. These actors are going to do anything they can to get to the objective, so emulate that. Now, when I say that, I think there are certain places where you don't emulate what they're going to do. Right? Like, I don't think in phishing exercises you say, big bonus coming, click here.

Dave Bittner: Right.

Joe Carrigan: All that does is alienate your employees. So, you know, temper this with common sense. And I don't know how to quantify that.

Dave Bittner: Common sense and humanity.

Joe Carrigan: Right. Humanity. That's right.

Dave Bittner: Right, right.
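The attack-tree idea Joe describes, starting from the objective and working down to concrete attacker actions, can be sketched in a few lines of Python. The goal, sub-goals, and cost figures here are hypothetical, purely for illustration:

```python
# Minimal sketch of a Schneier-style attack tree: the attacker's objective
# at the root, concrete actions at the leaves. Goals and cost figures are
# made up, purely for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "leaf"            # "leaf", "and", or "or"
    cost: float = 0.0             # estimated attacker effort for a leaf
    children: list["Node"] = field(default_factory=list)

def cheapest_path(node: Node) -> float:
    """Cheapest attacker cost to achieve this node's goal."""
    if node.kind == "leaf":
        return node.cost
    costs = [cheapest_path(c) for c in node.children]
    # AND: the attacker must do every child; OR: they pick the cheapest one.
    return sum(costs) if node.kind == "and" else min(costs)

# Start from the objective and work backwards to concrete attacker actions.
tree = Node("unauthorized wire transfer", "or", children=[
    Node("compromise wire system", "and", children=[
        Node("phish a wire operator", cost=10),
        Node("bypass MFA via SIM swap", cost=50),
    ]),
    Node("social-engineer wire room access", cost=200),
])

print(cheapest_path(tree))  # 60: phishing (10) plus SIM swap (50)
```

The cheapest path through the tree is where a defender would expect a real attacker to go first, which is the same intuition behind objective-based red teaming.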

Joe Carrigan: It's important to -- I think it's important to understand how adversaries view you. And I think that having somebody who's skilled in adversarial thinking really helps with that.

Dave Bittner: Okay.

Joe Carrigan: Something like a white-hat person who thinks adversarially.

Dave Bittner: Yeah.

Joe Carrigan: I think it's interesting, when Trevin is talking about the four different problems that they just keep seeing over and over and over again: no micro-segmentation on your network, no secrets management -- and sometimes these secrets are just laid out on GitHub. I can't think of a worse place to keep a secret.

Dave Bittner: Right.

Joe Carrigan: The elevated user access. Local admin is not necessary for most users. If you have a development team, they might need to have local admin. If you have an accounting team, they don't need to have local admin, and they probably shouldn't. And then lack of detection technologies. Detection is one of the big parts of knowing that there's a problem. Why this is such an issue is that the average amount of time an attacker stays in a network before being detected is months, months of time. And the amount of damage an attacker can do in that time is astronomical.

Dave Bittner: Yeah.
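The "secrets laid out on GitHub" problem Joe and Trevin describe is easy to demonstrate: a handful of regular expressions catch the most common credential patterns. This is only a toy sketch; real scanners such as gitleaks or trufflehog are far more thorough, and the sample strings below are made up:

```python
# Toy illustration of why secrets left in repos or wikis are low-hanging
# fruit: a few regexes find the most common credential patterns.
import re

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched text) for each hit in the input."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# Both of these fake strings would be trivially harvested by an attacker
# scanning a public repository.
sample = "db_password = hunter2\nkey=AKIAABCDEFGHIJKLMNOP"
for name, matched in find_secrets(sample):
    print(name, "->", matched)
```

Running something like this over a repository before it's pushed is exactly the kind of "bump that down" control Trevin alludes to for the secrets-management gap.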

Joe Carrigan: So, it's very important to be able to know when they're in. Finally, I wanted to talk about the interesting conversation you guys had around the issue of timing. Timing is key in a lot of these attacks. If someone is targeting your organization, you can bet they're watching every press release that comes out. They have a subscription to PR Newswire, they're watching all the news reports about you, they know what's going on, and the first thing they're thinking about with all of these things is, how can I exploit this article or this press release or this fact? So timing can really help make it look like a phone call might make sense. But I don't know, I think people need to think, this seems a little bit suspicious, the timing's a little on the nose. That being said, I don't know that you're going to be able to get most people to think that way.

Dave Bittner: Yeah.

Joe Carrigan: It's kind of difficult. So I think right now all I can say is: understand that timing works against you in these kinds of situations. If there's a big press release that your company's just put out, maybe you tell people, "Let's watch out for scams around this subject."

Dave Bittner: I think it's an interesting conversation you could have, let's say in a tabletop exercise or an executive meeting to say, "Hey, let's brainstorm what are the -- let's look at our year, right, our company's year. What are the things that we do throughout the year, timing-wise, where we would be at our most vulnerable, and why?" And just have people think about that. Is it the release of our numbers? Is it the release of a new product? Is it -- who knows? It could be a million -- every company could have millions of different things. But thinking about that, making a list, you know, I think could help bring the awareness that organizations need to maybe try to combat that. It's an interesting idea.

Joe Carrigan: Yeah, it's an interesting problem. I don't know that we've ever talked about this exact issue before.

Dave Bittner: Yeah.

Joe Carrigan: But yeah, thanks for bringing it up, Trevin.

Dave Bittner: Yeah. And our thanks to Trevin Edgeworth for joining us. Again, he is the Red Team Practice Director for Bishop Fox, and we do appreciate him taking the time. [ Music ] That is our show. We want to thank all of you for listening. Our thanks to the Johns Hopkins University Information Security Institute for their participation. You can learn more at isi.jhu.edu. A quick reminder that N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our executive producer is Jennifer Eiben. This show is mixed by Tré Hester. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Joe Carrigan: And I'm Joe Carrigan.

Dave Bittner: Thanks for listening. [ Music ]