Hacking Humans 4.11.24
Ep 285 | 4.11.24

Could AI's doomsday be deferred?

Transcript

Dr. Robert Blumofe: You know, if you're a cybercriminal, you could make the case that November 30, 2022 was the greatest day in your life, because that was the day that OpenAI announced ChatGPT. And I'm not trying to throw ChatGPT in particular under the bus, but that was the day, the moment that set off generative AI mania. If you were a cybercriminal, your eyes lit up, and you realized that there is now this whole new type of tool that either is already available, or will soon be available to you to do some really dramatic things.

Dave Bittner: Hello, everyone, and a warm welcome to the Hacking Humans Podcast, brought to you by N2K CyberWire. Every week we delve into the world of social engineering scams, phishing plots, and criminal activities that are grabbing headlines and causing significant harm to organizations all over the world. I'm Dave Bittner, and joining me is Joe Carrigan from the Johns Hopkins University Information Security Institute. Hey, Joe.

Joe Carrigan: Hi, Dave.

Dave Bittner: We've got some good stories to share this week. And later in the show, my conversation with Robert Blumofe. He's executive vice president and chief technology officer at Akamai. All right, Joe, we are going to jump right into our stories here this week.

Joe Carrigan: Excellent.

Dave Bittner: And I am going to lead things off.

Joe Carrigan: Please do, Dave.

Dave Bittner: So this is something that sort of came by one of my feeds and it caught my eye. I thought it was something that would be good for our show. Let me start off by asking you, how would you define "stupid?"

Joe Carrigan: Ah, how would I define "stupid"? Good question. I actually have a definition to this, okay? There is an old saying that -- I think it was Einstein that said, "Doing the thing -- same thing over and over again and expecting different results is insanity." Yes.

Dave Bittner: Okay; I remember that.

Joe Carrigan: And my response to that is no, that's not insanity. Insanity is running up to people in a banana suit and kissing them, and doing all kinds of other crazy stuff.

Dave Bittner: [Laughs] Okay.

Joe Carrigan: Right? So what Einstein was talking --

Dave Bittner: So -- oh okay.

Joe Carrigan: About there is just the inability to learn, or, as it's known by its more common phrase, "stupidity". So that is what doing the same thing over and over again and expecting different results is.

Dave Bittner: Hmm.

Joe Carrigan: And whenever I do something and the same thing happens again and I'm surprised by it, the first thing I say is, "Well, that's the stupidity."

Dave Bittner: [Laughs] Okay. Well, I mean, your definition sort of aligns with what I'm going to be talking about here today. So there is a podcast called the "Knowledge Project Podcast".

Joe Carrigan: Okay.

Dave Bittner: It's hosted by Shane Parrish. And Shane posted an excerpt from a recent episode. This was with someone named "Adam Robinson", and he is a well-known author, educator, entrepreneur, and a hedge fund advisor --

Joe Carrigan: Hmm.

Dave Bittner: So rich guy. [Laughs]

Joe Carrigan: Rich guy, yes; guy with more money than he knows what to do with.

Dave Bittner: [Overlapping] hedge fund. He cofounded the Princeton Review, and actually his book on the test preparation industry was a New York Times bestseller. He is a rated chess master, so not just a rich guy, also smart guy.

Joe Carrigan: Yes.

Dave Bittner: [Laughs] He was mentored by Bobby Fischer.

Joe Carrigan: Really?

Dave Bittner: Yes.

Joe Carrigan: Well, that's impressive.

Dave Bittner: Yes. So he did his undergrad at the Wharton School of the University of Pennsylvania and has a law degree from Oxford, so you know, all around smart person.

Joe Carrigan: Yes.

Dave Bittner: [Laughter] Now, he was invited to speak at an investment conference in the Bahamas, and he chose his topic to be "How Not to be Stupid".

Joe Carrigan: Mmm.

Dave Bittner: I should probably have attended that talk. [Laughter] So why I thought this was interesting for us is because we say many, many times on the show, we talk about how when people find themselves having fallen victim to some sort of scam, usually one of the first things they say is, "I feel stupid."

Joe Carrigan: "I feel stupid," right.

Dave Bittner: Right; and our reply is, "Don't. You're not stupid, you're -- "

Joe Carrigan: "Human."

Dave Bittner: Right; [laughs] exactly. So I thought this aligning with that notion, that impulse that people have, that feeling of feeling stupid --

Joe Carrigan: Right.

Dave Bittner: When they have fallen prey to some kind of a scam. And so Robinson, Adam Robinson, did this presentation, structured this presentation about how not to be stupid. And he defined "stupidity" as, "Overlooking or dismissing conspicuously crucial information."

Joe Carrigan: Hmm.

Dave Bittner: And that kind of aligns with what you were saying, you know, doing the same thing over and over again and expecting a different result.

Joe Carrigan: Right.

Dave Bittner: Right, that's conspicuously crucial information. [Laughs]

Joe Carrigan: I would say yes.

Dave Bittner: Right, I mean it's not -- they're completely off base from each other.

Joe Carrigan: Yes.

Dave Bittner: But he lists seven factors that he says contribute to stupidity.

Joe Carrigan: Okay.

Dave Bittner: I'm going to list them here.

Joe Carrigan: Oh, I want to hear them.

Dave Bittner: Being outside one's normal environment or changing routines.

Joe Carrigan: Ah.

Dave Bittner: Okay?

Joe Carrigan: Okay.

Dave Bittner: Presence in a group; group think.

Joe Carrigan: Yes, group -- being present in a group will make you stupid.

Dave Bittner: The presence of an expert, or being an expert oneself.

Joe Carrigan: [Laughs] So wait a minute, wait a minute; so the presence of an expert --

Dave Bittner: Yes.

Joe Carrigan: Makes you stupid.

Dave Bittner: Could --

Joe Carrigan: And being an expert.

Dave Bittner: These are things that contribute to stupidity; stupid behavior.

Joe Carrigan: [Overlapping], right? Okay.

Dave Bittner: Yes. Being an expert oneself. I think that's interesting because perhaps it's coming from overconfidence.

Joe Carrigan: Yes.

Dave Bittner: Okay.

Joe Carrigan: Yes.

Dave Bittner: Tasks requiring intense focus.

Joe Carrigan: Hmm.

Dave Bittner: This makes sense to me.

Joe Carrigan: Yes, I would agree with that one; yes.

Dave Bittner: Information overload.

Joe Carrigan: Right.

Dave Bittner: Physical or emotional stress or fatigue.

Joe Carrigan: Ah, 100%, yes.

Dave Bittner: And urgency or rushing.

Joe Carrigan: Aha. [Laughter] We hear that a lot, don't we?

Dave Bittner: Right. So I would say, I don't know, half of these really fall into things that we talk about all the time.

Joe Carrigan: Right.

Dave Bittner: And what he says -- what Robinson says is that these factors are additive and can significantly increase the risk of errors.

Joe Carrigan: That is interesting.

Dave Bittner: Yes. One of the examples that he used was in hospitals, particularly here in the US, that all of these factors are present in hospitals, and actually there are a lot of accidental deaths here in the US --

Joe Carrigan: Yes.

Dave Bittner: As a result of many of these things.

Joe Carrigan: Yes, it's like the third leading cause -- third, fourth leading cause of death --

Dave Bittner: Yes, exactly.

Joe Carrigan: In the country?

Dave Bittner: Right, just medical mistakes.

Joe Carrigan: Medical mistakes, right.

Dave Bittner: Yes. He provides another example, he has an anecdote about Yo-Yo Ma, the famous cellist --

Joe Carrigan: Yes.

Dave Bittner: Who was on his way to a performance one time and he was running late, and he took a cab to the -- let's just say it was Carnegie Hall --

Joe Carrigan: Right.

Dave Bittner: And sprints out of the cab to go do the performance and left his cello.

Joe Carrigan: Leaves his cello. [Laughter] I've heard this story about Yo-Yo Ma.

Dave Bittner: Left his cello. Like and the anecdote is how could that be possible; right, how could it be possible that Yo-Yo Ma, a professional cellist, right, [laughter] who's probably spent more time with that cello than his mom, right?

Joe Carrigan: Right.

Dave Bittner: [Laughs] Right, like how could he possibly leave his cello behind?

Joe Carrigan: I totally empathize with Yo-Yo Ma on this.

Dave Bittner: Yes.

Joe Carrigan: Because I have left so many things behind.

Dave Bittner: [Laughs] Right.

Joe Carrigan: And it -- my fifth grade teacher, Helen Norris, would say to me, "You would forget your head if it wasn't attached."

Dave Bittner: Yes. Yes. [Laughs]

Joe Carrigan: Yes.

Dave Bittner: So this conversation -- again, this was a podcast, and we'll have a link to it in the show notes; what they really talked about was careful decision-making and the risks of multitasking, and also overloading your cognitive capacities. One of the interesting things they noted here -- it was kind of an aside, but it really caught my attention -- was how, if you're trying to concentrate on something -- I know this is true for me.

Joe Carrigan: Right.

Dave Bittner: If I'm trying to concentrate on something, like I'll turn off the radio.

Joe Carrigan: Yes.

Dave Bittner: Like let's say I'm in the car --

Joe Carrigan: Right, yes.

Dave Bittner: You know, and I'm coming up on a difficult intersection or something like that.

Joe Carrigan: [Laughs] I will turn the radio off for that, too.

Dave Bittner: Yes. Yes.

Joe Carrigan: Whenever I'm looking for the house number, I've got to turn the radio off.

Dave Bittner: Right; right. And it talked about how merely having a passenger in your car greatly increases the odds of you having a car accident --

Joe Carrigan: Yes, I'm sure it does.

Dave Bittner: Because you'll be distracted. They also talked about the difference between having a passenger in your car and having someone on the phone in your car, and having someone on the phone is way more dangerous than having a passenger.

Joe Carrigan: Really.

Dave Bittner: Yes; because the passenger being there with you can sense when you are in a situation that requires decision-making, right? You're coming up on a tricky intersection or --

Joe Carrigan: Yes.

Dave Bittner: You know, it's raining hard, or something like that. But someone on the phone is clueless --

Joe Carrigan: Right.

Dave Bittner: That those things may be happening, and they just keep, "Blah, blah, blah, blah," they keep talking to you, pulling away your attention and it --

Joe Carrigan: Right, distracting you.

Dave Bittner: Makes it much more dangerous.

Joe Carrigan: Right. So --

Dave Bittner: We've had -- go ahead.

Joe Carrigan: Doctors Lee and Dahbura at Hopkins did a research project on the effectiveness of distraction on phishing emails.

Dave Bittner: Ah.

Joe Carrigan: And they found out that, yes, distracting people really increases the effectiveness of a phishing email.

Dave Bittner: Right; right. Well, and it may -- again, if we look at this list that they put out here, I mean, some information overload, and that's what the scammers do.

Joe Carrigan: Right.

Dave Bittner: Physical or emotional stress, that's what the scammers do --

Joe Carrigan: That's what the scammers do.

Dave Bittner: Right?

Joe Carrigan: Right.

Dave Bittner: They get you to do things, they -- and of course there's always that urgency or rushing saying, you know --

Joe Carrigan: Yes.

Dave Bittner: "You need to do this now."

Joe Carrigan: And they make you focus on it and they -- yes the urgency and then they are the expert on things.

Dave Bittner: Right.

Joe Carrigan: Right, like we have that story about -- I can't remember her name anymore, but the reporter who gave away $50,000 in a shoebox.

Dave Bittner: Right.

Joe Carrigan: That guy portrayed himself exactly as an expert.

Dave Bittner: Right.

Joe Carrigan: "If you do that, I can't help you anymore."

Dave Bittner: Yes.

Joe Carrigan: Right.

Dave Bittner: Yes. So I really thought this was interesting and it had an interesting crossover with the kinds of things we talk about here every day, so --

Joe Carrigan: Yes.

Dave Bittner: Again --

Joe Carrigan: Sure does.

Dave Bittner: The -- it's actually a blog post based on a podcast. [Laughter] So we're going to link to the blog post. It's called "How Not to be Stupid", and then there are links in there if you want to actually check out the conversation. There's a lot more to this conversation, but this little excerpt here really was right up our alley, so I highly recommend you check it out. Again, it's the Knowledge Project Podcast, hosted by Shane Parrish, and the guest was Adam Robinson. All right, that's what I have this week. Joe, what do you have for us?

Joe Carrigan: Dave, my story comes from a listener who sent this in, named Michael.

Dave Bittner: Mmm.

Joe Carrigan: It was a story by Hayley Compton at the BBC.

Dave Bittner: Okay.

Joe Carrigan: And it goes like this, Dave, there's a lovely couple named Lucy and Chris who are both teachers in Derbyshire.

Dave Bittner: Okay.

Joe Carrigan: Do you know where Derbyshire is?

Dave Bittner: I do not.

Joe Carrigan: Me neither. I don't know. It's in England somewhere.

Dave Bittner: Yes.

Joe Carrigan: But they had recently had a baby, and Chris realized, as one does when you have a baby, that you don't have much more time for veggie games [phonetic], Dave.

Dave Bittner: [Laughs] Veggie games?

Joe Carrigan: Veggie games.

Dave Bittner: Okay.

Joe Carrigan: That's how [inaudible 00:11:24] says videogames.

Dave Bittner: Okay. [Laughs]

Joe Carrigan: So Chris said, "I'm going to take my gaming laptop and I'm going to sell it. You know, we could use the money. I don't use this anymore. We're going to sell it."

Dave Bittner: Okay.

Joe Carrigan: So they put it on Facebook Marketplace, and they get a hit from this guy --

Dave Bittner: Yes.

Joe Carrigan: And he asked all the right questions about it, right, like what's the graphics process [inaudible 00:11:43]?

Dave Bittner: Okay.

Joe Carrigan: How much RAM do you have? I want to hear about it.

Dave Bittner: Hmm. Right; so knowledgeable inquiries.

Joe Carrigan: Knowledgeable inquiries.

Dave Bittner: Yes.

Joe Carrigan: Lucy did some OSINT, right?

Dave Bittner: Okay.

Joe Carrigan: Open-source intelligence gathering, and she poked around his Facebook profile, found pictures of this guy with his wife and his kids, showed where he lived, where he worked. Everything looked great. So he comes over to their house and they say, "You want a drink," and he's like, "Yes, I'll take a drink." And he interacts with the baby, you know, with the three-month-old baby --

Dave Bittner: Hmm.

Joe Carrigan: And they agree on a price of 700 pounds.

Dave Bittner: Hmm, okay.

Joe Carrigan: That's --

Dave Bittner: What's that in real money, Joe?

Joe Carrigan: I don't know, Dave, I don't know, it's -- [Laughter] you know, whenever I hear about this stuff I say, "Just reference our communication from July 4, 1776." [Laughter] I don't care about that anymore.

Dave Bittner: I see. Very nice.

Joe Carrigan: I don't care how much a pound is.

Dave Bittner: Very nice.

Joe Carrigan: There I am --

Dave Bittner: Very nice.

Joe Carrigan: Doing the ugly American thing again. [Laughter] I'm really good at that.

Dave Bittner: They're going to meet you at the border one day --

Joe Carrigan: Right. [Laughs]

Dave Bittner: And say, "Mr. Carrigan, you can turn right back around --

Joe Carrigan: Right.

Dave Bittner: And head back to --

Joe Carrigan: "You're not welcome in Canada."

Dave Bittner: Your Yankee Doodle," yes.

Joe Carrigan: Yes. "Yankee Doodle", [laughter] a song made up to make fun of us; right. So this guy opens up what looks like a banking app from a well-known bank --

Dave Bittner: Hmm.

Joe Carrigan: And Chris sees the main page and they see the transformation -- or the transfer page --

Dave Bittner: Yes.

Joe Carrigan: The transfer confirmation page, and Chris actually enters his banking details and the guy shows him a confirmation page. And there's a picture of this confirmation page in the article.

Dave Bittner: Okay.

Joe Carrigan: Chris apparently took a picture of it. Fifteen minutes later, the funds had not arrived in the bank, but now the baby's getting hungry, and Lucy does not want to nurse the baby with a stranger in the house --

Dave Bittner: Okay.

Joe Carrigan: Which I totally understand.

Dave Bittner: Sure.

Joe Carrigan: So they say, "Okay, we'll just wait for the money to show up. Have a good day."

Dave Bittner: The guy goes on his way.

Joe Carrigan: The guy leaves.

Dave Bittner: Okay, yes.

Joe Carrigan: Later that day, the money still hasn't shown up, right?

Dave Bittner: Oh.

Joe Carrigan: So they call the guy, and their number is blocked.

Dave Bittner: Oh.

Joe Carrigan: They're blocked on Facebook as well.

Dave Bittner: Hmm.

Joe Carrigan: What I find most disconcerting about this is they reported the crime to their bank, to the insurance company, to the police, and to Facebook, but they say no one investigated. Now, it's a 700-pound laptop.

Dave Bittner: Yes.

Joe Carrigan: And by that I don't mean you can't lug it around. [Laughter] It's not a really -- it's an old Kaypro.

Dave Bittner: Right. [Laughter]

Joe Carrigan: It's an Osborne 1.

Dave Bittner: There you go; right, sure. [Laughter]

Joe Carrigan: So the police say that they have just willingly given away their property. That's the police determination; not that this guy came in and essentially stole a laptop from them.

Dave Bittner: Right, he committed fraud.

Joe Carrigan: He did, he committed fraud.

Dave Bittner: Yes.

Joe Carrigan: They're like, "No, you gave away your property." They didn't lose any money, so the bank can't really help them. They can change their account number, because he did enter his account number into this app.

Dave Bittner: Yes; right.

Joe Carrigan: Facebook, the ever helpful Facebook, Dave. [Laughter] They say, "We need the guy -- we need the link to the guy's profile." Well, guess what, they can't get it, because he's blocked them --

Dave Bittner: Right.

Joe Carrigan: Right? Facebook can get this information. Facebook has the information.

Dave Bittner: Sure.

Joe Carrigan: Facebook knows the marketplace, knows the conversation, they see -- they could see -- they can easily go through and find out everybody that conversed with him about this item and see who has blocked him. It's all in their database. It's all there. They could find this guy.

Dave Bittner: Sure.

Joe Carrigan: No problem. [Overlapping] --

Dave Bittner: Facebook probably has his GPS coordinates. [Laughs]

Joe Carrigan: Right. They probably -- yes.

Dave Bittner: At any moment --

Joe Carrigan: They --

Dave Bittner: They know whether he's straight or gay, right?

Joe Carrigan: They know where he lives. Yes, they know where he lives.

Dave Bittner: Exactly. They know what his favorite pizza toppings are; they know --

Joe Carrigan: Yes.

Dave Bittner: [Laughs] So much about this guy, right?

Joe Carrigan: Sure. "Sorry, we can't help you."

Dave Bittner: Right. [Laughs]

Joe Carrigan: But one thing the cops did, the ever helpful cops here, the police said, "You should call Action Fraud to help avoid this in the future."

Dave Bittner: Okay.

Joe Carrigan: And Action Fraud had this to say -- the BBC had reached out to Action Fraud, which is a UK scam prevention service --

Dave Bittner: Okay.

Joe Carrigan: Right; and you can report your scams there. I don't know that they have a lot of enforcement ability, but they can track things. It's kind of like the Internet Crime Complaint Center, but I think it's more proactive than that.

Dave Bittner: Okay.

Joe Carrigan: I don't -- I'm not really sure.

Dave Bittner: All right.

Joe Carrigan: But they said that in 2019 they received just under 5,000 reports of Facebook Marketplace scams; last year, 20,000.

Dave Bittner: Yes.

Joe Carrigan: Twenty thousand of these scams.

Dave Bittner: Wow.

Joe Carrigan: There's a quote from Nick Stapleton, who is a copresenter on the BBC's Scam Interceptors. I don't know if that's a show or a podcast, or what.

Dave Bittner: Yes.

Joe Carrigan: But Nick had this to say. He says, "Behave online as you would in real life. Assume that you're dealing with someone you don't know. Presume that they are not trustworthy until proven otherwise. Facebook Marketplace is an add-on to an existing social media site. You need to treat it like the classified ads of a newspaper. You have no idea who has listed that advert."

Dave Bittner: Right.

Joe Carrigan: So just because you see their picture or just because you see pictures of their wife and kids, there's no guarantee of authenticity there.

Dave Bittner: Sure.

Joe Carrigan: It's -- it could be a completely fabricated profile. And for all we know, this guy just shut his profile down and deleted all of his data from Facebook, right? No, but they still have it. [Laughter] But --

Dave Bittner: Count on it.

Joe Carrigan: Yes. Maybe that's why they're not helping is because --

Dave Bittner: Right.

Joe Carrigan: They might have to tell you this.

Dave Bittner: The reveal, yes.

Joe Carrigan: So here's the quote from Meta, because I love these quotes from Meta.

Dave Bittner: Sure.

Joe Carrigan: "We don't want anyone to fall victim to these criminals, which is why our platforms have a system to block scams."

Dave Bittner: Yes.

Joe Carrigan: "People can report this content in a few simple clicks, and we work with the police to support their investigations," which means we don't work with you to support the investigations.

Dave Bittner: Right.

Joe Carrigan: We'll work -- if the police send us a warrant, we'll give it to them.

Dave Bittner: Right, exactly. [Laughs]

Joe Carrigan: Right.

Dave Bittner: Right.

Joe Carrigan: "We have a trained team of reviewers who check these reports 24/7 and move quickly to remove content or accounts which violate our guidelines." It sounds exactly like this was copied and pasted out of something into a response --

Dave Bittner: Sure.

Joe Carrigan: To the BBC.

Dave Bittner: Yes.

Joe Carrigan: This is --

Dave Bittner: It's also the biggest load of crap I've ever heard in my life, Joe.

Joe Carrigan: It is. Thank you, Dave. [Laughter] Thank you. I am -- I'm so glad you said that, because that is exactly what I thought when I read this. You don't care. Facebook does not care --

Dave Bittner: No.

Joe Carrigan: If you lose a $700 laptop because somebody exploited their system.

Dave Bittner: They do not, no.

Joe Carrigan: They don't.

Dave Bittner: No. I will tell you, you know -- and you know in the past year I got back on Facebook --

Joe Carrigan: Yes.

Dave Bittner: And it absolutely pains me to be there.

Joe Carrigan: You're loving it, right, Dave?

Dave Bittner: Ah. I think I might have said this last week, because someone I saw recently described it; they said, "Facebook -- " I think actually it said, "Social media --

Joe Carrigan: Yes.

Dave Bittner: But Facebook in particular is like chemotherapy."

Joe Carrigan: Right.

Dave Bittner: Right, it has its purposes, but at its base it is poison.

Joe Carrigan: Right. [Laughs]

Dave Bittner: And I think that is true. And there are scams that I see come by every day on Facebook and Facebook does nothing about them.

Joe Carrigan: No.

Dave Bittner: They come by several times a day.

Joe Carrigan: No.

Dave Bittner: I --

Joe Carrigan: And --

Dave Bittner: Diligently reported them for a while --

Joe Carrigan: Yes.

Dave Bittner: And then I shifted my efforts from diligently reporting them to diligently hiding them, because I'm sick of seeing them, right? [Laughter] And that's what happens.

Joe Carrigan: Right.

Dave Bittner: I will say, my oldest son is very active on Facebook Marketplace. He does a lot of buying and selling.

Joe Carrigan: Right.

Dave Bittner: And he is the first person to say, "Cash only.

Joe Carrigan: Yes.

Dave Bittner: Cash only."

Joe Carrigan: "Cash only," yes.

Dave Bittner: None of these --

Joe Carrigan: Apps.

Dave Bittner: Payment apps, no, nothing.

Joe Carrigan: No.

Dave Bittner: "Cash only, and meet in a neutral location that is not your home."

Joe Carrigan: Right. Yes.

Dave Bittner: You know, here in the US a lot of local police stations have little places in their parking lot --

Joe Carrigan: Yes.

Dave Bittner: That are set up for people to do these sorts of exchanges, buying and selling --

Joe Carrigan: Yes.

Dave Bittner: So that it's a safe place, a monitored place. And if you got a crook, chances are they're not going to want to meet you at the police station --

Joe Carrigan: Right.

Dave Bittner: Right? [Laughs]

Joe Carrigan: That's right.

Dave Bittner: Right; so, "Meet me at the police station. Cash only." And you'll hear all kinds of excuses for, "Oh my gosh, I don't have any cash," "Listen, I -- " you know, "the ATM was broken. How about we just use this app here really quickly? Come on, I'm a good guy."

Joe Carrigan: Right, right.

Dave Bittner: And, "No, cash only."

Joe Carrigan: "Cash."

Dave Bittner: That's it.

Joe Carrigan: "Where's the cash? Show me the cash --

Dave Bittner: Yes.

Joe Carrigan: Or we're done here."

Dave Bittner: Yes.

Joe Carrigan: Yes, that's really the only way to do it; and that's really the only way to make yourself secure in these situations.

Dave Bittner: Right.

Joe Carrigan: And trust no one on Facebook.

Dave Bittner: No, you can't. I mean, even --

Joe Carrigan: Right.

Dave Bittner: People who claim to be your friends.

Joe Carrigan: Yes.

Dave Bittner: It might not be them.

Joe Carrigan: It might not be them. [Laughter] That's right. Your friends may have had their account compromised; right.

Dave Bittner: Right. Yes, just -- well look, I told you about the dog thing that I fell for --

Joe Carrigan: Yes.

Dave Bittner: A couple weeks ago. Another thing happened a couple weeks ago. I was like, "Oh -- " I said -- I was sitting on the couch with my wife and I said, "Oh, look -- " you know, "Uncle -- " so and so "has just sent me a friend request." She says, "Yes, that's a scam. You're already friends with him. I got one, too. Delete it."

Joe Carrigan: [Laughs] Right.

Dave Bittner: I'm like, "Okay." [Laughs] Oh my God. You know, and it's so maddening. Joe?

Joe Carrigan: Yes?

Dave Bittner: Makes me feel stupid.

Joe Carrigan: It does. [Laughter] I got one from somebody that's not related to me, but a guy I know who he is --

Dave Bittner: Yes.

Joe Carrigan: And I got another friend request from him, same profile pic.

Dave Bittner: Yes.

Joe Carrigan: And it was one of those, "Hey, I found -- I saw your name on the fund distribution page."

Dave Bittner: Hmm.

Joe Carrigan: And I said, "Really; interesting." I sent him a link to one of my talks with the attorney general. [Laughter] And he blocked me.

Dave Bittner: That's funny. [Laughter]

Joe Carrigan: The attorney general of Maryland; former attorney general of Maryland.

Dave Bittner: [Laughs] All right, well that is an interesting story, and we will have a link to the story in the show notes. Of course we would love to hear from you. If there's something you would like us to consider for the show, you can email us. It's hackinghumans@n2k.com. All right, Joe, it is time to move on to our "Catch of the Day". [ Reeling in fishing line ]

Joe Carrigan: Dave, our "Catch of the Day" comes from someone also named Michael, but not the same Michael. [Laughter] It's all Michaels all the way down today. But Michael sent this along, and it's a fairly standard "Catch of the Day". But what I wanted to share more importantly than the "Catch of the Day" is what Michael said about it.

Dave Bittner: Okay.

Joe Carrigan: He said, "I thought I would pass this along. I was under a lot of stress and surrounded by chaos when I received it. It really had me panicked, but I followed all the recommendations, slowed down, and started looking for the signs. Thanks for keeping us educated and the constant reminders that it can happen to any of us at anytime." So Dave, why don't you go ahead and read the geek tech memo that [overlapping] --

Dave Bittner: Yes; so I'm going to read it and then -- but there's a lot of stuff in here that deserves a second look, so I'm going to start it -- so there's a title, a header, a graphic that says, "Geek tech --

Joe Carrigan: Right.

Dave Bittner: Date, March 30th, '24, bill to Michael, receipt number. Hey Michael, your WinTech 24 plan renewal request has been started as your automatic renewal option has been activated. We have charged you $357.63 with your account for your plan renewal request. Item overview, product name, user, provider, validity, customer number, total $357.63. Order status confirmed. Mode of payment, online. You have 48 hours if you did not authorize this charge. Please contact us at -- to cancel the plan." And then there is a phone number.

Joe Carrigan: Right.

Dave Bittner: Now, let's go through this --

Joe Carrigan: Okay.

Dave Bittner: Because there are some interesting -- first of all, the date, did anything jump out at you with the date?

Joe Carrigan: It says, "March 30th, '24," not "2024".

Dave Bittner: Yes, that's one thing. What's the other thing? There's one other thing in there.

Joe Carrigan: Hmm.

Dave Bittner: Look at the 30.

Joe Carrigan: Oh, is that an O?

Dave Bittner: It's an O.

Joe Carrigan: Aha.

Dave Bittner: It's not a zero; yes. [Laughs]

Joe Carrigan: Interesting. No, I didn't even notice that.

Dave Bittner: Yes; well see, that's how they get you. Feel stupid?

Joe Carrigan: No. [Laughter] [Overlapping] --

Dave Bittner: Better man than me. [Laughter] Skipping down to the receipt number. What's wrong with the word "receipt"?

Joe Carrigan: It is misspelled.

Dave Bittner: It is misspelled, yes. [Laughs] And then going further down, there's a little typo, "Please contact us at -- to cancel the plan."

Joe Carrigan: Right.

Dave Bittner: That's a giveaway. But then the phone number itself also --

Joe Carrigan: Oddly spaced.

Dave Bittner: Oddly spaced, so it's like --

Joe Carrigan: So those are probably Os instead of zeros.

Dave Bittner: Right, there's a couple of zeros in the phone number that are probably Os not zeros. And there's weird spacing so that the automation would have trouble figuring out -- the spam catcher would have trouble figuring out --

Joe Carrigan: Right.

Dave Bittner: That this was indeed a phone number.

Joe Carrigan: Yes.

Dave Bittner: So yes it's interesting. But you know, a tip of the hat to Michael for doing all the right things.

Joe Carrigan: In fact, the last three digits of the phone number, Dave -- I just set it all to lowercase; it is not zero one zero, it is O-I-O. [Laughter] It's like Old McDonald has a phone number.

Dave Bittner: Exactly.

Joe Carrigan: OIOIO; OIOIO. That's great.

Dave Bittner: All right, well, Michael, thank you for sending this in, and again, tip of the hat to you for doing all the right things and taking a deep breath and slowing down --

Joe Carrigan: Yes.

Dave Bittner: And not falling for this.

Joe Carrigan: Yes.

Dave Bittner: But I think you're a reminder that this is when they get you, right, you're under stress --

Joe Carrigan: That's exactly right.

Dave Bittner: You're surrounded by chaos, and it's so easy to overlook this. It can happen --

Joe Carrigan: Very much like your story today.

Dave Bittner: To anybody.

Joe Carrigan: Yes.

Dave Bittner: Yes. So thank you for sending this in. Again, if you have something you'd like us to consider for our "Catch of the Day", you can email us. It's hackinghumans@n2k.com. [ Music ] Joe, I recently had the pleasure of speaking with Robert Blumofe. He is the executive vice president and chief technology officer at Akamai, certainly a well-known tech company. Here's our conversation.

Dr. Robert Blumofe: We're in the middle of what I think we'd have to call a "generative AI mania". And I think that the narrative is all over the map. I've oftentimes referred back to a quote from the science fiction writer and futurist, Arthur C. Clarke, who said that any sufficiently advanced technology is indistinguishable from magic.

Dave Bittner: Mmm.

Dr. Robert Blumofe: And so I think we're in this weird phase right now where, you know, generative AI -- well AI more broadly is affecting pretty much all of our lives in rather dramatic ways, but to the vast majority of the population, it really is indistinguishable from magic. And that creates an opportunity to frame things in any number of ways, from, you know, utopian benefits to a future society to dystopian darkness that's going to overtake us all. And it's really hard to navigate the reality from the hype, both the good hype and the bad hype.

Dave Bittner: Yes, it's funny, you know, I think of myself, having grown up in the '80s -- I hear people joke about how we came up expecting that maybe we'd get "Star Trek", but instead, maybe we got "Blade Runner".

Dr. Robert Blumofe: [Laughs] Yes, right? Yes, well, I think we all thought -- you know, those of us who grew up -- you know, I'm actually -- I was born in the '60s and, you know, that was the golden era of aviation and, you know, from space travel and, you know, to the Concorde, and the 747, and the SR-71. And then at least from an outsider's point of view, it seems to all just have come to a stop. You know, if you're in that industry, it's a whole different story, I'm sure, but to an outsider, you know, the planes of today look roughly like the planes of 50, 60 years ago; they travel about the same speed with the exception, of course, of the Concorde, which doesn't even exist anymore. But what we couldn't forecast, what I think we all were just completely surprised by was what's happened in telecommunications, just astounding, some of which has been obviously to just tremendous benefit, and yet, there is this dark side.

Dave Bittner: You know, I think folks talk about this idea of a cyber 9/11 or a cyber Pearl Harbor. We haven't seen that come to pass yet. When you hear people use those kinds of terms, that kind of breathless approach, what do you think about that?

Dr. Robert Blumofe: Well, you know, since, you know, we're talking about AI and generative AI, I think these ideas have been put forward with AI probably more than any other particular area of technology, you know, this idea -- because we've all seen movies, you know, AI takes over and attempts to kill us all. That's been the subject of multiple movies. Of course, we all know the "Terminator" movies, and I think there has been a lot of speculation and worry about these so-called "doomsday" scenarios, which -- personally I think we're far, far away from anything like that; very hypothetical. And you know, ironically, I think in many ways, it's giving the current technology maybe more credit than it deserves. You know, because one of the things that we've learned as we all get experience with, say, large language models or other forms of generative AI, is that there are a number of things that they're really not good at. And one of those things is planning. These systems are not good at planning. They oftentimes, by the way, give the semblance of planning by being able to produce an output that looks like a plan and maybe even is a good plan, but that's different than actual planning. And if these things are going to take over, well, they're going to need to do some planning. And I don't think that the current technology or anything that we're going to have in the near future has anything resembling the kind of planning capabilities that would be required for these sorts of doomsday "Terminator" scenarios.
But the thing that really kind of -- that gets me is that a lot of the conversation that focuses on this doomsday scenario, which is so hypothetical and I think just at this point not really worth time, is taking away from the conversation that we need to be having, which is, you know, instead of talking about the doomsday scenario, which is hypothetical and probably not happening, what about the very, very bad day scenario, which is happening, and is going to happen with increasing frequency? And so I worry that the conversation becomes dominated by this sort of doomsday conversation and we're not spending enough attention talking about the very, very bad day scenario; because we're going to have a lot of very bad -- we're going to have a lot of bad days, thanks to AI.

Dave Bittner: Mmm. Well, what are some of the realistic perils that you see potentially affecting us here? What are your concerns?

Dr. Robert Blumofe: Well, you know, it stems from, you know, this idea of deepfakes, social engineering. You know, that's really, I think, at the core of probably what the biggest worry that I have is -- you know, I remember, by the way -- it was probably a couple of years ago maybe when I first started seeing this technology emerging, you first start to get experience playing with large language models and you start getting some experience playing with these image generators. And I remember that back then, probably the very first demos I ever saw almost all of them had a flavor of what I call "mimicry". It was, you know, you would ask the large language model, you'd say something like, "Please write a user manual in the style of the author, Tom Wolfe -- "

Dave Bittner: Mmm.

Dr. Robert Blumofe: Or something silly, you know, "Please generate a -- " you know, "an image of cats playing on the Moon in the style of Matisse." And the results are -- of course, they're just stunning, the results that you get from instructions like that. And I remember seeing these demos and thinking, "Why," you know, "why is it that these systems have the ability to do this kind of mimicry," because it's so dangerous. You immediately think about -- or at least I did, immediately think about the bad ways in which people could use this mimicry capability. You know, we've seen examples of that recently. You know, we saw, you know, headlines about the deepfake Joe Biden phone calls, which is just the tip of the iceberg, or just maybe sort of the beginning of a lot more to come. But my point being, it just -- back then I was sort of perplexed; because, you know, I believe that the people who -- you know, the engineers, the scientists who created these tools, you know, they're not trying to do harm. They want to build good tools. They want to build things that are going to help us all and do good. So it does beg the question, "Well, why did they include this ability to do mimicry?" And it took a little bit of time for me to sort of get under the covers and understand more about how these things worked, when I realized that it's kind of inherent in what these things are; the engineers who built these large language models and these stable diffusion models and whatnot, they didn't actually program in any notion of mimicry, but rather, it's simply inherent in what these things are, because remember, they're trained on, you know, all the text, say, on the internet or on the web, whatever, or trained on the various images that are available. So of course, there are images of Matisse paintings, and of course, there are writings from Tom Wolfe.
So once the system's been trained on these things, of course, it has the ability to produce output in the style of Tom Wolfe or in the style of Matisse. So it's inherent in what the technology is; and in fact, the challenge actually is putting the guardrails back in to make it so that you can't misuse that capability. Actually, taking mimicry out is much harder than putting the mimicry in, because it's just an inherent part of the way these models work.

Dave Bittner: Yes, I mean, it's a fascinating insight. I mean, my understanding, albeit incomplete, is that at their core these things are probability engines. And so, you know, that's a big part of how they end up with what people describe as an illusion of intelligence, or sentience, or however you want to describe it. Do you think it's realistic to expect that we can put meaningful guardrails on these systems?

Dr. Robert Blumofe: That is a great question. You know, and by the way, you make the point about ascribing some notion of sentience to these things. I think there's a natural tendency to want to ascribe some notion of intelligence to anything that can carry on a conversation. And I forget who I first heard say that. Somewhere along the line I read that somebody wrote that, or I heard somebody say that. So I'm actually now mimicking somebody else. [Laughter] But I like that insight. It's just absolutely true: if it's something that can carry on a conversation, you want to ascribe intelligence to it. But as we've seen, it's possible to carry on a conversation simply through this, as you said, probabilistic model, simply through this simple model that's basically, you know, looking at patterns of text, and then calculating reasonable probabilities for, you know, how do you continue that text pattern; or in the case of images, you know, what are some reasonable probabilities for the next pixels or the next patches of pixels. The question -- and yes, you're asking about -- well, guardrails, it's hard. And from everything that I've seen -- you know, first of all, again, the recognition that the guardrails are kind of put in after the fact. The ability of these things to be used to do harm like mimicry, and deepfakes, and whatnot, that's inherent in what the thing is. And so governing that and controlling that is done through, as you said, this mechanism of guardrails, which is sort of an after-the-fact mechanism. It's something that has to be put in after the fact. And I think what we're seeing is that it's remarkably hard to do. You know, we read about jailbreaks, people are publishing so-called "jailbreaks". And even without people intentionally going through jailbreaks and breaking through the guardrails, it happens very frequently accidentally. And we also see unintended consequences.
You know, for example, you know, we saw this with, you know, Google Gemini where they basically have had to pull back because it was producing some really odd output. And all of that, as far as I understand it, was a consequence of good intentions. You know, the engineers at Google wanted to put in reasonable guardrails to make sure that their models were not perpetuating biases or doing other bad things. So the intention was very, very good, but the execution is so difficult that they ended up with unintended consequences. And so the net is that -- my sense is that guardrails are really, really hard, not only because they can be broken by people intentionally breaking them, but because they can have unintended consequences and oftentimes don't do what you want them to do.
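The "probability engine" behavior Dave and Robert describe -- looking at patterns of text and calculating reasonable probabilities for how to continue them -- can be sketched in miniature. This is only an illustrative toy, not how any real large language model is implemented; the vocabulary and probabilities below are made up:

```python
import random

# A toy "language model": for each word, the probabilities of the
# next word, as if estimated from training text. These numbers are
# invented purely for illustration.
next_word_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "moon": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def continue_text(word, steps, rng):
    """Extend a text by repeatedly sampling a probable next word."""
    out = [word]
    for _ in range(steps):
        probs = next_word_probs.get(out[-1])
        if probs is None:  # no known continuation; stop
            break
        words = list(probs)
        weights = [probs[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(continue_text("the", 3, random.Random(0)))
```

Real models do this over enormous vocabularies with probabilities computed by a neural network rather than a lookup table, but the loop is the same: pick a likely continuation, append it, repeat -- which is also why mimicry of any style present in the training data comes for free.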

Dave Bittner: I'm curious what you and your colleagues there at Akamai are seeing in terms of the adoption by cybercriminals, by scammers of these kinds of tools. Where does the focus seem to be?

Dr. Robert Blumofe: Well, so absolutely; the cybercriminals are absolutely adopting these tools. I actually think that, you know, if you're a cybercriminal, you could make the case that November 30 of 2022 was the greatest day of your life, because that was the day that OpenAI announced ChatGPT. And I'm not trying to throw ChatGPT in particular under the bus, but that was the day, the moment that set off generative AI mania. If you are a cybercriminal, your eyes lit up and you realized that there is now this whole new type of tool, that either is already available or will soon be available to you to do some really dramatic things. And I think we're at the early stages of that. You know, I've oftentimes talked about sort of the long arc of cybersecurity, which is, I think, you know, for many, many years, maybe even decades -- you know, if you say cybersecurity, you know, it goes all the way back to, say, the beginning of the web or even the beginning of the internet; maybe you'd want to pick out -- I think 1988 was the Robert Morris worm; maybe call that day one. So we've been dealing with cybersecurity for decades now. And for most of that history, aside from, say, nation-states, the primary actor that we were concerned with was the so-called "hacktivist", who was really just doing these things to make a point, to show off to friends. They would perpetrate these attacks sort of once in a while. And by today's standards, I will also say they were very unsophisticated. You know, so fast-forward to today, where I think if we look back, it's really just been a handful of years, two, three, four years now where the landscape changed and the dominant bad actor went from being a hacktivist to a criminal who's motivated by money. They're very organized, they have very sophisticated tools, and now it's no longer just perpetrate an attack once in a while; these attacks are literally going on constantly.
And the tools, you know, are effective, they're scalable, so they can go down-market, very sophisticated. And as soon as you make money the motivation, as opposed to, say, making a point or showing off to your friends, that changes the game dramatically. And I think we've seen that over the last few years with the rise of ransomware, extortion attacks, things like that, that the level of malignancy, the level of potency of these attacks has gone up dramatically just over the last few years. And I think what we're going to see now is that over the next few years, those same bad actors -- as powerful as they are today, they're going to get considerably more powerful, because I think the next few years are going to be dominated by them adopting these tools and focusing heavily on things like social engineering. [ Music ]

Dave Bittner: Joe, what do you think?

Joe Carrigan: I like how he starts off with Arthur C. Clarke.

Dave Bittner: Yes.

Joe Carrigan: My opinion of Arthur C. Clarke is that he was a techno prophet, Dave.

Dave Bittner: Yes.

Joe Carrigan: The guy predicted so much stuff. And these LLMs are like magic to us.

Dave Bittner: Mmm.

Joe Carrigan: So it -- he's right about -- he was right about that, that if you don't understand the technology, people are going to think it's magic. I mean, I frequently fantasize about time travel.

Dave Bittner: Yes?

Joe Carrigan: I don't know why I waste my time doing this.

Dave Bittner: Okay. [Laughs]

Joe Carrigan: But I -- imagine going back to, you know, the Salem witch trials and pulling a phone out. [Laughs]

Dave Bittner: Right.

Joe Carrigan: They'd burn you at the stake for that --

Dave Bittner: Yes. [Laughs]

Joe Carrigan: Right? I mean, that's the kind of thing. Michael Crichton, also a great predictor of technology -- actually more of a techno philosopher.

Dave Bittner: Mmm.

Joe Carrigan: And I think a lot of his -- one of his statements from "Jurassic Park" applies here, that you're so busy wondering if you can do something, you never really stop to think if you should do something.

Dave Bittner: Right.

Joe Carrigan: And we're all familiar with Jeff Goldblum's line from that or if you read the book --

Dave Bittner: Yes.

Joe Carrigan: His character's line. I never read the book, I only saw the movies --

Dave Bittner: Hmm.

Joe Carrigan: Because I like dinosaur movies. [Laughter] I've read other Michael Crichton books; yes. When we're looking at the future, it tends -- you know, we think utopian or dystopian, you know, "Star Trek" or "Blade Runner".

Dave Bittner: Yes.

Joe Carrigan: I'm leaning towards "Blade Runner", Dave --

Dave Bittner: Yes..

Joe Carrigan: Much more dystopian.

Dave Bittner: Certainly lately. [Laughs]

Joe Carrigan: Yes. Yes. The impact of the internet as a whole, I mean, not just everything that -- I mean just we never could have predicted the impact of the internet.

Dave Bittner: Yes.

Joe Carrigan: This has been huge.

Dave Bittner: Yes. I will say the thing that I never saw coming when I was -- because I was pretty active with computers and all -- as this stuff came about. And I know you were, too.

Joe Carrigan: Yes, I was.

Dave Bittner: I remember sitting there and, you know, in the days of dial-up modems, the thing that I did not predict, the thing that I could not envision was the wireless internet, was the --

Joe Carrigan: The Wi-Fi.

Dave Bittner: Well, Wi-Fi, you know, all of the -- our mobile devices --

Joe Carrigan: Oh, okay.

Dave Bittner: That there would be ubiquitous, wireless connectivity --

Joe Carrigan: Oh, right, yes.

Dave Bittner: Ubiquitous wireless data connectivity --

Joe Carrigan: Right.

Dave Bittner: Pretty much no matter where you are.

Joe Carrigan: Yes.

Dave Bittner: That I did not see coming. I thought you just have faster modems, right?

Joe Carrigan: Right. [Laughter]

Dave Bittner: Yes.

Joe Carrigan: Sixty-six hundred baud data [overlapping] --

Dave Bittner: Yes, I mean, I just lacked the vision to see that that -- and in retrospect, it's obvious, but at the time, I just didn't think -- didn't have the vision to see that being the inevitable outcome.

Joe Carrigan: Yes, I don't know that I saw that. But as soon as it became available like with -- right when the iPhone came out --

Dave Bittner: Yes.

Joe Carrigan: I was like, "This is just going to get more widely available and cheaper over time."

Dave Bittner: Yes.

Joe Carrigan: And now everybody has unlimited data on their phone, okay?

Dave Bittner: Right.

Joe Carrigan: I even have a hotspot on my phone where I can use my phone as a Wi-Fi access point --

Dave Bittner: Sure.

Joe Carrigan: And work anywhere in the world.

Dave Bittner: Yes.

Joe Carrigan: You know, it's -- you know, as long as there's -- as long as I have a cell phone connection, I'm good.

Dave Bittner: Yes.

Joe Carrigan: There are some places where that doesn't work, but it works most places. Talking about the cyber Pearl Harbor event or maybe the Skynet event --

Dave Bittner: Yes.

Joe Carrigan: If we're going to make movie references all day, I'm going to do that. [Laughter] I agree that we are pretty far from that. We can always just power off these systems as well. That's one of the things. And now making a "Matrix" reference as well. [Laughter] The difference between, you know, Skynet and an LLM is at some point, somebody can walk up and turn off enough machines that ChatGPT stops working --

Dave Bittner: Yes.

Joe Carrigan: Right? These AI systems are not good at planning. I'm kind of relieved to hear that. [Laughter] Of course right now there's some AI researcher going, "Ah, that's an interesting research problem."

Dave Bittner: Sure. Yes.

Joe Carrigan: Soon they will be good at planning. Edit that part out. [Laughter] And then -- yes; then they will be good at planning, and then humans are history.

Dave Bittner: Yes.

Joe Carrigan: No, I don't think so. But I like what Robert is talking about here. Worrying about the cyber Pearl Harbor event is not as important as worrying about the very bad day event --

Dave Bittner: Hmm.

Joe Carrigan: Which is a much more realistic problem that actually happens to companies and organizations within governments and NGOs all the time.

Dave Bittner: Yes.

Joe Carrigan: You can't watch the news without hearing about some company having a very bad day.

Dave Bittner: Sure.

Joe Carrigan: I like the discussion about the why of mimicry. Why is it that these things do mimicry? And that's because that's ontologically what these things are -- they're mimics; these LLMs are just mimics.

Dave Bittner: Yes.

Joe Carrigan: You really can't take the mimicry out, because if you take the mimicry out, you've taken away the ability of the model to perform. You're correct in the statement that you made here, that all these systems give the illusion of intelligence or sentience --

Dave Bittner: Right.

Joe Carrigan: That there's nothing there. There's no "there" there, if you will.

Dave Bittner: Yes. I wonder if that is ultimately -- will ultimately be a distinction without a difference.

Joe Carrigan: Yes, that's a good question.

Dave Bittner: Yes.

Joe Carrigan: And then there are the people who philosophize about are we even there?

Dave Bittner: Yes.

Joe Carrigan: You know, is our consciousness even real? And that's where I check out, Dave. [Laughter] That's where I go, "Okay, now you're getting way too pedantic."

Dave Bittner: Sure.

Joe Carrigan: That's something -- I'm just not going to go down that road with you. [Laughter] The best we can do is put guardrails in after the fact because of the nature of these things, but people jailbreaking these guardrails and getting around them is going to happen.

Dave Bittner: Yes.

Joe Carrigan: Also, there are models out there with no guardrails you can just download.

Dave Bittner: Yes.

Joe Carrigan: If you have a powerful enough computer, you can run them.

Dave Bittner: Yes.

Joe Carrigan: There is the problem that Robert was talking about with Gemini, and with all of these LLMs, and that's that at the bottom of all of these models, there is the training data. And maybe that data has some biases in it, or perhaps the data is not biased, but the model makes inferences we believe to be incorrect.

Dave Bittner: Yes.

Joe Carrigan: Right? Remember Tay?

Dave Bittner: Yes.

Joe Carrigan: That was --

Dave Bittner: I do.

Joe Carrigan: All biased data, right?

Dave Bittner: Microsoft's Tay, yes.

Joe Carrigan: Yes.

Dave Bittner: Yes.

Joe Carrigan: It lasted 16 hours on Twitter, Dave, [laughter] before it became a neo-Nazi and they had to shut it down.

Dave Bittner: Right. Yes.

Joe Carrigan: Hilarious, but -- and they replaced it with Zo, right, which was criticized because, in order to avoid controversy, they had to introduce biases.

Dave Bittner: Yes.

Joe Carrigan: So if -- even putting these guardrails on, those guardrails are a form of bias.

Dave Bittner: Yes. Like I say, you know, we want these things to behave in the way we aspire to be as humanity, and --

Joe Carrigan: That's an excellent observation.

Dave Bittner: They behave as we actually are.

Joe Carrigan: Right.

Dave Bittner: Right? [Laughs]

Joe Carrigan: Yes, exactly; exactly.

Dave Bittner: Yes.

Joe Carrigan: These things behave like teenagers on 4chan, right?

Dave Bittner: Yes.

Joe Carrigan: It's --

Dave Bittner: I mean, it's a distillation of --

Joe Carrigan: Yes.

Dave Bittner: Everything we are as humans, good and bad.

Joe Carrigan: Right, absolutely.

Dave Bittner: And to pretend otherwise, I think, is folly.

Joe Carrigan: I would agree with you 100%. I would agree with that 100%. These things are more of a mirror to humanity than we would like to admit.

Dave Bittner: Right; and it's uncomfortable.

Joe Carrigan: Yes. Yes, and we don't like who's in the mirror, do we Dave?

Dave Bittner: Yes. Yes.

Joe Carrigan: Cybercriminals are going to use these things like crazy. Robert is correct, we are only at the early stages of this. I am convinced there are people out there developing and training LLMs to be malicious actors. They are going to get more powerful. It's not just going to be social engineering attacks; these LLMs are going to help in all stages of the attack, including the reconnaissance, the initial access, and spreading throughout a network. These things are going to help people with a low skill set become much higher-skilled actors.

Dave Bittner: Yes.

Joe Carrigan: It's coming. So if you're not preparing for that, then you should be preparing for that.

Dave Bittner: Right.

Joe Carrigan: Zero trust and microsegmentation are very helpful for this -- zero trust in that every time something is done, authentication is verified and authorization is verified. Microsegmentation, I like that idea as well, just making sure that if you're on a VLAN of one device, that's okay, right? You would almost never need to see your coworker's computer from a network standpoint --

Dave Bittner: Mmm; yes, yes.

Joe Carrigan: Right, because how do I send a file to you? I'm going to email it to you, or I'm going to send it across this chat application. Well, that's you going out to a server and then me going out to a server.

Dave Bittner: Right.

Joe Carrigan: There's no need for us to be able to -- we don't really use those file-sharing capabilities anymore, like the old Windows shared directory.

Dave Bittner: Right, right.

Joe Carrigan: We don't do that anymore. There are things like SharePoint --

Dave Bittner: Right.

Joe Carrigan: That make it possible to not have to do that.

Dave Bittner: Yes. All right, well, again, our thanks to Robert Blumofe for joining us. Again, he is the executive vice president and chief technology officer at Akamai. And we do appreciate him taking the time; really interesting conversation. [ Music ] That is our show. We want to thank all of you for listening. We want to thank the Johns Hopkins University Information Security Institute for their participation. You can learn more at isi.jhu.edu. A quick reminder that N2K's Strategic Workforce Intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. This episode was produced by Liz Stokes. Our mixer is Elliott Peltzman. Our executive producers are Jennifer Eiben and Brandon Karpf. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Joe Carrigan: And I'm Joe Carrigan.

Dave Bittner: Thanks for listening. [ Music ]