Hacking Humans 6.7.18
Ep 2 | 6.7.18

A flood of misinformation and fake news.

Transcript

Dave Bittner: [00:00:00] Hey, everybody. Here's another preview episode of our new "Hacking Humans" podcast. We hope you'll check it out and enjoy it. You can find the "Hacking Humans" podcast on iTunes and all the other usual places you find podcasts. It's also available on our website, thecyberwire.com.

Stephan Lewandowsky: [00:00:15] We first have to recognize that we have a big problem on our hands. And the problem is that, at the moment, we're exposed to a flood of misinformation and fake news. And we need to do something about that because you can't run a democracy if people are believing in things that never happened.

Dave Bittner: [00:00:35] Hello, everyone. And welcome to the "Hacking Humans" podcast. I'm Dave Bittner from The CyberWire. Joining me once again is Joe Carrigan from the Johns Hopkins University Information Security Institute. Hello, Joe.

Joe Carrigan: [00:00:45] Hi, Dave. How are you?

Dave Bittner: [00:00:46] I'm doing great. And each week, The CyberWire's "Hacking Humans" podcast looks behind the social engineering scams, phishing schemes and criminal exploits that are making headlines and taking a heavy toll on organizations around the world. We've got some good stories to share this week. Later in the show, we'll have my interview with Stephan Lewandowsky. He's a professor of cognitive science at the University of Bristol - interesting gent. And before we dive into that, we've got a quick word from our sponsors - our friends at KnowBe4.

Dave Bittner: [00:01:17] So what's a con game? It's fraud that works by getting the victim to misplace their confidence in the con artist. In the world of security, we call confidence tricks social engineering. And as our sponsors at KnowBe4 can tell you, hacking the human is how organizations get compromised. What are some of the ways organizations are victimized by social engineering? We'll find out later in the show.

Dave Bittner: [00:01:46] And we are back. Joe, before we dive in with our stories here, a quick little bit of follow-up - last week, we talked about Google Chrome and the green lock icon...

Joe Carrigan: [00:01:56] Right.

Dave Bittner: [00:01:56] ...That Google Chrome displays when you are at a secure website. Well, it turns out Google is actually changing Chrome's behavior on how it's going to handle that green lock. Coming in September, that lock will no longer be green. And then eventually the lock's going to go away altogether.

Joe Carrigan: [00:02:13] Right. So this is just an interface change. It doesn't change any of the back-end functionality of HTTPS or the protocol or anything.

Dave Bittner: [00:02:20] Right.

Joe Carrigan: [00:02:20] What Google is saying here - 'cause Google is the maker of Chrome, or maybe it's Alphabet - I don't know. But it's part of Google - part of the Google universe.

Dave Bittner: [00:02:27] Yeah, it's a distinction without a difference.

Joe Carrigan: [00:02:29] Right. So what this is - it's an update to the user interface. Google has decided that HTTPS is so commonplace now that they're no longer going to tell you when your website connection is secured with HTTPS. They're going to tell you when it's not secure with HTTPS because that is more important. I think this is a good design decision because it's taking for granted that the connection should be secure. And if it isn't secure, then the user will be notified that the security portion is missing.

Dave Bittner: [00:02:59] Yeah, and they made the point that 83 percent of websites that are visited by people using Chrome on the Windows side are HTTPS.

Joe Carrigan: [00:03:07] Right.

Dave Bittner: [00:03:07] So that's the norm these days.

Joe Carrigan: [00:03:08] That's the majority - exactly. The vast majority of your traffic is going over HTTPS. So rather than telling you what the norm is, you should be notified when something is out of the norm. And I think this is a good - a good decision.
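
For readers who want to poke at the norm Joe describes, here is a minimal Python sketch (assuming the third-party requests library is installed) that fetches a site over plain HTTP and reports whether you end up on HTTPS. It illustrates the state Chrome now treats as the default; it is not how Chrome itself decides what to display.

```python
# Check whether a host serves HTTPS and whether its plain-HTTP address
# redirects there - the behavior Chrome now assumes is the norm.
import requests  # third-party: pip install requests

def https_status(host: str) -> None:
    """Fetch http://host, follow redirects, and report where we land."""
    resp = requests.get(f"http://{host}", allow_redirects=True, timeout=10)
    redirected = len(resp.history) > 0
    secure = resp.url.startswith("https://")
    print(f"{host}: final URL is {resp.url}")
    print(f"  redirected: {redirected}, served over HTTPS: {secure}")

if __name__ == "__main__":
    https_status("example.com")  # illustrative target; try your own sites
```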

Dave Bittner: [00:03:19] All right, good stuff. All right, well, let's move on to our stories here this week. What do you have for us?

Joe Carrigan: [00:03:23] I have a story from The Wall Street Journal. On May 30, The Wall Street Journal had a special section on cybersecurity. And there was an article by Chris Kornelis called "The Anatomy of a Phishing Attack." And in the article, Chris talked to Shawn Moyer, from Atredis Partners, who broke down the process of a phishing attack into four phases. And the first phase was surveillance. Every hacking book that's worth its salt says this is the first phase in any attack. And phishing is no exception. If your company is going to be targeted for a phishing attack, there's going to be some surveillance done in the beginning of the attack.

Dave Bittner: [00:03:59] This differentiates it from just run-of-the-mill spamming.

Joe Carrigan: [00:04:02] Right, right. So broad-net phishing attacks - if I'm going to just, you know, send out a whole mess of attacks and try to get somebody to click on a site just because, you know, 1 to 3 percent of the people will fall for it, then I might not do the research into the company. But if I'm going to target a company, I'm definitely going to do the research into the company.

Dave Bittner: [00:04:19] Right.

Joe Carrigan: [00:04:19] And Moyer pointed out something very interesting. He said that new employees are especially vulnerable here because they're not familiar with the company's processes. They may think that all these emails from HR are reasonable, right? You know, I'm just a new employee. And I just got somebody sending me emails to run the - you know, click on this link to go sign up for my benefits. And it's a malicious link, right?

Dave Bittner: [00:04:42] Yeah.

Joe Carrigan: [00:04:42] So that's an interesting perspective that Moyer brings to this.

Dave Bittner: [00:04:45] Yeah, important to - I guess, in your onboarding process, to pay special attention to the vulnerability of your new employees to these sorts of social engineering attacks.

Joe Carrigan: [00:04:54] Right. And that's just part of the old adage that security has to be part of the culture all across the organization. So the second phase he has is the attack. And this is where they send out the payload. And they send the typical email with the malicious attachment or the malicious link. But Moyer also talks about the possibility of a payload that no firewall can stop - sending somebody a physical DVD or CD with some malicious software on it and some instructions. And it looks - you know, looks legit. It's going to have the company logo on it and everything. It's really cheap to print these things up...

Dave Bittner: [00:05:29] Right.

Joe Carrigan: [00:05:30] And get them to look like they should look.

Dave Bittner: [00:05:31] So it says, welcome to the company - or, here's our new compliance package or something like that.

Joe Carrigan: [00:05:36] Right. You're a new employee; please run this software - and something more believable than that. You know, somebody is going to spend more time than I am on it.

(LAUGHTER)

Dave Bittner: [00:05:43] Right.

Joe Carrigan: [00:05:45] And then he says that phase three is once inside. And once the people that have been targeted have used the software, clicked on the link or installed it - whatever - the attackers are going to do what they can. And that's - they're going to spread through the network, or they're going to try to steal some data. They're going to try to lock some systems and encrypt the data for ransomware. They're going to see if they can persist. And they're going to see if they can build botnets or maybe mine cryptocurrency or something.

Joe Carrigan: [00:06:08] And then he says phase four is the payoff. And this is where these attackers get what they came after. Of course, if the data is what they came after - you know, this is like some kind of intellectual property theft - then once the data's exfiltrated, the attack is over. But that doesn't necessarily mean they're going to go away. They could persist and try to get more out of it.

Dave Bittner: [00:06:27] Right.

Joe Carrigan: [00:06:27] They may try to mine cryptocurrencies, in which case the payoff is literal and continuous, right? They're just going to always be getting small payments based on your consumption of electricity and your processing power. They may try to collect a ransom. And they may use the target organization's computers for other malicious activities, like botnets or spamming.

Dave Bittner: [00:06:48] Yeah, it really speaks to that insider threat that we always talk about - you know, particularly what you're saying about sending someone a physical item - a CD, a DVD or maybe a flash drive...

Joe Carrigan: [00:06:58] A flash drive would be another great example.

Dave Bittner: [00:06:59] ...Yeah, that says, good news, here's the software that you need to - whatever - you know, access your benefits package or something like that.

Joe Carrigan: [00:07:07] Right.

Dave Bittner: [00:07:08] And then people fall for it.

Joe Carrigan: [00:07:09] And they do. And the reason they fall for it is not because - again, not because they're being stupid - but just because this makes perfect logical sense, for me to receive this at this point in time.

Dave Bittner: [00:07:19] Right. So it's important for your HR team - your onboarding team to set expectations with the people who are joining your organization.

Joe Carrigan: [00:07:29] I would say...

Dave Bittner: [00:07:29] Here's what you can expect. We will never do this.

Joe Carrigan: [00:07:32] Right.

Dave Bittner: [00:07:32] We will never ask you for your password, you know...

Joe Carrigan: [00:07:35] Right.

Dave Bittner: [00:07:35] ...So that you're planting the seeds of what's normal.

Joe Carrigan: [00:07:39] Yeah, a security briefing should be part of day-one orientation.

Dave Bittner: [00:07:43] Sure, sure. It's interesting - interesting to see The Wall Street Journal covering this and, you know, good words of wisdom.

Joe Carrigan: [00:07:50] Yep.
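
One small, automatable defense against the "attack" phase payloads Joe describes - the malicious link dressed up as something routine - is to flag links whose visible text shows one domain while the underlying href points somewhere else. A stdlib-only Python sketch of that check follows; real mail filters do far more than this, and the sample HTML is invented for illustration.

```python
# Flag anchor tags whose displayed text looks like a domain but doesn't
# match where the link actually goes - a classic phishing red flag.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.href = ""
        self.text = ""
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.in_link:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link = False
            shown = self.text.strip()
            actual = urlparse(self.href).netloc
            # Only compare when the visible text itself looks like a domain.
            if "." in shown and " " not in shown and actual and shown != actual:
                self.findings.append((shown, actual))

sample = '<a href="http://evil.example.net/benefits">hr.yourcompany.com</a>'
auditor = LinkAuditor()
auditor.feed(sample)
for shown, actual in auditor.findings:
    print(f"Suspicious link: displays {shown!r} but goes to {actual!r}")
```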

Dave Bittner: [00:07:50] All right, so my story this week has to do with the notion of pretexting and also dealing with HR, which is, of course, a common way to get into organizations. And pretexting is really just the act of creating some sort of invented scenario to persuade someone, a victim, to release information or to do something.

Joe Carrigan: [00:08:12] Right.

Dave Bittner: [00:08:12] That's the pretext, right? But it's sort of - we're tying into what you were saying - that it comes down to doing research, you know, figuring out what's going to work. And they target HR because HR has a lot of valuable information. And HR is one of the departments in an organization that typically has to open a lot of attachments or has to click on links because they're getting all sorts of random things from employees, from providers, from health care providers, from compliance organizations and all that sort of stuff.

Joe Carrigan: [00:08:43] Right. It's a pretty bad combination because they're in charge of a lot of very important and sensitive data that's valuable.

Dave Bittner: [00:08:49] Right. So the bad guys have figured out that pretexting works. It's a good social engineering technique. In fact, Verizon, in the data breach report they come out with every year, found that pretexting incidents rose by over 400 percent over the last year.

Joe Carrigan: [00:09:05] The attackers are getting better.

Dave Bittner: [00:09:06] Yes, the attackers are getting better. But how much of it is them recognizing that this still works - in other words, how much are the technical defenses getting better?

Joe Carrigan: [00:09:15] Right.

Dave Bittner: [00:09:16] So that in order to get in, you need to take advantage of the human element - where perhaps there were technical ways to get in, and as those defenses get more sophisticated...

Joe Carrigan: [00:09:26] Right. I think you're exactly on point here - that this is exactly what they have to do. The market is driving them - if you think of this as a market, right?

Dave Bittner: [00:09:35] Yeah.

Joe Carrigan: [00:09:35] The economic forces are saying, now the least costly way to go through is to try to hack a human.

Dave Bittner: [00:09:41] Right. So what can you do? Obviously, you know, we talk about training being a big part of this - and setting expectations, having people be on the lookout. There are some technical solutions to this sort of thing. There are companies who offer - I guess you'd call them virtual browsers, where if you're someone who has to open a lot of links, you can actually open those links sort of remotely, within a - I sort of think of the bomb squad, you know...

Joe Carrigan: [00:10:06] Right, right.

Dave Bittner: [00:10:06] ...Where, you know, they come in. And they put something over top of the bomb and remotely detonate it.

Joe Carrigan: [00:10:12] Yeah.

Dave Bittner: [00:10:12] You know, so you can remotely detonate your links and things like that.

Joe Carrigan: [00:10:16] That's exactly right. There are virtual environments that allow you to kind of explore these websites. And then if the virtual environment gets infected, you just destroy it. It's gone.

Dave Bittner: [00:10:25] Yeah, and some of them are cloud-based too. So you know, it's an interesting way of protection. I think if I were someone who had to deal with this, or I was in charge of trying to protect an HR department, that would certainly be something worth exploring at the very least.

Joe Carrigan: [00:10:39] Yes, and I might have them use computers like Chromebooks if they can get away with it. Those are, you know, a lot harder - they're more managed.

Dave Bittner: [00:10:47] Right.

Joe Carrigan: [00:10:48] You know, they're continually updated - a lot harder to hack into.

Dave Bittner: [00:10:52] Yeah, now, it's interesting. So you know, as HR has a bull's eye on their back...

Joe Carrigan: [00:10:57] Yeah, they do.

Dave Bittner: [00:10:57] ...Because of all the things they deal with - so you know, interesting story and a good cautionary tale there.
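
The "remote detonation" Dave describes is a real product category (browser isolation), but a crude do-it-yourself version can be sketched with a disposable container. The following Python sketch assumes Docker is installed and uses the zenika/alpine-chrome headless-Chromium image as an illustrative choice - treat the image name and flags as assumptions, not a vetted recommendation.

```python
# "Bomb squad" a suspicious link: render it inside a throwaway Docker
# container rather than your own browser, then let the container vanish.
import subprocess

def detonate(url: str) -> str:
    """Render url in a disposable headless-Chromium container; return the DOM."""
    result = subprocess.run(
        [
            "docker", "run", "--rm",   # --rm: container is destroyed on exit
            "zenika/alpine-chrome",    # assumed headless-Chromium image
            "--headless", "--disable-gpu",
            "--dump-dom", url,         # print the rendered DOM to stdout
        ],
        capture_output=True, text=True, timeout=60,
    )
    return result.stdout

if __name__ == "__main__":
    dom = detonate("http://example.com")  # substitute the link under suspicion
    print(dom[:500])  # inspect what actually loaded, safely after the fact
```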

Dave Bittner: [00:11:05] All right, moving on - it's time for our Catch of the Day.

(SOUNDBITE OF REELING IN FISHING LINE)

Dave Bittner: [00:11:11] So this came to us from a listener who also happens to be a realtor. And this person got an introductory email that said they were looking for some help. They were moving into town and needed help buying a home. And so this realtor sent some information - just telling them a little more about themselves. And then they got this reply back - this email back from them. I'm going to read it to you now. So here we go.

Dave Bittner: [00:11:33] It said, (reading) hello, sorry I'm only replying to you now. You must have tried calling. I had a surgery last weekend which impeded my speech temporarily. Hence, I can't take any calls. Nonetheless, I still aim to move by May as my current home sale closes in May. I will be buying cash, so hopefully that can speed up the process since I will have no contingencies. My budget will not be more than $800,000. Please advise what type of homes I can get, so we can place our offer immediately. I have also attached my personal information, closing statements showing proceeds from the sale of my home and bank statement showing available funds shared securely using DocuSign. I believe this would help facilitate your search. I look forward to some positive news on your findings. So what do we make of this?

Joe Carrigan: [00:12:21] There's a couple of red flags in here already for me. One is that it's looking for a very quick call to action...

Dave Bittner: [00:12:27] Right.

Joe Carrigan: [00:12:27] ...Right? That's looking to get you to do something because they're trying to short-circuit your thinking process.

Dave Bittner: [00:12:33] Right, time's a-wastin'.

Joe Carrigan: [00:12:34] Time's a-wasting, right, and creating a sense of urgency.

Dave Bittner: [00:12:37] Right.

Joe Carrigan: [00:12:37] Number two, they just had, quote, "a surgery," which doesn't seem like standard English; it's a little bit broken. That surgery has caused them to lose their voice.

Dave Bittner: [00:12:47] Right.

Joe Carrigan: [00:12:48] I don't know about you, but if I was in a - in that kind of a situation, where I had surgery and wasn't able to talk, I don't know that I'd be trying to negotiate real estate deals...

Dave Bittner: [00:12:56] Right (laughter). Yes.

Joe Carrigan: [00:12:56] ...At that point in my life.

Dave Bittner: [00:12:57] I might be taking a break for a couple of weeks.

Joe Carrigan: [00:12:59] Yeah, I might be just taking it easy and relaxing.

Dave Bittner: [00:13:01] Right.

Joe Carrigan: [00:13:01] It is interesting that a surgery would stop me from communicating with you vocally.

Dave Bittner: [00:13:05] Right.

Joe Carrigan: [00:13:05] Right? And that to me is also another red flag. So there's a couple red flags in here. What I think is interesting - even if you think this is a scam, this might still work on you. And here's how. If you think this is a scam, you'll be looking for the payoff of maybe this person is trying to bilk me out of some money here, so maybe I want to take a look at these documents.

Dave Bittner: [00:13:26] Right.

Joe Carrigan: [00:13:26] Right? But the documents in this case were the items that were malicious, correct?

Dave Bittner: [00:13:31] Correct. What they're claiming is the DocuSign document is actually the payload.

Joe Carrigan: [00:13:36] Is the payload for the message.

Dave Bittner: [00:13:37] And just to be clear, I mean, DocuSign is a legitimate way...

Joe Carrigan: [00:13:40] Right.

Dave Bittner: [00:13:40] ...That folks can share items like this securely online.

Joe Carrigan: [00:13:44] Not only does this have the opportunity to work on the level of, yes, just open these documents and take a look at this HUD-1, which is a real estate document for a closing of a house...

Dave Bittner: [00:13:54] Right.

Joe Carrigan: [00:13:54] ...And then you've got the malicious payload, but maybe from a different perspective, like, hey, this might be fishy. This guy might be trying to get access to my accounts. Well, let's see what he's saying he has here. And then - so there are at least two levels of thought process where I can see this still infecting somebody even though they're suspicious.

Dave Bittner: [00:14:12] Right. So we've got a call to action. We want - someone wants to move quickly.

Joe Carrigan: [00:14:16] Right.

Dave Bittner: [00:14:16] And of course, the realtor wants to provide good customer service.

Joe Carrigan: [00:14:19] Yes.

Dave Bittner: [00:14:19] They have the potential here to make a sizable sale. An $800,000 home is nothing to sneeze at.

Joe Carrigan: [00:14:25] That's right.

Dave Bittner: [00:14:26] And so I think it's plausible that folks could fall for this. It's also fairly targeted. And realtors, I think, are an easy group to pull information on from online, publicly available sources.

Joe Carrigan: [00:14:37] They publicize themselves because they have to, because that's the marketing they need to do to get business.

Dave Bittner: [00:14:44] All right, well, it's certainly a cautionary tale. To me, it was an interesting example of how, rather than just taking a broad shotgun approach, these folks actually targeted a specific line of business...

Joe Carrigan: [00:14:55] Right.

Dave Bittner: [00:14:55] ...And crafted their message to hit those folks.

Joe Carrigan: [00:14:58] And they probably did send out a broad shotgun blast - each little ball of shot was only going towards a realtor.

Dave Bittner: [00:15:02] Yeah. All right, it's a good one.

Joe Carrigan: [00:15:04] It is. I think this is a very well-crafted phishing email.

Dave Bittner: [00:15:07] Yeah. And fortunately, in this case, this person did not fall for it. He was suspicious, and he sent it to his IT people, who had a look at it. And sure enough, they found the malicious payload within that attached document. So...

Joe Carrigan: [00:15:19] Very good.

Dave Bittner: [00:15:19] ...All's well that ends well. But it's one to look out for.

Joe Carrigan: [00:15:22] All right.

Dave Bittner: [00:15:25] And now we return to our sponsor's question about forms of social engineering. KnowBe4 will tell you that where there's human contact, there can be con games. It's important to build the kind of security culture in which your employees are enabled to make smart security decisions. To do that, they need to recognize phishing emails, of course. But they also need to understand that they can be hooked by voice calls - this is known as vishing - or by SMS texts, which people call smishing. See how your security culture stacks up against KnowBe4's free test. Get it at knowbe4.com/phishtest. That's knowbe4.com/phishtest.

Dave Bittner: [00:16:13] Joe, earlier in the week I spoke with Stephan Lewandowsky. He's a professor of cognitive science at the University of Bristol. He studies misinformation and the effects of propaganda...

Joe Carrigan: [00:16:23] Ah.

Dave Bittner: [00:16:24] ...Which of course makes him the perfect guest for this show (laughter).

Joe Carrigan: [00:16:27] Yes, it does.

Dave Bittner: [00:16:28] So here's my interview with Stephan Lewandowsky.

Stephan Lewandowsky: [00:16:31] We first have to recognize that we have a big problem on our hands. And the problem is that at the moment, we're exposed to a flood of misinformation and fake news. And we need to do something about that because you can't run a democracy if people are believing in things that never happened.

Dave Bittner: [00:16:50] Can you walk us through the evolution of this? I sort of saw it coming, but it seems like we've reached a new level - as you describe, fake news - higher than what we've ever had before.

Stephan Lewandowsky: [00:17:00] Yeah, I think that's true. And there's a number of reasons for that. I think first of all, it's the increasing use of social media. So more people are hooked up to social media than ever before. And that continues to increase dramatically - and connected to that, the ability of political operatives to exploit that connectivity and to bombard us with messages that are sometimes of very dubious provenance and do not speak the truth.

Dave Bittner: [00:17:30] Can you take us through some of the science behind that? Why are people so susceptible to these things?

Stephan Lewandowsky: [00:17:36] That's sort of the million-dollar question, isn't it? And there's a number of factors. First of all, the most important factor to my mind is to consider the fact that as we evolved, as we were living, you know, in caves or in small tribes thousands of years ago, we learned that we have to trust the people around us and that pretty much all the time, everybody was telling us the truth.

Stephan Lewandowsky: [00:18:02] Even today, that is still the case. We teach our children not to lie. We typically do not lie to each other in a family context. We do not lie to people on the street when they ask us what time it is. So there is a very good reason for us as human beings to believe what other people are telling us. And that is our default assumption. All of us go through life believing the things that people tell us.

Stephan Lewandowsky: [00:18:29] The problem, obviously, with that is that it breaks down if we're exposed to sources that are not telling us the truth. Unfortunately, it's always been the case in politics, but it's now reached a new level with a multitude of actors out on social media who are disseminating messages that are politically motivated but have no basis in fact. What is different now is the ability to target individuals to a degree of specificity that, until recently, was unheard of.

Stephan Lewandowsky: [00:18:57] Let me give you an example. There is fairly good evidence to suggest that if I have access to about 300 of your Facebook likes, I can then infer your personality with greater accuracy than your own spouse can. So if I know what you do on Facebook, then I know your personality. And of course the Facebook business model is based on selling information about yourself. If you use Facebook, you're the product, basically. They are marketing you to advertisers. Once I know your personality, what I can do is I can exploit your unique vulnerabilities because I can figure out what messages people of a certain personality type are susceptible to. And there is research to, you know, explain how that is done and how we can do that.

Stephan Lewandowsky: [00:19:47] What then happens next is that I can target individuals on social media, based on their personality, knowing that the message I'm sending them is exploiting a unique vulnerability that enables me maybe to change your behavior. And if I do that on a large scale to millions of people, then, for example, I might discourage people from going to vote if they're likely to vote for the other candidate. And then through that, I can have an effect on elections that 10 or 20 years ago would have been unthinkable because we wouldn't have been able to target individuals in that manner.
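
The likes-to-personality claim rests on what is, at its core, ordinary regression: a large binary matrix of who liked what, fit against survey-measured trait scores. Here is a toy sketch with entirely synthetic data (assuming NumPy and scikit-learn are available); real studies use millions of users and validated personality questionnaires.

```python
# Toy version of inferring a personality trait from ~300 page likes.
# Everything here is synthetic - the point is the shape of the method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 300              # ~300 likes, per the claim above

likes = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)
true_weights = rng.normal(size=n_pages)   # pretend some likes carry signal
trait = likes @ true_weights + rng.normal(scale=5.0, size=n_users)

X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"correlation between predicted and 'true' trait: {r:.2f}")
```

On this synthetic data the correlation comes out high by construction; the finding Lewandowsky cites is that, with enough real likes, models of this general shape out-predict spouses on questionnaire-based trait scores.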

Dave Bittner: [00:20:25] Again, using the parallel of advertising, we have policy. We have laws preventing advertisers from falsely advertising things. Is this a case where policy is lagging behind? This fake news, this false information is allowed to be distributed without any ramifications?

Stephan Lewandowsky: [00:20:42] Well, I think definitely we do have a vacuum of a lot of things, and regulation and law is probably one of those. But I think it goes even deeper than that because let's consider the implications of what I've just said. What it basically means is that if I'm running a political operation, a campaign for whatever, a candidate or a referendum, what I can do is I can target the population with messages that are customized to exploit them.

Stephan Lewandowsky: [00:21:10] Neither the recipients nor my political opponents will ever know that. The recipients don't know that they're being manipulated. My political opponent doesn't even know what I'm saying to people out there because it is all customized, potentially. And it is taking place in darkness. By that I mean that there is no public debate. There is just a private attempt to manipulate people. Now, to my mind, that is completely undermining the very idea of a democracy because the idea of a democracy is to have a marketplace of ideas, a public space in which politicians and the public are having an ongoing conversation and they battle out different visions for the future and how to deal with problems.

Stephan Lewandowsky: [00:21:56] Now, that can only happen if everybody knows what everybody else is saying so you can respond to that. But if I'm on social media and I'm manipulating people with messages that no one other than them ever sees, then my opponent has no hope in hell of debating that. And that I think is, you know, a largely overlooked but, to me, most important problem about this customized political advertising on Facebook and elsewhere.

Dave Bittner: [00:22:25] And do you have in mind any potential solutions to it?

Stephan Lewandowsky: [00:22:28] Well, I think we have to entertain the possibility of regulating the process by which information is disseminated. I'm totally not in favor of any kind of censorship or regulation of content, but I think it is quite possible for us, in a democracy, to regulate the process by which information is disseminated. And so for example, what we have right now is that we don't know what political operatives are saying to people on Facebook. We have absolutely no idea because the messages are not public.

Stephan Lewandowsky: [00:23:03] So an easy and obvious solution, a first step, is to require that anybody who's running a political campaign make all their messages available to the public in a public repository, where, you know, the public and political opponents can go, and they can find out what is being said. And then they have an opportunity to engage with those messages and to actually have a democratic debate about them.

Dave Bittner: [00:23:29] Now, what does the science say about the ability to equip people to protect themselves against this sort of thing? Can you educate people to be skeptical so that they can have a sense for when someone's trying to manipulate them?

Stephan Lewandowsky: [00:23:42] Well, that's a very good question. And that is one of the lines of research I've been pursuing with colleagues over the last few years. And the answer is somewhat encouraging, at least. We've run a number of experiments now where we have exposed people to a historical misinformation campaign that was done, for example, by the tobacco industry in the 1950s and '60s. And we pointed out to people in our studies, look here, this is what the tobacco industry used to do - things like, you know, creating fake experts who then claim that smoking is, in fact, not bad for you despite all the medical evidence to the contrary. And we explained how the tobacco industry did this and what the consequences were of that disinformation campaign and so on.

Stephan Lewandowsky: [00:24:31] Once you've done that to people, if you then expose them to misinformation about a completely different issue - in our experiments, it was about climate change - if you then present people with misinformation about climate change that's using the same rhetorical technique - namely, the use of fake experts - then people are seemingly inoculated against that misinformation because they spot the overlap with what they've just been told about how people in the 1950s and '60s were misinformed by the tobacco industry.

Dave Bittner: [00:25:07] Does your research indicate that it has to do with the - that inoculation being something that they don't really have any skin in the game with, you know, something from the past so they don't have an emotional component to choosing a side that they're going to take?

Stephan Lewandowsky: [00:25:22] Well, that's a very good point. Certainly during the inoculation phase, when you're informing people about techniques, you probably do not want to use something that is current and that is sort of emotionally arousing in the moment. So you're absolutely right. You're better off teaching people on something that, in retrospect, anybody can agree on or that is, you know, completely historical, and no one cares about it anymore.

Stephan Lewandowsky: [00:25:46] But having done that, what our experiment showed is that you can then expose people to contemporary issues that are, in fact, hotly contested, such as climate change. And one of the remarkable findings in our study was that the inoculation was particularly effective for political conservatives, even though political conservatives normally - that is, without inoculation - are most predisposed to be skeptical about climate change and its human causes. By telling them, hey, you've been misled in the past, they seemed to become very alert toward similar attempts to mislead them now and in fact said, whoa, whoa, whoa. (Laughter) No, I don't like being misled.

Dave Bittner: [00:26:34] Interesting. Now, what is your advice to people who are out there trying to protect themselves against everyday social engineering - you know, phishing attempts, ways that people are trying to influence them politically? Do you have any broad advice for how people can go about their day-to-day lives and do a better job protecting themselves?

Stephan Lewandowsky: [00:26:53] Well, (laughter) one easy way is to get off Facebook.

Dave Bittner: [00:26:56] (Laughter).

Stephan Lewandowsky: [00:26:57] If you feel that you can do that without compromising your quality of life, then, you know, if you don't have much to lose, why not? Because that is a place where you will be exposed to a lot of dubious information. But even if you don't take that radical step, there are ways of making sure that you examine the trustworthiness of a source, or, at the very least, you're reading what it is before you're sharing it or forwarding it. Because part of the problem we're having now is that people read a headline, and then they click on it to forward it to all their friends. And the problem with that is that (laughter) if you don't read what it is that you're forwarding, then you're just becoming part of the problem. You're not part of the solution. So at the very least, read things before you forward them.

Dave Bittner: [00:27:41] Invest that time as a courtesy to your friends and family, right?

Stephan Lewandowsky: [00:27:45] Exactly.

Dave Bittner: [00:27:46] All right. Well, Stephan, again, thanks for taking the time for us.

Stephan Lewandowsky: [00:27:48] You bet.

Dave Bittner: [00:27:50] So, interesting gentleman, eh?

Joe Carrigan: [00:27:52] Yes. That's an excellent interview. I'm going to make that interview mandatory for all my Facebook friends (laughter).

Dave Bittner: [00:27:58] (Laughter) What were some of the takeaways for you from that?

Joe Carrigan: [00:28:00] The first thing I was absolutely astounded by is that 300 likes are enough to categorize you more accurately than your spouse can categorize you. That's remarkable.

Dave Bittner: [00:28:09] Yeah.

Joe Carrigan: [00:28:10] And then the customized political advertising that is happening outside of the open forum. And I like the idea of his policy of full disclosure for campaigns, that they have to list all their ads that they run and to whom they're targeted.

Dave Bittner: [00:28:22] Right.

Joe Carrigan: [00:28:22] But here's my question about that. How are you going to get these policies enacted by the people they directly impact? I just don't think that politicians are going to be disposed to enacting these policies, right?

Dave Bittner: [00:28:34] Right.

Joe Carrigan: [00:28:34] You know, because it's going to directly impact their ability to campaign. And I'm just not hopeful that, here in the States at least, we can get something like that through. But for the time being - in fact, probably forever - my personal policy is I don't get my political news from Facebook or any social media. If I see something going across my Facebook feed, I immediately go up to the little icon on the right. And I say, block these kinds of posts. I don't want to see your political opinion on Facebook because Facebook is a terrible environment for political discussion.

Dave Bittner: [00:29:06] Yeah. Interesting.

Dave Bittner: [00:29:09] Well, thanks to all of you for listening. And thanks to KnowBe4 for sponsoring our podcast. They are the social engineering experts and the pioneers of new-school security awareness training. Be sure to take advantage of their free phishing test, which you can order up at knowbe4.com/phishtest. That's knowbe4.com/phishtest. Thanks to the Johns Hopkins University Information Security Institute for their participation. You can learn more about them at isi.jhu.edu.

Dave Bittner: [00:29:42] The "Hacking Humans" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technology. Our coordinating producer is Jennifer Eiben. Editor is John Petrik. Technical editor is Chris Russell. Executive editor is Peter Kilpe. I'm Dave Bittner.

Joe Carrigan: [00:29:58] And I'm Joe Carrigan.

Dave Bittner: [00:29:59] Thanks for listening.