Security Unlocked 1.5.22
Ep 56 | 1.5.22

Disinformation in the Enterprise

Transcript

Nic Fillingham: Hello. And welcome to "Security Unlocked," a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft security, engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science. 

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security. 

Natalia Godyla: And now let's unlock the pod. 

Nic Fillingham: Hello, the internet. Hello, listeners. Welcome to Episode 56 of the "Security Unlocked" podcast. My name is Nic Fillingham. I'm joined, as always, by Natalia Godyla. Natalia, welcome to you. You have some exciting news. What is the latest in the Natalia Godyla cinematic universe? 

Natalia Godyla: (Laughter) Well, as some of you might know if you've been longtime listeners to the podcast - shout out to those longtime listeners - I'm engaged to be married. But with it being the time of COVID, it had been really difficult to figure out a plan. But we have officially set a date for a very small, intimate wedding. So that'll be what I'm thinking about for the next, you know, four months or so. 

Nic Fillingham: (Imitating klaxon honking). 

(LAUGHTER) 

Nic Fillingham: How do I do the sound of, like, klaxons celebrating your news? I don't know what that sounded like. 

Natalia Godyla: We don't even need audio clips. We just have you (laughter). 

Nic Fillingham: Did that work? I don't know if that worked. Let's keep - you should keep that in. 

Natalia Godyla: That works surprisingly well. 

Nic Fillingham: Congratulations, Natalia. And it's just great that you and your fiancee have found a way to move forward with your life in the age of COVID. Speaking of - no, it's not going to work. I'm just going to take a hard left turn into today's guest. Today's guest is returning champion Irfan Mirza, who first appeared on the podcast in Episode 15 to talk about enterprise resiliency as part of the first-ever Microsoft Digital Defense Report in 2020. Irfan is returning to talk to us about content found in this year's Microsoft Digital Defense Report. But this is a new topic - disinformation in the enterprise. You know, I came into this with an assumption, an expectation of what disinformation is. But what Irfan really walked us through is that it's actually something security people need to think about as a new sort of threat type in the enterprise. Natalia, what were some of your takeaways? 

Natalia Godyla: Irfan had a really great way of describing the differences between misinformation, disinformation and malinformation. I think he also did a really good job at contextualizing the conversation for security teams because, like you, I came to this conversation with a different understanding of disinformation and, really, lacked clarity on how this mattered to our world. I think he did a really great job describing some of the ways that you can think about information, and even brings into the conversation concepts like cognitive hacking, which is just so cool - the idea that you can, you know, penetrate someone's thoughts and impact those thoughts, and that ultimately impacts your company. 

Nic Fillingham: On with the pod? 

Natalia Godyla: On with the pod. 

Nic Fillingham: Welcome back to the "Security Unlocked" podcast, Irfan Mirza. Thank you so much for joining us once again. How are you doing? 

Irfan Mirza: I'm great. Thank you, Nic. Great to be here. Great to be back. 

Nic Fillingham: Regular listeners, longtime listeners, will recognize you from Episode 15, where we talked about enterprise resiliency, which is, of course, the breakfast of champions. In that episode, we had you on to specifically talk about some content and some insights and observations from the very first Microsoft Digital Defense Report in 2020. You're coming back almost 12 months later - no, not quite 12 months later. But you're coming back for the 2021 Microsoft Digital Defense Report. And today, we're going to talk about one of the topics you contributed heavily to, which is really one of the hottest topics at the moment: disinformation in the enterprise. Before we jump into that, Irfan, if you could just re-introduce yourself to our listeners, that would be fantastic. Tell us who you are. What do you do at Microsoft? What does your day-to-day look like? And, yeah, whatever else tickles your fancy. 

Irfan Mirza: OK. Well, great. Thank you so much, Nic. Yeah. I'm Irfan Mirza. I run the enterprise resilience program at Microsoft. This is a program where we look at disastrous events and outages and incidents and make sure that the enterprise is able to withstand them. Our goal is to really make sure that we can continue to provide high availability of services - our business processes, our supply chain, our suppliers, our people - through whatever nature seems to be throwing at us, through this pandemic. You know, we've been very cognizant of making sure that our people aren't burnt out and that they're getting the sort of breaks they need in between to be able to manage the work and the situations happening in their own personal lives. So the space is quite broad. The mission is quite important. So I'm really happy that I'm here and able to serve in that capacity. 

Natalia Godyla: And today, you're coming to talk about disinformation, as Nic mentioned. It's a new topic - more so new in that there's been a resurgence in conversation around it. But it's one that has happened before, across history. So let's just put some context around it. What is disinformation? And maybe more specifically, how does disinformation differ from malinformation, misinformation, some of these other terms our audience might have heard? 

Irfan Mirza: These are important questions. Look, the idea of disinformation has been around for a long time, as has misinformation. You know, we can go back all the way to the time of Galileo and Copernicus and dealing with a world of misinformation and having to sort of reconcile the differences between what their science was showing them and what people had believed; and, you know, even prior to that. It's not a new idea. 

Irfan Mirza: But I think where we are going now with disinformation really has to do with taking, as you said, malformed information - malinformation - and using it to manipulate an outcome or to do harm. I think the intent of it is different. You know, the intent of misinformation is not really to manipulate the situation; rather, it's what little I know, and I'm ready to share it with you. It could be tribal knowledge. It could be things that we have misinterpreted or misunderstood. Disinformation is more deliberate. Its goal is to erode confidence, either to manipulate an outcome - like I said, a market or, you know, share value - or to disrupt a business, perhaps, or some sort of trend that we're seeing in society. The differentiation has to do with not just what the goals are, but also the intent behind them. 

Nic Fillingham: If I'm - just take us behind the scenes a little bit. How did this chapter and then your - the section that we're going to talk about more sort of specifically here about disinformation in the enterprise - how did that come about to be - to sort of elevate in sort of priority to be a part of the 2021 MDDR? What happened behind the scenes, both with your work and then just sort of at a macro scale with - inside Microsoft? When we're looking at all the observations that we have in the security space, how did disinformation become its own chapter and then disinformation in the enterprise as sort of a more tactical way of disinformation sort of finding its way into our sort of collective conversation? 

Irfan Mirza: Yeah. I think I'm hearing two questions here, sort of. One is a little bit about... 

Nic Fillingham: What about nine? I just blended them all together into a word salad. 

Irfan Mirza: Sounds like... 

Nic Fillingham: Please answer the question that you would like to. 

Irfan Mirza: How the MDDR comes about - let's start there first. It starts with a large number of experts in the company, and, you know, they're all sort of tapped on the shoulder, or they offer up ideas and information. And it's a relatively free-form brainstorming that happens at the very beginning of this process. We bring our collective ideas or our individual ideas. And then, you know, the editors have to sit down, and they have the hard task of saying, well, let's make sure we have a theme. What's important? What's important for our customers? What's important in the discipline or in the space itself - security? And what's important to society? We have an obligation, perhaps as citizens, to make sure that we're also addressing that. 

Irfan Mirza: And so as a result of that, you know, one of the topics that came up was around disinformation. And we said, well, how much do we want to cover and what do we want to do about it? As it turned out, we've been doing a lot of thinking, and a large number of experts in the company have been doing a fair amount of thinking around disinformation. So we were able to pull that together. 

Irfan Mirza: Initially, I have to say, there was a lot of content. The report would probably have been, I want to say, 20% larger than it is now because we just had so much good material to sift through. But when I reviewed it all together, including the content I had submitted, I noted that there were some things missing. And one in particular was the "so what" for the enterprise - meaning, yes, we know all of this is happening, and it's affecting society, and we think elections are being affected by it, and social opinions and decision-making. But is there a concern for enterprises? And of course, the resounding answer was yes, there is. We need to make sure we call that out. 

Irfan Mirza: And so that's kind of how the chapter took shape - through this, I want to say, process of critical review that we all provided. And then we got a chance to, you know, evolve it and refine it, obviously, through the editing and reviews that were done. So that's how it came about. 

Irfan Mirza: But to your second question, around why the enterprise perspective - I think there are some principles that are really important. What happens in society ultimately, in some way, ends up back in the enterprise. You may think that, you know, the coronavirus and the pandemic didn't necessarily start in the workplace - it just started somewhere. But it ended up affecting the enterprise in a pretty significant way. Supply chains are now, you know, bottlenecked, and manufacturing slowed down for a while. And so there's the domino effect of that. Pretty much everything that happens in society has, to some degree, an effect on the enterprise - either shaping the opportunities that the enterprise leverages or putting constraints on it in some way, such that we have to think about workarounds or how we overcome it. 

Irfan Mirza: So there is this close tie-in, I want to say, between our social experience and what we bring into the workplace and then also what the workplace has to offer back in terms of being able to provide security guidance, being able to provide direction and, you know, tools, specific security implementations. I think there's a lot that the enterprise has a footprint or a hold of that it needs to support. So that's how these two things came together. 

Natalia Godyla: So you said a couple of things there that I'd love to home in on. You've said that disinformation is not new - in fact, across history there have been a number of examples. And the second thing you said is that there's a new demand for this type of conversation. So, you know, what is amplifying disinformation right now? Why is this a threat that we're talking about more and more? Are there certain circumstances at play here - external or societal factors, technologies - that are forcing it into the, you know, forefront? 

Irfan Mirza: Yeah, these are very good questions, Natalia. The way I think about it is, why has this become so important now, as opposed to years and years ago? The means of disseminating disinformation have grown, right? In ancient times, prehistoric times, perhaps we didn't have the ability to sow disinformation or disseminate it, propagate it, in the way that we do today. Certainly, things got way more sophisticated post-Industrial Revolution, into the information age. But now in particular - and I want to say in the last five years, 10 years - first, because of the heavy impact that we see it having, there is this democratization of information that's taken place in the world, which is a generally very positive thing to see happen. Smaller media outlets are getting large, amplified voices - the ability to accelerate their messaging across broad spectrums of audiences. 

Irfan Mirza: And so as we think about that happening and then we think about the tools that we're making available for our customers and not just Microsoft - I mean, the industry in general, tech industry in general - these are, like, tools like AI, machine learning, bot technologies. These are all things that are being developed with a certain, I'm going to say, noble purpose in mind. But these tools are also being leveraged to scale not just commerce, not just the dissemination of news and information, accurate information. They're also being used to conduct cybersecurity attacks and to wage campaigns of disinformation. So this is why it becomes a big problem for us to go look at now - is to say, what are the things that we need to do to make sure that all the tech that's being developed and the, you know, mass media capabilities that we all have at our fingertips - that those are now being managed in a responsible way and that each of us becomes a steward of what we believe to be the right thing for us to go do in the face of a cybersecurity attack. 

Nic Fillingham: Irfan, you mentioned a couple of examples up front around sort of election security. And I think when I hear the word disinformation, I immediately think elections, and I sort of immediately think public health policy. Can you give us - maybe I'm jumping ahead here, so feel free to pull us back if we're a bit further down the path than we need to be. What are some examples of disinformation in the enterprise? So as a Microsoft employee, I'm opening up my Outlook, and I'm receiving email to my Microsoft account. Am I looking for - should I be on the lookout for emails that may be trying to persuade me to do something, maybe in the sense of, like, a phishing campaign or - you know, help me understand how this sort of actually plays out in a sort of a practical and tactical sense in the enterprise. 

Irfan Mirza: Yeah. To look at how it works in the enterprise, we might want to look at just how it works in general. One of the marked differences in the disinformation we're seeing - campaigns in particular - is that there are those that are overtly orthogonal, I want to say, or, you know, out of the universe. But then there are those that are nuanced - nuanced in such a way that they differ ever so slightly from the accurate truth or from the information piece itself. We've seen this type of attack in the past where people have sent emails, for instance, phishing for information. And rather than using a microsoft.com domain, they might use a micorosoft.com domain - M-I-C-O-R as opposed to M-I-C-R-O. And those subtle nuances people might miss. They might not be so alert. 
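To make that concrete: one way a defender might screen for these near-miss domains is a simple edit-distance check against an allowlist of known-good domains. The sketch below is illustrative only - the allowlist, threshold, and function names are assumptions for this example, not anything from the report.

```python
# Minimal sketch: flag sender domains that are a small edit away from,
# but not identical to, a trusted domain (e.g., "micorosoft.com").
# The allowlist and distance threshold are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"microsoft.com", "outlook.com"}  # hypothetical allowlist

def looks_like_spoof(domain: str, max_distance: int = 2) -> bool:
    """True if a domain is close to, but not exactly, a trusted domain."""
    return any(
        0 < edit_distance(domain.lower(), trusted) <= max_distance
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_spoof("micorosoft.com"))  # True - one letter inserted
print(looks_like_spoof("microsoft.com"))   # False - exact match
```

Real mail filters lean on richer signals (homoglyphs, domain age, sender reputation), but even this simple distance check catches the subtle nuance Irfan describes.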

Irfan Mirza: It's one thing that I try to - you know, if I'm an attacker, I'd publish some information that says aliens have landed from the planet Janus, and this is what they're doing. And, you know, chances are people might not believe it. But it would be more believable if I took some actual event that was happening and I tried to fabricate some nuanced change in it, right? So it's the nuances that are becoming now the area to worry about because as we get better media literacy, as we get, you know, better information on the sources of information that we're consuming, we now have to face the next challenge, which is around the nuanced differences that disinformation is putting in front of us. 

Natalia Godyla: I love that you brought up that alien example because it's a perfect segue to a question I've been sort of stewing on. So yes, there are definitely examples of disinformation that are easy to tell - disinformation or misinformation. It's easy to tell that there aren't aliens. Or for most of us, it's easy to tell. But... 

Nic Fillingham: What? Hang on. What? 

Natalia Godyla: (Laughter). 

Nic Fillingham: Are we really going to go down this path again, Natalia? How much evidence can I put in front of you before you're convinced? 

Natalia Godyla: I'm sorry. I need several more presentations of aliens. 

Nic Fillingham: All right. We'll keep working on it. I'm sorry. Keep going. 

Natalia Godyla: (Laughter). But in reality, many of us are starting to get nervous that we can't tell the difference between truth and not. And I think this is, like, a macro question for all of us as citizens of the world. But I think security teams also have to consider how to train their users to tell the difference, which is a really tough ask. So, you know, how can people listening to this podcast start thinking about teaching users to figure out what's true and what's not? 

Irfan Mirza: Yeah, this is a really good question. I mean, when you think about truth sort of in an absolute sense, it's very difficult to ascertain no matter what, you know, from a philosophical sort of point of view. And to Nic's point about, what do you mean that there aren't aliens yet... 

Natalia Godyla: (Laughter). 

Irfan Mirza: ...Do you specifically mean that aliens are not attacking this week? 

Nic Fillingham: That we know of. 

Irfan Mirza: (Laughter) OK. That's yet another one. 

Nic Fillingham: (Laughter). 

Irfan Mirza: So I think the critical thing here for us is this question you ask about, what is it that we need to sort of do in order to be able to differentiate truth from fabrication? I think this - it's very difficult for people that are sort of intermediaries in the media business, in the information business, to sit and be in judgment of what should pass and what should not pass, what should be flagged and what should not be flagged. Censorship has always been something that humanity in general has always struggled with to a great extent, because where one person's rights probably stop is where somebody else's are infringed upon. This is often the case that, you know, you have your right to speak. But you, perhaps, don't have the right to yell fire in the middle of a theater when they're, you know, watching a movie and there isn't a fire. 

Irfan Mirza: I think that's one of the challenges that we're seeing with disinformation is, you know, what's the mechanism for regulating it? And if there isn't a mechanism for cleanly regulating it, what are the tools that we can use - to your question - to make ourselves smarter and more aware? Well, we certainly don't want to get phished. We don't want to lose our credentials and email to somebody that's poking at us in different ways and pretending to be somebody else. In the same way, we don't want to consume information that, perhaps, we didn't sign up for or that we didn't realize was slanted or nuanced or disinformation-ized - can I say that? - in a way, right? So - and there, as I said, intent matters. But also, outcomes matter. 

Irfan Mirza: Disinformation poses this threat of uprooting, I want to say, like, the normal course of commerce, right? Competition's goal is also to do that, you know, it's to say, I'm going to go take a bigger piece of the market or pie or whatever it is, make, you know, more efficient profits or whatever. Commerce has always been about that. But there's a normal sense of fair play, I want to say, that we've built into our social interactions as well as our commercial interactions, right? We don't tolerate, as an example, anti-competitive behavior. And we frown on it. 

Irfan Mirza: And we look - so I think it's things like that, that disinformation tends to circumvent those controls that are ordinarily there by trying to manipulate a large mindset of people in order to change the normal course of events. I think, to be cognizant of that, to be more literate about what's happening, to have, perhaps, a more critical mindset - to say, hey, I got a piece of information. Is it really the information that - does it makes sense? And are there other sources that I can, you know, validate that from or with? Or, you know, is it a matter of time before I accept it, that I have to see sort of where all the other sources of information are pointing to? 

Nic Fillingham: Irfan, there's, like, three giant questions I want to ask you. The first is, you know, what is the role - and I'll ask the questions, and then we can decide the order. What is the role of AI in helping us to discern the legitimacy of information either now or at some point in the future as the technology improves? Tell us more about this fascinating idea of cognitive hacking. I want to learn more about that. And maybe something that might tie all this together is - this is a security podcast. The MDDR is a document for security professionals. Why should security people be - and cybersecurity people be thinking about and caring about disinformation? What - how does it impact their function and their day-to-day role? 

Irfan Mirza: OK. Yeah. These are big questions. 

Natalia Godyla: (Laughter). 

Irfan Mirza: Let's try to tackle them one at a time. I'll start with the cognitive hacking because I think that might be the place to start to talk about why AI becomes important. 

Nic Fillingham: Cool. Great. 

Irfan Mirza: OK. Cognitive hacking is, basically, the idea of using disinformation and packaging it in such a way that you're going to go and change - your expectation is to change how people perceive something or the outcome or the information itself. It's kind of like hackers do when they try to guess your password. They go in, and they try a million different passwords. And then they find one that you've used that's, perhaps, not as strong or as well thought out. Perhaps there was more common language in it, language that comes out of dictionaries and so on. 

Irfan Mirza: Cognitive hacking is basically hackers trying to figure out how something would be perceived, and then trying many different forms of disinformation until they get one that works - OK? - and then trying to scale that and get it out to the masses to say, hey, you know what? I've been able to launch my campaign to change an outcome. As I said earlier, it's to change behavior. But the important thing about a cognitive hack is that you can think of disinformation as the payload in a cyberattack. It's sort of like what sits in the warhead when it's delivered. And that attack has an attack vector, right? 

Irfan Mirza: Take a campaign, for instance, that somebody launches. It's got direction. It's coming from someplace. It's targeted to someplace. The targets could be very small, what we call micro targets. Or they could be broad. And it has a blast radius, meaning that there is a domino or a cascading effect when the payload hits, that it will change a certain percent of perception or a certain number of people's minds or mindsets on something. And then that will have either a reverberating effect, meaning it diminishes over scale and time. Or it could have a cascading or even compounding effect, meaning that it comes back. The more time that goes by, the more that conspiracy theory, that disinformation, fuels itself within a certain population or population segment. 
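As a rough way to picture that reverberating-versus-compounding distinction, consider a toy branching model where each resharing "hop" multiplies reach by a factor r. The numbers and the model itself are purely illustrative, not from the report.

```python
# Toy cascade model: each hop of resharing multiplies reach by a
# branching factor r. r < 1 dies out (reverberating); r > 1 compounds.

def total_reach(initial: int, r: float, hops: int) -> float:
    """Cumulative impressions over a number of resharing hops."""
    return sum(initial * r**h for h in range(hops + 1))

print(total_reach(1000, 0.6, 10))  # ~2,491: diminishes over scale and time
print(total_reach(1000, 1.5, 10))  # ~171,000: compounds as it comes back
```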

Irfan Mirza: And all of these could result in failures, right? And so that failure is something we're worried about, because at that point in time, the enterprise itself has been breached. It's been breached with an attack, perhaps, that's not malware as we typically know it. But it certainly is, you know, a very intentionally malformed piece of information, the goal of which is to alter the outcome of normal commerce. So it's not all that different from a DDoS attack when you think about it, because the goal of the DDoS is to stifle your service so that it can't provide its normal course of transactions, right? 

Irfan Mirza: And so this disinformation's purpose is to try to create that same kind of flurry of activity, so that perhaps you won't be able to provide the normal course of response or action that you would need. So, having covered that - the tools that they're using to do this. They're using bots to re-create instances of information sources for these disinformation campaigns. As I said, the attack has a vector. The vector has an origin. The origin is orchestrated by a bot at very, very fast speeds. They're using AI to ever so slightly nuance the disinformation in such a way that our traditional detection tools are bypassed - in some cases, we can't say, oh, this is a replica of something else, because they've built natural language processing capabilities, cognitive science, behavioral science into their algorithms in such a way that you might think, wow, I'm getting genuine information from multiple sources; this must be accurate. 
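One defender-side counter to these slightly nuanced variants is near-duplicate detection: comparing incoming text against known disinformation narratives with a similarity measure rather than an exact match. A minimal sketch follows, using character shingles and Jaccard similarity; the example texts and the approach chosen here are illustrative assumptions.

```python
# Minimal sketch: catch lightly reworded variants of a known false
# narrative that exact-match filters would miss. Example texts invented.

def shingles(text: str, k: int = 5) -> set:
    """Overlapping k-character shingles of normalized text."""
    t = " ".join(text.lower().split())
    return {t[i:i + k] for i in range(max(1, len(t) - k + 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

KNOWN_FALSE = "Company X is secretly recalling every device sold this year."
variant     = "Company X has been quietly recalling all devices sold this year."
unrelated   = "Quarterly earnings beat analyst expectations once again."

known = shingles(KNOWN_FALSE)
print(jaccard(known, shingles(variant)))    # markedly higher: same narrative
print(jaccard(known, shingles(unrelated)))  # near zero: unrelated content
```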

Irfan Mirza: And the response to it - what do we have to do? Well, think of the consequences. At a minimum - and this answers your second or your third question, too, Nic, I think - at a minimum, enterprises have to deal with the delays that are introduced into the process because employees might not have confidence in leadership. They might not have confidence in a particular mitigation or in a particular set of actions the company is taking. And as a result of that, there could be hesitation. Seconds of hesitation in a rollback or a failover scenario, or in a decision to pull the plug on something or to implement a control of some kind - these could make a huge difference. So I think the attacks themselves have purpose. They're either outright trying to change an outcome or, at a minimum, trying to introduce delays or a lack of confidence in the decision-making. And all of those affect enterprises. And those are, as I said, outside the course of normal commerce. And so that's why they become bigger concerns to us. 

Nic Fillingham: Irfan, you talked about the role of AI on the attacker side. Can you talk about what's happening with the role of AI on the defender side? I wonder if there's an opportunity for machine learning here to help determine the sort of disinformation-ness - the disinformation score - of something. Like, if disinformation is now something that security practitioners need to be aware of, what tools do they have now, or might they have in the future, that will allow them to determine whether a piece of information is accurate and whether they should think of it as a vector, as a payload, in a potential threat campaign? 

Irfan Mirza: This is a very important question. I mean, when you think about how AI is being used today by the attackers, there is a comparable role - or a counter-role, I suppose, is a better word - for the enterprise, to say, hey, how do I use AI to determine whether something is fabricated? And the degree of disinformation, I think, is what you were alluding to - the disinformation-ness of something. 

Nic Fillingham: (Laughter). I borrowed the idea of truthiness from Stephen Colbert. And I just wondered if I could add -ness onto the end of disinformation. 

Irfan Mirza: I see. So it's disinformation... 

Nic Fillingham: I mean, I should've said accuracy. I mean, that's the word that actually exists. But anyway... 

Natalia Godyla: (Laughter). 

Irfan Mirza: Yeah. So I think the critical thing for us here is to think about the degree of disinformation. As I said earlier, information is manipulated or changed with nuance - nuanced differentiation between one piece of disinformation and another. AI is certainly capable of detecting a lot of that nuanced differentiation. So that's one thing we could certainly be employing a lot more of. I think a lot of media outlets have now become self-aware, I want to say, and have started reporting their sources themselves. Most information now being shared is being marked - as advertising, as an example - on public-facing sites and social media. It's being marked for its source, as to where it's coming from. 
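As a sketch of what scoring that nuanced differentiation might look like in tooling, here is a toy classifier that emits a probability - a "disinformation score" - rather than a verdict. The training snippets and labels are invented; a real system would need large, carefully curated, and regularly refreshed data.

```python
# Toy "disinformation score": TF-IDF features + logistic regression,
# returning a probability instead of a hard verdict. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Leaked memo proves the outage was a cover-up, share before deleted!",
    "Insiders confirm the recall is secret and the media is hiding it",
    "The service outage on Tuesday was caused by a faulty config push.",
    "The company issued a voluntary recall notice on its support site.",
]
train_labels = [1, 1, 0, 0]  # 1 = disinformation-like, 0 = ordinary reporting

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

score = model.predict_proba(["Secret memo shows they are hiding the outage"])[0][1]
print(f"disinformation score: {score:.2f}")  # a probability, not a verdict
```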

Irfan Mirza: And so I think improvements in the state of journalism, in the state of media, in the state of advertising, in all of the information sciences that are out there - these are all helping. But we also, as recipients, have to know to look for the signs. And we have to be able to mark our favorites - to mark those that we trust, our trusted sources, as opposed to new sources that have not yet earned our trust. And so I think there's a new currency that comes about, which is the currency of trust. And in the model of zero trust, you cannot assume to trust a piece of information or a source of information unless you've had the ability to validate it and say, you know, is this really trustworthy? And if it is, what's it asking me to do? That combination is what we've got to build into our own thinking, as well as into our tooling: what is this information asking people to do - not just, where is it coming from, and what's the differentiation between it and all the other pieces of information out there? 
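Irfan's two tests - has the source earned trust, and what is the content asking the reader to do - could be combined into a simple zero-trust-style triage rule. The sketch below is a minimal illustration under those assumptions; the domains, phrases, and thresholds are all hypothetical.

```python
# Minimal sketch: zero-trust triage of incoming content based on
# (1) earned source trust and (2) what the content asks the reader
# to do. All names, phrases, and thresholds are hypothetical.

URGENT_ASKS = ("share now", "act immediately", "before it's deleted",
               "send credentials", "wire funds", "click here")

EARNED_TRUST = {"reuters.com": 0.9, "apnews.com": 0.9}  # hypothetical scores

def triage(source_domain: str, text: str) -> str:
    trust = EARNED_TRUST.get(source_domain.lower(), 0.0)  # default: no trust
    risky_ask = any(phrase in text.lower() for phrase in URGENT_ASKS)
    if risky_ask and trust < 0.5:
        return "quarantine: untrusted source with an urgent call to action"
    if trust < 0.5:
        return "verify: corroborate with independent trusted sources"
    return "pass: trusted source, no urgent ask"

print(triage("unknown-news.example", "Share now before it's deleted!"))
```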

Natalia Godyla: This is going to be a big question to end on. But, you know, what do you foretell as the next evolution in disinformation? You know, we've talked today about the potential of new technologies to help solve this problem, some - like you said - feature suggestions. There's the ongoing education campaign that needs to happen not just from the enterprise but globally. But where do you think a lot of the effort will be placed in the coming months, years? 

Irfan Mirza: Oh, the look into the future... 

(LAUGHTER) 

Irfan Mirza: ...A little with a crystal ball. Let me think. Your guess is as good as mine, probably, Natalia. But I would think we need to get smarter. And I don't mean - I do mean more aware, but I don't mean that's the only thing that we need to do. We need to be able to leverage what I would characterize as both deterministic and nondeterministic methods. You know, the algorithms that we're running for detection - we need to include those in all of our defense mechanisms that are out there. But we also need to have this notion of expecting the human factor to be able to make decisions differently than the algorithms themselves are capable of doing. I think AI has gotten to the level of sophistication, both in terms of what the attackers are using and in terms of the defense mechanisms that we have available to us, to perhaps even achieve a stalemate. 

Irfan Mirza: And I believe truly the differentiator will be human ingenuity, the ability of a person to instinctively, or perhaps not even cognitively, make a call to say, you know, something just doesn't seem right; something smells fishy here. And to try to change the course of an attack by throwing in what we call sort of the human variable, the indecision, the human factor, the random number that we're capable of producing at the drop of a hat. I think that can certainly thwart a well-orchestrated, highly deterministic algorithm. And so we've got to make sure that we don't take people out of the equation. And I think people end up becoming a very important part of the equation not just in terms of education awareness, being the consumers of disinformation, but also as actors who are thwarting the campaigns and the attacks that are coming in. 

Nic Fillingham: Irfan, before we let you go - you know, my big takeaways from this conversation are that disinformation is, as you said, a payload that security professionals now need to be on the lookout for; that the outcome of potential disinformation campaigns is this idea of cognitive hacking; and that we really have to be thinking about this space a lot more than we currently are - it's something where there'll be a lot of further investment in the future. What would be your takeaway for folks listening to the podcast today, apart from, obviously, go read the report and learn more about it? What would you like security practitioners to come away knowing or thinking about from this episode? 

Irfan Mirza: I think that's a pretty straightforward one. I mean, as you say, this is an informative conversation that we're having. I'm learning as much about, you know, the subject as you are, as we're talking about it, you know, trying to think out loud and trying to sort of, you know, apply whatever critical thinking and experiences we have to the space. But I always want to go back to the fundamentals of things that we absolutely know to be true, and one of them in security is that the hackers and the attackers will always go after the weakest link. It's the lowest-hanging fruit. It's the thing that they can break the easiest, that they will use to get their foot in the door, to plant something, to plant malware, to plant disinformation. And the information pipeline is something that they're looking at very closely because it contains the intellectual property that's being broadly and massively shared by others. So perhaps the protections that are put on it are not as strong as the protections we have around the corporate perimeter, for instance, or around our corporate resources and assets. So it naturally looks like a weak link to attackers. 

Irfan Mirza: And so my guess would be that the place where they would try to get a foothold, or try to jam their way into a more secure environment such as an enterprise, is through that pipeline. So securing the pipeline, even if it's secondary information, becomes really, really important. We cannot treat that pipeline as a secondary set of systems. Otherwise, we compromise all of the brilliant security work that we're doing to protect our own intellectual property and our commerce and our customers and our data and everything else. So that would be the place for me to go back to. My takeaway from this is: let's not let the information pipeline be the weakest link. 

Natalia Godyla: Well, thank you for that, Irfan. It was fantastic to have you on the show again. Thank you for joining us. 

Irfan Mirza: It's a pleasure to be here, a pleasure to talk to you both. Thank you so much. And thank you for asking these very, very difficult questions. 

Natalia Godyla: (Laughter). 

Irfan Mirza: I'm going to have to ponder the answers going forward. 

Natalia Godyla: Well, we had a great time unlocking insights into security, from research to artificial intelligence. Keep an eye out for our next episode. 

Nic Fillingham: And don't forget to tweet us at @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe. 

Natalia Godyla: Stay secure.