The Microsoft Threat Intelligence Podcast 10.11.23
Ep 3 | 10.11.23

Exploring Mobile Threats


Sherrod DeGrippo: Welcome to the "Microsoft Threat Intelligence" podcast. I'm Sherrod DeGrippo. Ever wanted to step into the shadowy realm of digital espionage, cybercrime, social engineering, fraud? Well, each week, dive deep with us into the underground. Come hear from Microsoft's elite threat intelligence researchers. Join us as we decode mysteries, expose hidden adversaries, and shape the future of cybersecurity. It might get a little weird. But don't worry, I'm your guide to the back alleys of the threat landscape. Hi, everybody. It's Sherrod DeGrippo. We are here with Christine Fossaceca. She is a senior mobile security researcher at Microsoft. She specializes in iOS. And her background is in mobile exploit development, forensic techniques, red teaming, reverse engineering, and penetration testing. Right now, she's on the Defender for Endpoint team analyzing iOS zero-days and developing features for Security Copilot. She also is an experienced podcaster. She has her own podcast, "HerHax," which she co-hosts. And she provides advice for those who are interested in entering the cybersecurity field. So, I'm interested in learning all kinds of things about Christine. She's had a ton of public-speaking experience: DEF CON, REcon, Summercon, ShmooCon, WiCyS, Diana Initiative, and Objective by the Sea, which is so exciting for me because I love it when we have really talented speakers come and talk to us about what they do. So, Christine, welcome. Thanks for joining us.

Christine Fossaceca: Thank you for having me.

Sherrod DeGrippo: I'm really excited to learn from you about iOS security. I don't know anything about it. I have an iPhone. I run my updates, but, like as an example, can you kind of tell us about like what's a mobile zero-day?

Christine Fossaceca: So, there's definitely a range of iOS threats that as end users we have to be aware of. So, a zero-day, I think the way most people think about it would be like nation-state malware or PSOA, private sector offensive actor, malware. But I think even the smallest of bugs can technically be a zero-day, because a zero-day is just something that hasn't been previously reported and is being exploited on that zeroth day, so to speak. So, like, back in 2018, there was a Touch ID exploit that was being used by some scammy fitness apps that were passed through the App Store, and so people were downloading them, and then it was using Touch ID to charge the end users a bunch of inauthentic payments. So, that's why Apple then made a change to the way that they do payments, where now it's double-click to pay, so you can't just, you know, accidentally Face ID pay or accidentally Touch ID pay. And technically, that's also a zero-day. So, it's really any time there's some kind of exploit out there in the wild. But it has to be an exploit; it can't just be a bug. I don't necessarily consider a bug that is exploitable a zero-day. That's just a bug that has to be fixed.

Sherrod DeGrippo: So, tell me for somebody who -- I love my phone and my iPad and everything, what do I need to know in terms of protecting myself on my iPhone?

Christine Fossaceca: So, a couple of things. One of the physical threats that I've been seeing reported in the news over the past six months is actually people getting their phones stolen from them at bars, but it isn't just somebody stealing your phone because they want your phone, they're stealing it because they want to steal your money. And so, they'll kind of, like, scope you out, look over your shoulder, and watch you type in your PIN, and then, as long as they know your PIN, they're able to change your iCloud password. And then it becomes an iCloud account takeover. But the other part with that is your passcode is sort of like a second factor with a lot of apps on your phone, so your banking apps, your authenticator apps, all of those apps you can actually unlock with Face ID, and if Face ID is not working, it defaults to the passcode. So, if you have a six-digit PIN, and they get that PIN, what was being reported in the news was that those PINs were then being used to log into banking apps, steal banking credentials, and then also immediately lock the users out of their iCloud accounts so that they couldn't remotely wipe the devices. So, it was pretty, pretty scary. But the best way to protect against that is to have a really long passcode for your phone. So, it should be alphanumeric, it should definitely have letters, numbers, symbols, and don't have, you know, 123456 as your PIN. Don't even have only numbers as your PIN, because that makes it too easy, and we have to make it harder for threat actors and malicious people, bad guys out there who are trying to do bad things. We don't make their jobs easy.
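
The math behind that advice is easy to check. The back-of-the-envelope sketch below assumes a 94-character alphabet (roughly the letters, digits, and printable symbols on a US keyboard); the exact alphabet size is an assumption, but the gap it illustrates is not:

```python
# Back-of-the-envelope passcode math: why alphanumeric beats a 6-digit PIN.

def keyspace(alphabet_size: int, length: int) -> int:
    """Number of possible passcodes for a given alphabet and length."""
    return alphabet_size ** length

# 6-digit numeric PIN: digits 0-9 only.
pin_space = keyspace(10, 6)

# 10-character passcode over an assumed 94-character alphabet
# (26 lowercase + 26 uppercase + 10 digits + ~32 symbols).
alnum_space = keyspace(26 + 26 + 10 + 32, 10)

print(f"6-digit PIN:          {pin_space:,}")    # 1,000,000
print(f"10-char alphanumeric: {alnum_space:,}")  # roughly 5.4 x 10^19
```

A shoulder-surfer can memorize six digits in one glance; nobody brute-forces 10^19 combinations through a lock screen with escalating delays.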

Sherrod DeGrippo: There's a great joke in Spaceballs, the movie, where the code to the airlock is 12345, and the guy says that's the kind of combination an idiot would have on his luggage. So, Spaceballs is old now, like me. If you haven't seen Spaceballs, there are really some good physical security and password complexity jokes in Spaceballs. So, that's really interesting. And you brought up two-factor and a second factor. And so, I have a question for you that is a perennial fight on Twitter that I always get into and drive people nuts with. When we talk about two-factor, do we consider SMS a factor? Is it really two-factor if it's using a password and the second factor, or "factor," is a text message?

Christine Fossaceca: Yeah. Actually, so really interestingly, I saw Mudge give a talk on SMS at Summercon this past summer, in July. And he talked about a study that was done on SMS and what kind of threat vector it is to people out there. So, in the research that was done, about 3 billion credential-stuffing attacks tried to leverage SMS, and about 12 million bulk phishing attacks last year also tried to use SMS. But then the targeted attacks, which would be like a SIM-swapping attack, that's less than 10,000. So that is orders of magnitude less. So, if you're, you know, changing your passwords and you're not clicking on phishing links, I think SMS is an okay factor to use. It's not the most secure compared to using an app such as Authenticator, because Authenticator is generating those codes in an app, which is more secure than a text message, because we know that text messages are sent in the clear. So, there are ways for text messages to be stolen. But from a targeting standpoint, you as a regular user are a lot less likely to be highly targeted, to have somebody learn your phone number and try to steal it or intercept your messages. Most of us have to worry about phishing, and we have to worry about automated credential stuffing, but we don't have to worry about, you know, somebody, like, man-in-the-middling our SMS, because that would be a targeted attack where they're looking for my specific code.

Sherrod DeGrippo: I think too, for people listening, it's pretty fair to say that if you're in a security or a threat intelligence role, you might actually be more targeted, because we've seen that, especially with the crimeware actors. APT as well, obviously, but the crimeware actors have gotten really brazen in terms of going after researchers and threat intelligence analysts. We've seen, you know, people posting on social media saying these threat actors are threatening us, like, at our homes and sending emails to our corporate owners, and all of these things. So, I think there is the potential to really think about this if you're in an elevated security role, if you're a CSO or something. So, in terms of really targeted things, we've seen some crimeware actors, and obviously the APT actors, really target people that are in threat intelligence roles, security roles, operational roles in IT, as well as executives. Like, CSOs are certainly getting some of these targeted attacks that might leverage the mobile device as the text message second factor. So, I think ultimately, really, what we want to tell people to do is to use something like Microsoft Authenticator, or whatever authenticator app of your choice makes sense for your environment. You want something that works really seamlessly and easily to get you a true two-factor, instead of the text message, which I would kind of put as, like, you're not really two-factor, you're like one and a half. Like, you're not quite reaching the bar of the second factor with the text message, because there are some questions about whether that's really something you have, right?

Christine Fossaceca: Yeah, I would agree. I feel like for people who don't have anything set up and they are -- even if it's not a high barrier, but, you know, for some people, they just don't want to add an additional thing. Like my grandma, I would be like, yeah, at least use something to -- use SMS. But for, you know, people that work in security, I think that we definitely should be using Authenticator because it's much more secure.

Sherrod DeGrippo: So, now we're going to move to the part where I'm going to ask you some trivia questions. Which I prepped you for but I did not tell you what the questions are.

Christine Fossaceca: Yeah, I'm definitely nervous.

Sherrod DeGrippo: You should be nervous, you should be terrified. This is the scariest thing that we will do on the spot.

Christine Fossaceca: If I get it wrong, let's just cut it out.

Sherrod DeGrippo: I think we should leave all of this in, especially you saying that.

Christine Fossaceca: Okay. I'm going to be exposed.

Sherrod DeGrippo: Here we go. Which of the following is not a tool that is used for iOS reverse engineering? A, Frida; B, Radare2; C, Nmap; D, IDA Pro.

Christine Fossaceca: Oh, okay. So, should I explain what the tools are? Like I can actually like --

Sherrod DeGrippo: Yeah, I would love to have some context, like, I've never heard of any of these except Nmap and IDA Pro, so what is Frida?

Christine Fossaceca: So, I'm going to say just like off the bat, the answer is Nmap. But --

Sherrod DeGrippo: Correct.

Christine Fossaceca: It doesn't mean that somebody can't use it, because I've definitely done network things with iOS devices. So, to reverse engineer iOS, like an app, you want a static or dynamic analysis tool. So, Frida is an amazing dynamic analysis tool. I actually gave a talk at Objective by the Sea on reverse engineering iOS with Frida, so, I don't know, maybe that's why you had that question. Then Radare2 is just another static analysis tool. I don't really use it because I always had IDA, but, you know, I admire people that do use Radare; it's definitely a good tool for static analysis. And then, you know, IDA Pro.

Sherrod DeGrippo: For people who are listening who aren't necessarily reverse engineers, I think it's really interesting the way you used your tone to say, "I've always had IDA." Because hashtag blessed, right, hashtag luxury. Most people that don't do this work in reverse engineering for, like, a vendor like we do are not laying out the heavy, heavy cash to buy IDA Pro. So, it is a bit of a status symbol to say, "Well, I've always had IDA." So, I think that's really interesting. There's a divide. There's a divide.

Christine Fossaceca: Oh, there for sure is. And I was pretty spoiled, I think. So, I got my first job not because I actually knew what IDA Pro was or even how to do reverse engineering, it was just because I knew assembly. So, I was a computer engineer, and I had a really amazing teacher that was, like, always trying to trick us on these assembly questions. So, we'd get a test and we'd have to know the assembly instructions so well that you'd see them on paper and have to write more instructions on paper and figure it out. So, it was one of the hardest classes I've ever had, but it definitely burned it into my brain. So, for my first job, in the interview that I had with them, they were like, "Do you know what IDA Pro is?" And, you know, never lie in an interview, so I was like, "No, never heard of that." And then they were like, "Well, do you know assembly?" And I was like, "Yes." And then they opened up IDA, which showed an assembly view, and they were like, "All right. Can you tell me what this program is doing?" And I was like, "Oh, yes, I can." And so, again, with that job, I was so lucky because I kind of got to learn how to reverse engineer. I knew assembly, but I didn't know, like, the concepts of reverse engineering. And we were pretty flush with cash, so we got to use IDA Pro licenses with all the decompilers, and we basically used them for CTFs. And we, like, played DEF CON CTF.

Sherrod DeGrippo: There are a couple of things that are definitely indicators that somebody is working somewhere with money, when they're like, "Yeah, we have IDA licenses, no problem." So, that's really interesting. And tell me, like, do you have a preference when it comes to reverse engineering platforms for iOS? Like, do you like IDA best? Do you like Frida? What's your favorite?

Christine Fossaceca: Well, for iOS, I definitely feel like I use IDA and Ghidra for static analysis. And IDA just does a little bit of a better job with iOS in terms of, like, the decompilation of strings. Like, there are a lot of things that get mangled, especially with, like, Objective-C code, and I think IDA just does a better job analyzing it right now. Frida is really good for dynamic analysis, not as much for static, because, like, you don't have a UI for it, but for dynamic analysis, it's amazing, because I can kind of not know what's important and hook a specific function, and then, like, it helps me winnow down the haystack. So, I usually do, I call it, like, a catch-and-release method, where I try to hook a few interesting functions. I actually, on the device, see what gets triggered, and then once I find a few select functions that seem like they're being touched, I move into static analysis mode and I'm like, "Okay. Now I'm going to reverse engineer these five functions out of, you know, maybe 100 that I had hooked."
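
The "catch-and-release" workflow she describes, hook broadly, observe which functions actually fire, then statically reverse only the survivors, can be sketched in plain Python, with `sys.settrace` standing in for Frida's hooking. All function names here are invented for illustration:

```python
import sys

# Hypothetical "app" functions standing in for the iOS functions we might hook.
def parse_message(data):   return data.upper()
def render_ui():           return "ok"
def decrypt_payload(data): return data[::-1]

# "Catch": hook a broad set of candidate functions.
hooked = {"parse_message", "decrypt_payload", "render_ui", "unused_helper"}
triggered = set()

def tracer(frame, event, arg):
    # Record any hooked function that actually executes.
    if event == "call" and frame.f_code.co_name in hooked:
        triggered.add(frame.f_code.co_name)
    return None  # no per-line tracing needed

sys.settrace(tracer)
parse_message(decrypt_payload("secret"))   # exercise the "device"
sys.settrace(None)

# "Release": only the functions that fired are worth reversing statically.
print(sorted(triggered))   # ['decrypt_payload', 'parse_message']
```

On a real device, `Interceptor.attach` in a Frida script plays the role of the tracer, but the narrowing-down logic is the same: instrument 100 functions, keep the 5 that light up.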

Sherrod DeGrippo: And so, when you're talking about iOS static versus dynamic, is it -- you know, my background is much more generally like on host, right, not on mobile device, so is it kind of the same thing where like you think static analysis is like looking at the actual code line by line by line versus dynamic analysis where you're actually executing that code in some kind of Sandbox environment or something like that. Is it the same for iOS?

Christine Fossaceca: Yeah, absolutely. So, a lot of my iOS reversing experience has been trying to do capability development. So, I'm trying to figure out how to leverage the system in a way that maybe it shouldn't be used, so it's analyzing iOS system processes. So, I'll start by being on the actual device and just, like, doing certain things on the device to try to get the outcome that I want. So, like, when I was trying to figure out how Pegasus worked, I was like, okay, I am going to create a PDF, rename it to GIF, and then send it in an iMessage and see what functions are triggered. And then, from the static analysis standpoint, you can either pull binaries directly off the device or you can actually download system binaries from, like, a huge database online. So, once I figure out what functions are being triggered, then I'll pull down those static binaries and I'll just be on my computer, you know, statically reversing the actual binaries themselves.

Sherrod DeGrippo: Wow. I've worked with a lot of people that do that work but never anyone that does it on iOS before. So, it's pretty cool to hear about like the differences and similarities of reverse engineering and, you know, application analysis. And let me just clarify too, you're talking malware, right? Like you're looking at malware pretty much or suspected malware?

Christine Fossaceca: Yes. So, in my current role, we definitely look at malware. So, in my previous roles, it was a little bit different but now I am kind of reading code that looks similar to code that I may have written previously. So, it's definitely interesting seeing how other people are leveraging like the iOS system and seeing kind of like the different ways that they're exploiting the system because iOS changes so rapidly. Apple is a huge innovator in the security space when it comes to mobile devices. So, they over the past decade have literally created entirely new security concepts, similar to how we've done that with Windows, they've done it with iOS. And so, they've created all sorts of strategies for security like entitlements, they have the Sandbox, they have pointer authentication, things like that. And it's definitely interesting seeing a lot of this novel malware from advanced threat actors, and understanding how they bypass some of these really advanced mitigations.

Sherrod DeGrippo: Okay. So, I agree, Apple has done incredible cool stuff, especially in the authorization where they're able to tell like what binaries are vetted early. And I think that there is some element of the Cathedral and the Bazaar that we won't necessarily get into here to fight about like walled garden stuff. But I have another trivia question for you.

Christine Fossaceca: Okay.

Sherrod DeGrippo: What common technique do phishers use to make URLs appear legitimate by using characters from different languages? A, character encoding; B, URL masking; C, homographs, or D, ciphers.

Christine Fossaceca: Part of me wants to say character encoding but there is -- I don't know if a homograph is the same thing as a glyph, so I think I'm going to say A.

Sherrod DeGrippo: Character encoding?

Christine Fossaceca: Yeah.

Sherrod DeGrippo: The answer is homograph.

Christine Fossaceca: Okay. Is that what a glyph is?

Sherrod DeGrippo: Yeah, I think so. It's using different characters to make it appear like it's the URL that you're trying to go to.
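
A homograph can be demonstrated in a few lines. The Cyrillic letter below is one common confusable character, and the IDNA (punycode) form is what browsers fall back to displaying when a label mixes scripts:

```python
ascii_domain = "apple.com"
spoofed = "\u0430pple.com"  # U+0430 CYRILLIC SMALL LETTER A in place of Latin 'a'

# Visually near-identical in many fonts, but different code points entirely.
print(ascii_domain == spoofed)   # False

# The IDNA (punycode) encoding exposes the trick: the spoofed label is not
# plain ASCII "apple" at all.
punycode = spoofed.encode("idna").decode("ascii")
print(punycode)   # an "xn--..." label, clearly not apple.com
```

This is why a phishing URL can look exactly like the real one in a message while pointing somewhere else entirely.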

Christine Fossaceca: Okay. Yeah, because back when I played a lot of CTF, there was this crazy problem where somebody created their own font that had glyphs in it. And so, you actually had to reverse the font to understand the glyphs. But in my head, it was, like, encoding, because we were up at, like, 3 a.m., and I was trying to fuzz one problem and then, like, looking at this font. And sometimes when you're playing CTF, you just hit a wall because you're so tired of thinking and it's really late at night. And so, I forgot that I had installed the font in my Linux text editor, and I was like, "I'm just going to generate some fuzzing strings for this other challenge." And so, I started typing on my computer A, A, A, A, because I was going to send, like, you know, a bunch of As to start trying to exploit it. And suddenly, all of the As became an f, like, the capital As became lowercase f. And I was like, what? And then I was like, B, B, B, B, B, and it became an L. And then C, C, C, C, A. And so, I turned to Andrew, one of my really, really good friends, he's way smarter than me, and I was like, Andrew, what is happening? And he was like, "I don't know. Just keep doing it." And instead of reversing the font, I had stumbled upon the glyphs that were being used to encode the flag, and so if you went through the entire alphabet, it actually printed out the flag. So that was, like, really, really cool.

Sherrod DeGrippo: Oh, my gosh. That's a great like CTF challenge. I love that somebody did that and built a font for it.

Christine Fossaceca: It was so funny too because somebody was like, "Oh, Christine, how did you figure that out?" And then when they found out that I didn't actually reverse it, they were a lot less impressed.

Sherrod DeGrippo: Oh, that's why you have to keep your methods secret.

Christine Fossaceca: I was excited. I was like I've got a flag. I don't care.

Sherrod DeGrippo: I have another trivia question for you. This is your last one. So, you're one and one.

Christine Fossaceca: Oh, no, all right. I'll find out if I'm the real deal.

Sherrod DeGrippo: I think you'll get this one because you mentioned this earlier. What term describes a phishing method where attackers intercept communication between two parties to steal or manipulate the data being transferred? A, attacker in the middle, formerly known as the man in the middle; B, watering hole attack; C, session hijacking; or D, drive-by download.

Christine Fossaceca: Oh, man in the middle or attacker in the middle, or she in the middle. It's depending on --

Sherrod DeGrippo: It's Malcolm in the middle.

Christine Fossaceca: I like that. It's Malcolm in the middle.

Sherrod DeGrippo: We have a bunch of cool attacker-in-the-middle blogs on the Microsoft Threat Intelligence blog. Go check those out. I love a nice graphic for an attack chain, and they have some awesome attacker-in-the-middle attack chain graphics on those blogs. So, go check out the Microsoft Threat Intelligence blog and just search AITM, and you'll see these. You can steal them and use them in your own PowerPoints that you, like, present to your boss or something to explain a certain attack chain. I love stealing graphics from the Microsoft Threat Intelligence blog.

Christine Fossaceca: I'm going to do that because, yeah, I definitely as much as I love, you know, making PowerPoints, I am not the artist that others are.

Sherrod DeGrippo: Unless you're doing original threat intelligence work, just go take graphics from the Microsoft blog. The Microsoft Threat Intelligence blog has all of these cool attack chains beautifully made; you can just screenshot them, put them into your own presentation, cut your work in half. So, I want to ask you about something on your sheet that we talked about, which is the Bluetooth zero-day from DEF CON. So, DEF CON was about a month ago, everyone was getting these messages on their phones about connecting to a new device, and everyone just said, "Okay, this is a new Bluetooth 0-day going around." What was the deal? What was actually happening with that?

Christine Fossaceca: Oh, my gosh, this was so interesting, because there was all of this kind of hype around it. So, I was in a group chat with a bunch of people, and I don't know if somebody was trying to spread misinformation to make the threat seem like more than it was, but there were some screenshots being passed around saying that somebody had been exploited with this, and someone had gotten their phone number and was texting them, and it was, like, a whole thing. But I was seeing those pop-ups. And I help run the Hack Fortress competition at DEF CON, so we were all seeing it on our computers, and we were like, "Oh, this is weird." And so, people turned to me, and they were like, "Christine, you are, like, the iOS expert and actually do Bluetooth research, specifically for iOS and their continuity protocol." And they were like, "What is going on? What is this?" And I was like, listen, I see these pop-ups, but I am not concerned, because the lift to get RCE from Bluetooth is, like, pretty large. So, I looked into it. And basically, what the person was doing is, there's this feature on Apple TV that allows Apple TV to connect to a captive portal. So, what will happen is, if you plug in your Apple TV at, like, a hotel, it'll say Apple TV needs more information. And so, it starts sending out these beaconing packets, and then your device will get a pop-up from the Apple TV. And it'll say, you know, Apple TV needs more information. And then you're supposed to click the link on your device and authenticate to Wi-Fi that way. So, I actually am going to recreate this to see exactly what information is being passed in the clear and what kind of information is actually sent back to the Apple TV endpoint, or, you know, spoofed endpoint, if any. Because, you know, it is pretty easy to spoof Bluetooth messages, but I don't think there's any kind of, like, remote code execution that happens from this.
The most that would happen is they would steal, like, your Wi-Fi password. So, it wasn't, I think, as big of a deal as people were making it out to be. There was an article where they interviewed the person who did the Bluetooth thing, and they actually quoted one of my research papers on Bluetooth. So, that was pretty interesting. So, I definitely think, you know, on one hand, it's great that they seem to have good intentions in reminding people to turn off their Wi-Fi, reminding them to turn off their Bluetooth, but on the flip side of that, I actually always tell people to leave their Bluetooth on, because the only way to protect yourself against malicious trackers is to have your Bluetooth turned on. Either Apple's anti-tracking framework or an anti-tracking app like AirGuard needs Bluetooth so that it can see if anybody is stalking you with an AirTag or a Tile, or something like that.

Sherrod DeGrippo: I want to talk about Bluetooth stalking for sure. Before we do that, let me ask you a quick question. So, are you saying -- you can tell I'm like really going off script here because I don't know anything about this. Are you saying that remote code execution over Bluetooth to an iOS device is an easy possibility or just like really rare?

Christine Fossaceca: I think that would be rare because of the steps that it would take. So, you're sending this frame, and most frames are being parsed by bluetoothd, which is, like, Apple's system binary for Bluetooth. And Bluetooth in general is not a super large attack surface, because you're very limited by packet size, so I can't try to buffer-overflow you by sending you 1,000 bytes; that's actually not allowed by the Bluetooth specification. And these Apple messages are actually Bluetooth low energy, which has an even smaller payload. So, like, the maximum number of bytes that can be sent is 70 bytes. So, once you account for all the Bluetooth classic and Bluetooth low energy required segments that have to be sent in the frames, when you get down to the actual number of bytes that a vendor can send in the vendor-specific section of the packet, it's very small, it's like 27 bytes. And I give a talk where I explain how Apple fit a 28-byte P-224 elliptic-curve encryption key into a 27-byte frame. So, it's very limited; you can't just send more. So, first, they would have to find a way for those 27 bytes that are being parsed to exploit some kind of bug in bluetoothd, and then that bug has to reach enough code that something else has a bug, and you have to chain all of these bugs together and send enough data that eventually you execute remote code or you're calling out to a URL that, like, downloads RCE. So, it's nontrivial. And I think some of the claims on this DEF CON thing were, "Oh, they got my phone number," which, that was a known bug in some of the continuity messages where people's phone numbers were being transmitted if they were AirDropping, like, if you're AirDropping, other people can sniff those messages --

Sherrod DeGrippo: I do that a lot on the airplane. I just send pictures of my dogs out on AirDrop to people who have it on.

Christine Fossaceca: That's so funny. Well, I think there are ways that people can get your phone number from those AirDrop messages. But what they were claiming was that the person was then, like, tracking that person's location, so they were like, "Oh, I have some kind of persistent threat on your device where I'm now, like, GPS tracking you." And I was like, no, that's not possible; like, my GPS can't even tell me where I am when I'm asking it, so how is it telling you with that type of accuracy? I think there was, like, a message saying, "Oh, I see you're not going to the talks and instead you went to the merch booths." That's something weird, and I was like, this is definitely, like, fake news.
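
For context on why Christine calls RCE over these frames nontrivial: the byte budget she cited earlier (roughly 27 vendor bytes against a 28-byte key) can be tallied with simple arithmetic. The overhead split below follows the legacy BLE advertising format and publicly documented Find My write-ups such as the OpenHaystack research; treat the exact field sizes as assumptions:

```python
# Legacy BLE advertisement carries at most 31 bytes of AdvData.
ADV_DATA = 31

# Manufacturer-specific AD structure overhead: length byte (1) + AD type
# 0xFF (1) + company identifier (2). What's left is the vendor payload.
vendor_payload = ADV_DATA - 1 - 1 - 2
print(vendor_payload)   # 27 -- the figure mentioned in the episode

# A P-224 public key coordinate is 28 bytes: one byte over budget.
KEY_BYTES = 28

# Documented workaround (per the OpenHaystack write-ups): smuggle the first
# 6 key bytes into the 6-byte random BLE address, leaving 22 for the payload
# (plus 2 bits recovered from a status byte, omitted here).
in_address = 6
in_payload = KEY_BYTES - in_address
print(in_payload)       # 22, comfortably inside the 27-byte vendor budget
```

Every byte of an exploit chain would have to fit through that same squeeze, which is why chaining bugs through these advertisements is such a tall order.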

Sherrod DeGrippo: I love that, because it's using a technical means to leverage social engineering, right? Because you can, you know, make something glitch and just send a string, for example. Or you can send a message that implies I know where you are, I know what you're doing, and then you can add the psychological aspect that is key for social engineering to work, which is, in this case, shame for going to a merch booth instead of going to talks. Whoever came up with that, that's pretty clever. I like that.

Christine Fossaceca: Yeah. And then when people were asking me, they were like should I turn off my Bluetooth and I was like, listen, I don't want to be stalked by an AirTag or a Tile or anything so I left my Bluetooth on the whole time. I normally leave my Bluetooth on for that very reason. But also, I reboot my phone pretty often like I do other things so that if there is some type of like nearby physical threat whether it's like Wi-Fi or Bluetooth like as long as you're rebooting your phone and things like that, like maybe it's a little paranoid, but it does stop potential things that are happening because you're just disrupting like the process flow.

Sherrod DeGrippo: So, when you're talking about rebooting your phone, do you mean that that interrupts persistence for like an app or a piece of malware that might be -- because obviously like with host-based malware persistence is pretty easy to achieve, a lot of the times they're able to create a lot of malware that's persistent through reboot, but it sounds like iOS devices don't have a great service for doing that.

Christine Fossaceca: iOS is a lot harder, especially in, like, the later iOS versions. So, in iOS 15 and later, everything has gone rootless. So, there's no root user anymore, and as an exploit developer or a malware author, you would need kernel access. So, there's this function, task_for_pid(0), and you need to get the kernel task, which is, you know, process ID zero, and that gives you the task handle. And so, if you want to get the task handle, you need to be root. But now Apple has made it rootless. So, there are still jailbreaks for iOS 15, and there are public ones, but it just made it a lot harder, because it's kind of, like, shifted everything. So, persistence is not impossible, but on an iPhone or on an Android device, your memory area is a lot smaller than the RAM that you would have on a computer, so things are a lot more nebulous and unstable. So, for exploits that are being developed, injecting into process memory before achieving persistence is, like, one of the known methodologies, and if you inject into process memory and you don't achieve persistence before you fully execute whatever you need to be doing in memory, then you're not going to have persistence. So, if you're constantly rebooting your phone, then maybe you're preventing somebody's exploit from fully executing or fully achieving what it needed to achieve. And if you're rebooting your phone, you're also forcing things to restart. So, oftentimes when the operating system restarts, there are different elements of the boot chain that kind of take over and are able to correct things that are glitching. So, if you're rebooting your phone and that's correcting something that's glitching, if there's malware that's supposed to be persistent, it may not operate as intended after reboot, just because it's hard to do that. So, it's definitely not impossible, but that is one of the elements that makes iOS exploitation more difficult now than it was, like, five years ago.

Sherrod DeGrippo: Wow. I did not know any of this; I know almost nothing about it. I feel great when I can get my email to work on my mobile device. It's a wonder to me because I grew up without even cell phones, really; we had landlines. So, every time my phone works, I'm amazed. Let's talk trackers. So, there are multiple kinds out there, there's Tile, there's the Apple --

Christine Fossaceca: Oh, yeah, the AirTag.

Sherrod DeGrippo: AirTags. There's a bunch of these different trackers, and they're really kind of a cool idea, right? They bring a lot of technology's usefulness into the real, physical world. You put one of these tags on your luggage; hopefully it doesn't get lost or stolen, but if it does, you're able to find it. You put it on your keys: oh, you know, the keys got left in the kitchen when normally I put them by the front door; I can find my keys with this. What is going on in terms of nefarious uses, or abusing those devices for purposes that are malicious?

Christine Fossaceca: I just interviewed Alexis Hancock from the Electronic Frontier Foundation, because after I gave a presentation on some of the privacy information that's leaked in Apple Bluetooth messages, she wanted to talk. The Electronic Frontier Foundation is getting super involved in some of the new standards organizations that are trying to improve the behavior of some of these Bluetooth trackers. So, there's Apple AirTags, there's Tile, there are Samsung trackers, and I think Google is planning to come out with a tracker. Also, Anker, they're like the --

Sherrod DeGrippo: Oh, yeah.

Christine Fossaceca: I know, they make phone chargers and cables, and they actually also basically created their own AirTag that rides on the Find My framework. Apple has a partner program for people to have their devices participate in Find My, which means that when you open the Find My app on your phone, your device can also pop up without actually being attached to an AirTag. So, you can do it for whatever devices are offering it. The Anker one is literally just the same thing as an AirTag, but third-party headphone manufacturers, as an example, would also be able to use the Find My network, and then you'd be able to track more than just AirPods, things like that. But a big concern with the Find My framework in particular is that it makes surreptitious tracking possible. Apple's whole pitch was, oh, this is anonymous, and we don't know who is accessing the Find My network because we don't want your information, and it's all encrypted anyway, so even though we're the custodian of it for seven days, we can't read it, can't open it, don't want to, and it's 100% private and safe. The problem with that is people can actually manufacture these Apple messages; just as we saw with that DEF CON AirPlay packet, you can manufacture any of these Apple Bluetooth packets. And the way the Find My framework works is that any nearby iPhone participates automatically, so it's going to receive a Bluetooth packet from a potentially rogue device and say, "Oh, let me help you out, I'll give you my GPS information," and then upload that to Apple. The rogue device is not actually communicating with Apple; it's just broadcasting a public key out into the wild, and then the finder devices, any iPhones out there, receive that and use the public key to encrypt a location report that they upload to Apple.
And then later, the person who owns the rogue device can pull down those location reports from Apple. Apple doesn't do authentication; the reports are intended only for iPhone users to pull down from Find My, but in reality, anybody can pull down these location reports and get the raw JSON data with the location information, and as long as you know the private key, you can decrypt it.
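The scheme Christine describes can be sketched roughly in code. This is a simplified model based on public reverse-engineering of Find My (for example, the OpenHaystack research), not Apple's actual implementation: the real system uses P-224 elliptic-curve keys and rotating advertisement keys, which are faked here with random bytes so the sketch stays dependency-free, and all function names are illustrative assumptions.

```python
import base64
import hashlib
import os


def make_tracker_keypair() -> tuple[bytes, bytes]:
    """Stand-in for the rogue tracker's EC keypair. Real Find My
    accessories use P-224 keys (28 bytes); here the 'public key' is
    just derived from random bytes to keep the sketch self-contained."""
    private_key = os.urandom(28)
    public_key = hashlib.sha256(b"pub" + private_key).digest()[:28]
    return private_key, public_key


def report_id(public_key: bytes) -> str:
    """Finder devices and the tracker's owner both derive the same
    report index: a hash of the advertised public key, base64-encoded.
    Apple's servers only ever see this opaque ID, never who is asking,
    which is why no authentication happens on retrieval."""
    return base64.b64encode(hashlib.sha256(public_key).digest()).decode()


# The rogue device broadcasts `pub` over Bluetooth LE; any nearby iPhone
# hashes it, encrypts its own GPS location to that public key, and
# uploads the report under the hashed ID.
priv, pub = make_tracker_keypair()
finder_side_id = report_id(pub)  # computed by a stranger's iPhone
owner_side_id = report_id(pub)   # computed later by whoever holds the key
assert finder_side_id == owner_side_id
```

Because both sides derive the same ID independently, anyone holding the public key can fetch the encrypted reports, and only the holder of the matching private key can decrypt them, which is exactly the property that makes the network both private and abusable.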

Sherrod DeGrippo: So, I guess that leads to the question: if I'm out somewhere and somebody drops an AirTag in my purse, what happens, and how can I protect myself from being stalked by creepy AirTag droppers?

Christine Fossaceca: Yeah, so TU Darmstadt has really led the way in privacy protection from AirTags and other kinds of trackers. They created the AirGuard app for Android, and they actually released it for Apple as well. So, whether you're on Apple or Android, I definitely recommend downloading the AirGuard app. Previously, before AirGuard came out, if you had an Android, there was really no way for you to know that there was such a device on your person. And you might think, okay, well, if I have an Android phone, that means it's not calling out to the Find My network, so they won't know where I am anyway. And it's like, yeah, but if you go to a bar and everybody else in the bar has an iPhone, they know where you are. And then if you go home and your roommate has an iPhone, they know where you are. Because remember, everybody else's device is involuntarily reporting to the Find My framework through what's called the SearchParty utility.

Sherrod DeGrippo: It's like a crowd-sourced reception capability for the AirTags, where the AirTag is constantly broadcasting out and any iOS device is essentially participating, like you said, as a third party that uploads information from that AirTag whether it's associated with it or not. So, somebody could drop one in your bag or tape one to your car and essentially follow or find you. If you get that notification up on your screen that says, "Hey, an unknown AirTag was spotted with you," what can you do?

Christine Fossaceca: So, if you have an iPhone and you get that notification from Apple, it will probably be within 30 minutes of the AirTag following you, but you have to have moved one mile. Actually, I have a lot of data and screenshots from DEF CON, because DEF CON is hacker summer camp and there are a lot of different activities happening. I was getting a lot of Tile and AirTag alerts, but they were not nefarious, because I went to Black Hat and then I went to DEF CON, and so did, you know, 10,000 other people. So, I was getting these notifications and I knew they were false alarms; it wasn't that somebody was following me from Black Hat to DEF CON, it's that we were all just simultaneously going to the same places. But because I technically moved a mile, that was something that triggered it. AirGuard allows you to scan around you. So, when I got the first notification from Apple, I also got a notification from the AirGuard app, and I was like, "Oh, I should look into this." I tried to scan the vicinity with the AirGuard app; you just press a button and it tells you whether or not the device is found. And the device wasn't there anymore, so I was like, "Okay, they're obviously not following me because they're not near me right now." Periodically throughout the week, I would scan, and for any device that had alerted, it was like, oh, you just happen to be in the same place as other people, and it made sense because we were all at the same conference going to the same talks, going to the same locations. But in your everyday life, I would be a lot more suspicious. You should not be getting these false alarms in your everyday life, especially if you're just moving through your normal day-to-day. So, if you do get one, you want to click on the notification.
I just always screenshot; I probably have like 1,000 or 2,000 screenshots on my phone because I screenshot literally everything, so that if anything happens, I have a record, because it's hard to go back and look at notifications. And if it turns out to be nothing, then it's nothing. But what will happen is both the Apple notification and AirGuard will actually have a record of all of the places where you were spotted near that tracking device. That's how you can figure out whether or not it's a false alarm. Because I was looking at all of the places we were "together," and I was like, oh, it's triggering because we were at the Mandalay Bay this morning and now we're both at Caesars Forum this afternoon; okay, this is not someone following me, this is somebody that's just doing the same thing that I'm doing. What would have made me nervous is if they went to all of the restaurants I went to, all of the parties I went to; if they were at every single location I had been, then that would have been like, oh, someone is following me. But instead, I was like, "Oh, nobody is following me." So, first, try to verify whether you should be worried or not. Don't freak yourself out if you don't have to be freaked out.
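The alert logic Christine describes, roughly 30 minutes of co-travel plus a mile of movement, can be sketched as a simple heuristic. The thresholds come from her description; the function names, data shapes, and coordinates are illustrative assumptions, not how Apple's or AirGuard's detection actually works.

```python
import math


def haversine_miles(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * math.asin(math.sqrt(h))  # Earth radius in miles


def should_alert(sightings, min_minutes=30, min_miles=1.0) -> bool:
    """Alert only when the same unknown tracker has been sighted with you
    for at least `min_minutes` AND you have moved at least `min_miles`.
    Each sighting is (minutes_elapsed, (lat, lon))."""
    if len(sightings) < 2:
        return False
    (t0, p0), (t1, p1) = sightings[0], sightings[-1]
    return (t1 - t0) >= min_minutes and haversine_miles(p0, p1) >= min_miles


# Christine's DEF CON example: seen at Mandalay Bay in the morning,
# then at Caesars Forum 40 minutes later -- over a mile away, so it fires
# even though everyone was simply attending the same conference.
mandalay_bay = (36.0955, -115.1761)
caesars_forum = (36.1180, -115.1651)
print(should_alert([(0, mandalay_bay), (40, caesars_forum)]))  # True
```

This also shows why conference crowds generate false positives: the heuristic can only see time and distance, not intent, so co-traveling strangers look identical to a stalker until you inspect the full location history.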

Sherrod DeGrippo: So, I think, though, that if you get one of those, you should probably pay attention and maybe google around, and, worst comes to worst, get into your car and drive a mile away and see if it goes away. You can kind of figure out whether it's actually with you or not, because, you know, when you spend time in large public spaces, you get, like you said, those false alarms that are just patterns of crowds and group movement, things like that.

Christine Fossaceca: Yeah. I would definitely say, if you're worried about being followed, don't go home; if you don't want them to know where you live, don't go home. Go to a police station a mile away, because hopefully nobody will do anything to you outside of a police station. So, I recommend going to a police station a mile away, seeing if you get any triggers, turning on AirGuard, and trying to detect it. Let's say you're at the police station and you find an AirTag on your car. Well, now you can go to the police and say, "Hey, here is the serial number and the last four digits of the phone number of this AirTag that I found on my car." Now, as I talked about with Alexis from EFF, they might not do anything, because they might say, "Well, there's no crime that's been committed." They might just not do anything. But at least now you've protected yourself. And the last step would be to disable the AirTag, which is just taking the battery out. I've said to some other people, kind of jokingly, either bring it to the police and disable it at the police station, because then maybe whoever planted it will track it to the police station, or, you know, your worst enemy's house. But just kidding, we don't want to do that. But, yeah, definitely use the AirGuard app. I felt safer when I could say, "Oh, I'm getting this notification, but the device isn't around me." And as I moved to other places, like at dinner, there was no Tile by me, no AirTag by me. I think the AirGuard app is one of the best free apps out there.

Sherrod DeGrippo: Well, I think we're just about out of time. So, I think we've covered a ton of stuff. But, Christine, would you be willing to come back and go over a bunch of other things with mobile that we haven't talked about?

Christine Fossaceca: Yeah, absolutely.

Sherrod DeGrippo: There were a ton of topics we didn't get to but it was so great having you on. Christine Fossaceca, thank you so much for joining us. I hope we get to have you back again to talk more about the mobile threat landscape. Christine is a senior mobile security researcher at Microsoft, her focus is iOS. And I just learned a lot more about my mobile device than I ever thought I could know. Thank you so much, Christine.

Christine Fossaceca: Of course, thank you for having me.

Sherrod DeGrippo: Thanks for listening to the "Microsoft Threat Intelligence" podcast. We'd love to hear from you; email us with your ideas. Every episode, we'll decode the threat landscape and arm you with the intelligence you need to take on threat actors. Check us out for more and subscribe on your favorite podcast app.