Google's not being ghosted from vulnerabilities.
Dave Bittner: Hello, everyone. And welcome to the CyberWire's Research Saturday. I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down the threats and vulnerabilities, solving some of the hard problems, and protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
Tal Skverer: So, as part of our work, we do routine analysis of customers' environments, related to how non-human identities are connected to -- not [inaudible] specifically in the Google [inaudible] environment. And we saw something a bit odd.
Dave Bittner: That's Tal Skverer, research team lead at Astrix Security. Today we're discussing their work, "GhostToken: Exploiting GCP application infrastructure to create invisible, unremovable Trojan app on Google accounts."
Tal Skverer: We saw an app whose name was changed. We've usually seen apps that have, you know, [inaudible] names like Google Drive for Work, stuff like that. But in this case we saw an app whose name was identical to its identifier, which is just a chain of random letters. So when we dived a little deeper into that, we speculated that this happens if the developer of the app deletes the app, and then Google gets confused and doesn't know where to take the name from. And when we looked a little deeper still, we found out that when you create an app in Google, it's actually contained within a project in Google Cloud Platform, which is the cloud infrastructure that Google provides. So we went ahead and created our own app within some project in GCP, and when we deleted the app itself, the same thing happened: the name of the app became its unique identifier. But then we wondered what would happen if we deleted the entire project instead of just deleting the app. And when we deleted the project, actually the same thing happened; the name of the app was [inaudible] same thing. But GCP has this nice feature: because a project usually contains a large amount of data that may be important, they give you 30 days to regret your decision to delete the project, and you can restore the project to its original state. And we noticed that when the project is deleted, it's actually not really deleted. It's in a pending-deletion state, some kind of limbo state. And during this time, all the users that installed that app, which is contained within the project in GCP, cannot see the app in their management page. So every user who installed maybe, I don't know, tens of apps can see all the apps and the permissions they gave them on a single page.
And if the project that an app belongs to is deleted, or pending deletion, then they cannot see the app. But we noticed that if we restored the project before the 30 days had passed, then all the original tokens -- which are basically the unique [inaudible] given by Google to the app, that the app can use to access the user's data -- start working again. So the whole attack scenario was pretty clear at this point. Let's say a victim installs an attacker's app. Immediately when the victim does that, the attacker can delete the project associated with the app, because the attacker controls this project. And once they do this, the victim cannot remove this app anymore, because it's removed from the management page. But the attacker can, at any time, restore the project, quickly access the victim's data, and then delete the project again, so the app returns to this pending-deletion state. And this can go on forever, because every time you restore and delete the project, the 30 days start again. They refresh.
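The delete/restore loop Tal describes can be sketched as a toy state machine. This is purely illustrative: the class, method names, and checks below model the behavior described in the interview, not Google's actual internals or APIs.

```python
# Toy model of the GhostToken lifecycle: a GCP project enters a
# "pending deletion" limbo for 30 days, during which the user cannot
# see (or revoke) the app, and each restore/delete cycle resets the clock.
from datetime import datetime, timedelta

GRACE = timedelta(days=30)  # the 30-day "regret" window mentioned above

class Project:
    def __init__(self, app_name):
        self.app_name = app_name
        self.pending_since = None  # None means the project is active

    def delete(self, now):
        # Enters pending deletion; the 30-day window starts (or restarts) here.
        self.pending_since = now

    def restore(self, now):
        # Only possible before the grace period expires.
        if self.pending_since is None or now - self.pending_since > GRACE:
            raise RuntimeError("project can no longer be restored")
        self.pending_since = None

    def app_visible_to_user(self):
        # The user's app-management page hides apps of pending-deletion projects.
        return self.pending_since is None

    def token_works(self):
        # Tokens only function while the project is active.
        return self.pending_since is None

t0 = datetime(2023, 1, 1)
p = Project("innocent-looking-app")
p.delete(t0)                          # attacker hides the app
assert not p.app_visible_to_user() and not p.token_works()
p.restore(t0 + timedelta(days=29))    # restore just inside the window
assert p.token_works()                # the original tokens work again
p.delete(t0 + timedelta(days=29))     # delete again: the window refreshes
p.restore(t0 + timedelta(days=58))    # 58 days after t0, still restorable
assert p.token_works()
```

The last two lines are the crux: because each deletion restarts the grace period, the attacker can alternate restore and delete indefinitely, which is why the loop "can go on forever."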
Dave Bittner: So, just so I'm clear here. I'm a user. I download and install a Google app, and it turns out that that app is malicious. And in installing that app, I have to grant it permissions to access various things in my Google environment -- it could be my email, could be my files, whatever. And so what you're saying is that what you've discovered here is that, if I go ahead and install a malicious app, this mechanism allows the bad actors to basically disappear for a while, then pop back up, grab some data, and disappear again. And they could do that forever, and chances are I would not know.
Tal Skverer: Exactly. Yeah. That's exactly right.
Dave Bittner: And so what is going on under the hood with Google that allowed this to happen?
Tal Skverer: Okay. So I guess you can call this hindsight, but after the fact, when I prepared my talk for the upcoming DEF CON, I wondered what allowed this scenario to happen in the first place. And it led me to dive deep into a protocol called OAuth. It's an authorization protocol released about a decade ago that basically powers this whole interaction between users, developers who own apps, and Google. And when I looked into this protocol, I noticed that while it really gives a good outline of how the protocol is supposed to work, it ignores two very crucial things that basically allow this GhostToken vulnerability to exist in the first place. The first is the question of what happens when you need to register an app. The original OAuth standard doesn't really deal with that; it leaves a lot to be decided by the provider -- Google, in this case. And in this instance, Google decided, at their discretion, to implement app registration within GCP, which is what caused this whole project deletion, pending deletion, and restore scenario. The second part missing from the original protocol is what happens in the user's management page. Because if Google had provided, you know, a more thorough audit log of all accesses done by any apps to your account, this whole vulnerability would be irrelevant: you would be able to see any access done by apps to your account, so the attacker couldn't have hidden their app from the user. So these two missing pieces were the big reason the vulnerability existed in the first place.
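The OAuth mechanics behind "the original tokens start working again" can be sketched with a toy authorization server. In OAuth, the app holds a long-lived refresh token and trades it for short-lived access tokens; in this hypothetical model, suspending the app (project pending deletion) blocks the exchange, and restoring it makes the very same refresh token work again. All names here are illustrative, not Google's real endpoints or data structures.

```python
# Toy OAuth model: a long-lived refresh token survives the project's
# pending-deletion limbo, so restoring the project revives the grant
# without any action from (or visibility to) the user.
import secrets

class AuthServer:
    def __init__(self):
        self.refresh_tokens = {}    # refresh_token -> (app_id, scopes)
        self.app_suspended = set()  # apps whose project is pending deletion

    def grant(self, app_id, scopes):
        """User consents once; the app receives a long-lived refresh token."""
        rt = secrets.token_urlsafe(16)
        self.refresh_tokens[rt] = (app_id, scopes)
        return rt

    def exchange(self, rt):
        """Trade a valid refresh token for a short-lived access token."""
        if rt not in self.refresh_tokens:
            raise PermissionError("unknown or revoked refresh token")
        app_id, scopes = self.refresh_tokens[rt]
        if app_id in self.app_suspended:
            raise PermissionError("app's project is pending deletion")
        return {"access_token": secrets.token_urlsafe(16), "scope": scopes}

server = AuthServer()
rt = server.grant("attacker-app", ["drive.readonly"])
assert server.exchange(rt)["scope"] == ["drive.readonly"]

server.app_suspended.add("attacker-app")      # project deleted: tokens stop working
try:
    server.exchange(rt)
    raise AssertionError("exchange should have been refused")
except PermissionError:
    pass

server.app_suspended.discard("attacker-app")  # project restored within 30 days
assert server.exchange(rt)                    # the ORIGINAL refresh token works again
```

Note that the refresh token itself is never revoked in this loop, which mirrors the interview's point: revocation was only reachable through the management page, and the pending-deletion state hid the app from exactly that page.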
Dave Bittner: And so you reached out to Google here to let them know, and they've been responsive?
Tal Skverer: Yeah. So actually they were really responsive in the beginning. About two months after I submitted it, I got the famous "nice catch," and they started working on a fix. But apparently it wasn't that easy to fix. The origin of the vulnerability came from deep infrastructure of Google -- it has to do with how projects work in GCP, how apps within projects work, what they display to users, and how they handle tokens. And I guess this was one of the reasons it took Google a lot of time before they found a good fix for it. And during this process they were quite responsive. They were talking to me, and we were trying to understand what the best way to solve this issue was.
Dave Bittner: And so ultimately what happened?
Tal Skverer: So in the end they decided to, let's say, fix the issue by letting users still see apps whose projects have been deleted. They didn't see an issue in the fact that a project enters a pending-deletion stage and you're able to restore it; that is intended behavior, and they didn't change it. So they just changed it so users can see apps that belong to projects that are pending deletion.
Dave Bittner: Is there any evidence that anyone had been making use of this functionality?
Tal Skverer: Yeah, that's an excellent question. I've really been wanting to know whether that's true or not, but it's very difficult to find out. And the reason is that even if you are, let's say, an administrator in your Google Workspace organization, and you have more extensive audit logs about accesses done by apps, the problem is that when you try to find out in hindsight whether someone exploited this vulnerability, all the accesses from these kinds of apps would look just like any other app's. Because when the attacker restores the project associated with the app and then accesses the user's data, it looks like just another normal log entry from an app with a correct name -- the name it shows is fixed when you restore the project. So there's basically no good way to find out if anyone abused this. I'll also add that deleting apps happens naturally; statistically, we see it in several of the environments we work on. So this makes it even harder, because let's say you found an app that accessed your user data and is now deleted. You don't know if someone exploited the GhostToken vulnerability, or maybe the developer of the app decided to quit and deleted their app. It happens.
Dave Bittner: So what are your recommendations then for folks who think that this may be a concern? What can they do?
Tal Skverer: My first recommendation for people who are worried about this is basically to start by monitoring -- you have a special page as a Google user where you can see all the apps that have access to your account and information. So start monitoring that, and remove any app whose developer you don't trust enough with your Google data. I recently found out that my TV has full access to my Google account for some reason. Yeah. So this is basically my recommendation for regular Google consumers. And for anyone who is an admin of Google Workspace accounts, I recommend placing restrictions on new apps that your users can add. For instance, there is a nice policy in Google Workspace that allows you to say, "Any new apps that my users install themselves will be unable to access their Drive or their [inaudible]," which are the most sensitive services in Google.
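The kind of restriction Tal recommends boils down to a scope check on newly installed apps. A minimal sketch, assuming a policy that blocks anything touching Drive or Gmail: the scope URLs below are real Google OAuth scope identifiers, but the `allowed` policy function itself is hypothetical, standing in for what the Workspace admin console enforces.

```python
# Toy version of a Workspace-style policy: refuse any newly installed
# app whose requested OAuth scopes touch Drive or Gmail, the most
# sensitive services mentioned in the interview.
SENSITIVE_PREFIXES = (
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/gmail",
)

def allowed(requested_scopes):
    """Return True only if no requested scope touches a sensitive service."""
    return not any(
        scope.startswith(SENSITIVE_PREFIXES) for scope in requested_scopes
    )

assert allowed(["https://www.googleapis.com/auth/calendar.readonly"])
assert not allowed(["https://www.googleapis.com/auth/drive.readonly"])
assert not allowed(["https://www.googleapis.com/auth/gmail.modify"])
```

Checking by prefix is the point of the design: it catches every Drive and Gmail variant (`drive.readonly`, `drive.file`, `gmail.modify`, and so on) without enumerating them.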
Dave Bittner: Our thanks to Tal Skverer from Astrix Security for joining us. The research is titled "GhostToken: Exploiting GCP application infrastructure to create invisible, unremovable Trojan app on Google accounts." We'll have a link in the show notes.
Dave Bittner: The CyberWire Research Saturday podcast is a production of N2K Networks, proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. This episode was produced by Liz Irvin and senior producer Jennifer Eiben. Our mixer is Elliott Peltzman. Our executive editor is Peter Kilpe, and I'm Dave Bittner. Thanks for listening.