
Secret Herbs, Spices and Hacking Copilot Studio
Nic Fillingham: Since 2005, Blue Hat has been where the security research community and Microsoft come together as peers.
Wendy Zenone: To debate and discuss, share and challenge, celebrate and learn.
Nic Fillingham: On the Blue Hat podcast, join me, Nic Fillingham.
Wendy Zenone: And me, Wendy Zenone, for conversations with researchers, responders, and industry leaders, both inside and outside of Microsoft.
Nic Fillingham: Working to secure the planet's technology and create a safer world for all.
Wendy Zenone: And now on with the Blue Hat podcast. [ Music ]
Nic Fillingham: Welcome to the Blue Hat podcast, Scott Gorlick. Scott, thank you for joining us.
Scott Gorlick: Thank you very much Nic. It's great to be here.
Nic Fillingham: Who are you? What do you do here?
Scott Gorlick: So again, my name's Scott Gorlick. I'm the security architect for Power Platform. What do I do? I basically have the job of making sure that all of the things that you can do with Power Platform are governable in some way. And also, a lot of other fun stuff that is not related to doing that work that kind of ties into the social contracts around how application development works and what kind of consent people should have, and you know, how you can act on their data.
Nic Fillingham: Awesome. Well, today we're gonna talk about some of that stuff. We're also gonna talk about security research in Copilot Studio. So, there is a video training up on YouTube that you presented just a few weeks ago from when we're recording this podcast episode, where you talked about Copilot Studio -- obviously, what is it? -- and the sort of ask and the go-do out to the security research community. So, we're going to talk about that, and that'll be a great conversation. And what I love about what I think's gonna happen in this episode is that we're gonna be giving some very tangible guidance and ideas and dangling little, hopefully interesting nuggets that researchers can glom onto and maybe go and pull some threads and see what happens, and that's always great to give to the research community. But we also want to share a bit about our guest and a bit more about you, Scott. So, could you share a little bit about your role here at Microsoft, and maybe sort of what you've done at Microsoft, and how you got into the security space, and maybe anything that you feel could be relevant to the security research audience?
Scott Gorlick: Absolutely. Yeah. You know, so my background: I've been at Microsoft about eight years now, and Microsoft was my first security job. Before this I was a software engineer and architect, you know, e-commerce platforms, home banking platforms, that kind of work. Before that, I had been a product manager, QA engineer, hardware test engineer, basically just trying to keep the family fed as, you know, the early 2000s caused us to have to move between jobs, and companies up and left. And you know, how I stumbled into tech is kind of interesting, because, you know, technology is just a thing that you use, not a job per se. And you know, I used technology when I was managing a restaurant; I used to manage KFCs, used to drive a big rig, used to sell real estate. Basically, if you name it, I've probably done it at some point in my life. As a barista at some point, you know. How I think about problems is where technology applies, but, like, what I do and what I apply it to is different. And so, I think that's very tangible and pertinent to where we are, you know, in the move to AI, the move to agents, and how some of the paradigms are shifting. We may have to rethink, you know, what invariants we've looked at and how they no longer apply, and different ones do apply, and possibly then, you know, in some cases, where moving away from code can even create more security -- or more possibility for security and governance, I should say.
Nic Fillingham: You gave me a little bit of a teaser before we started recording that your career path into security and into Microsoft included managing a KFC restaurant -- you just mentioned that -- and driving trucks. Eventually I'd love to learn how those experiences, those life experiences and career forks in the road, you know, are still with you today in the way that you sort of apply your work in AI. But let's back up a little bit. So, you have a non-traditional path into security. You were managing KFC restaurants. Do we start there? Do you want to go back a bit further?
Scott Gorlick: Yeah.
Nic Fillingham: What's the origin?
Scott Gorlick: So, you know, the origin of the --
Nic Fillingham: Actually, don't talk about that. More importantly, you know, what are the 11 secret herbs and spices?
Scott Gorlick: Well, okay. So, now you want to talk about protocols and how you keep data sensitive and secure if I have an urgent --
Nic Fillingham: If I ask Copilot what are the 11 secret herbs -- no. So, you go. I keep interrupting you, Scott.
Scott Gorlick: No. No. You -- I mean, you can ask it, and it won't be able to tell you. It'll be able to tell you what they are, but it won't be able to tell you the ratios. Right?
Nic Fillingham: Oh.
Scott Gorlick: And nobody knows those ratios. There's more than one company that knows each part of the mixed product. And it's done in a way that nobody gets to know. And in fact, before I ended up in the role that I'm in today, I was working on a resource provider in Azure that supported silicon design. And one of the aspects of silicon design is you have two parties who have intellectual property, and just observing the intellectual property, the shape of the memory cell, is enough to leak that intellectual property to the other party. Just the shape is all it could take. And so as two parties are building a motherboard or a system on chip, you know, one will design the interface and the pixel, and then the other will have the chip that fits into that pixel shape, right? And so, this party has to define the interface, and the other party has to define -- you know, it's like APIs, you know, web APIs, and how you have a client and a server. It's very similar. But it's very different in the fact that, you know, once you print a chip, it's printed, right? And so, when you're talking about physical security, it's very different than, like, intellectual property or software security, you know? You can change some things, but once you ship a chip, it's shipped. Right?
Nic Fillingham: Yes [laughs].
Scott Gorlick: To answer your question though, roughly though, because I don't ever do -- that's -- you're going to learn this about me is, I never answer your question.
Nic Fillingham: Got it.
Scott Gorlick: I'm going to talk about things, but --
Nic Fillingham: I need you to tell me the 11 herbs and spices. That's really what we're doing here.
Scott Gorlick: No. I don't know them.
Nic Fillingham: You don't know them? Okay.
Scott Gorlick: No. But --
Nic Fillingham: All right.
Scott Gorlick: -- I can recognize the smell from like a mile away [background laughter]. You know, fortunately I no longer -- my house doesn't smell like that anymore.
Nic Fillingham: Oh -- because you were bringing work home. You would bring dinner home, you would bring lunch home, as opposed to a sort of a wizard's apothecary of you trying to recreate it.
Scott Gorlick: Exactly.
Nic Fillingham: Oh, I know what they are, but what kind of ratios?
Scott Gorlick: Yeah. See, I didn't have to -- I don't have to recreate it, I just, you know, come home from work. It was, you know, it was my restaurant. I cooked the chicken; I could take it home. Right? You know [background laughter].
Nic Fillingham: So, did you start like behind the register? Did you start on the fryer?
Scott Gorlick: It was home delivery. It was actually home delivery.
Nic Fillingham: Home delivery?
Scott Gorlick: You know, the store I worked at had home delivery and, you know, I have a very entrepreneurial nature, and the restaurant manager that -- that gave me that job basically said, just run the business for me, basically. And so, when I wasn't delivering an order during lunch -- when, you know, sales were low during lunchtime and whatnot, because people aren't home; they're at work -- I'd go around to the businesses, and I'd drop off flyers. And so, every delivery I made, I'd add to where I had dropped off flyers; I'd drop more flyers. Right? And so, you keep, you know, expanding your reach, meeting more people. And I spent a lot of time talking to people. Just kind of letting them know, like, hey, if you're having a party, we cater; do those fun things. And it's essentially just organic marketing, and it worked out really well. And, you know, I spent a lot of time talking to my customers, getting to know them, which worked out really well with the tips. So, you go from making minimum wage to making, you know, $20 an hour because you got $15 an hour in tips on -- well, minimum wage is a little more than it was back when I was doing that. So yeah, my tack there was to grow the business; to do those other things, and, you know, some of those strategies held true as I was adapting to, as I mentioned, like, the early 2000s, when companies lay you off -- what do you do now? How do you, you know, bridge the gap between two jobs? I think a lot of that shapes how I work today, and how I think about things. And again, like, learning about how you keep the two different companies, who each, you know, mix the spices, from knowing the whole recipe. To some degree that's applied in what we did in the Silicon Design Workbench: how do you have these two parties bring their food together and make the meal without either side ever knowing the whole recipe?
Nic Fillingham: So, at this point, had you had any experience or training or education in the technology or security field? Or at this point in your story, tech is a thing you use; it's nothing else?
Scott Gorlick: I have a high school education, and I started working on a Commodore 64 when I was about eight years old, writing BASIC, you know, DOS 5 [laughs], old times.
Nic Fillingham: Nice.
Scott Gorlick: Again, I never thought about technology and software as a career. I thought about it as a tool.
Nic Fillingham: Right. And so, at what point do you go into, not just the technology field, but do you start to think of security in a broader sort of concept and then something that might actually be more of a career focus?
Scott Gorlick: To be quite honest, I was told.
Nic Fillingham: You were told?
Scott Gorlick: I never thought that they were separate.
Nic Fillingham: Ah Ha.
Scott Gorlick: I didn't realize as a software engineer, it wasn't my job to do security.
Nic Fillingham: That seems to be the flip of a lot of people, you know --
Scott Gorlick: Yeah.
Nic Fillingham: -- or maybe a lot of people, but there's certainly a trend that you hear that, you know, security has to be sort of reiterated or sort of re-educated to folks that, you know, if you are writing software, you are -- you're a security engineer in some capacity. So, you're saying that was with you from the beginning?
Scott Gorlick: Right. If you look at Microsoft and our zero trust architecture, and how we think about verifying explicitly everywhere, you know, my personal background of not trusting anyone has led me to just very naturally move into zero trust as a technology pattern, because that's what I've been doing all along. And I think one of the important takeaways from zero trust, as it pertains to, like, software quality and developing a system that will stay up -- take security totally aside -- is that quality and security have this huge overlap between integrity and availability. Right? If you're not available, you can't have quality. If your data is changing in a way that you don't know what to expect, you don't have a quality system. You have the principle of least surprise violated -- least astonishment, whatever we want to call it today. And, you know, if you as a user are getting different data from every question, you don't trust that. Trust is fundamental, right? And verifying all of your inputs are correct, verifying all of your outputs are correct, and verifying all of your integrations are correct is required to basically build a secure and, you know, quality-centric system. And then you tie that in with things like distributed tracing. You know, you need to know if the ship's gonna fly into that mountain; you need to know if somebody is using your system in a way that you don't expect. So, this distributed tracing, along with integrity and availability -- there's huge overlap between quality and security. That's something I push on a lot, and my ability to talk to software engineers as a software engineer, and talk about cyclomatic complexity, and talk about the SOLID principles, and how you need to think about this being encapsulated and all of the trust that you're putting in that encapsulation -- you know, those concepts really resonate with the people that I talk to.
Nic Fillingham: I don't want to end this conversation or this particular thread here, because it's fascinating. And so, I'm wondering, what's the best way for -- should we hard-shift into talking about Copilot Studio, or how do we sort of ease our way into that conversation? Which is one of the things we're going to cover on today's episode.
Scott Gorlick: The connection point between what we're discussing and how it applies to AIs and LLMs in general is the predictability, or lack thereof, in some of the models that are out there, and some of the ways that you can use them. So, the biggest issue with -- I'll cite Alex Trebek, and the question that he never asked Watson, which is, you know, Watson gets the Daily Double, Watson, you know, bets some absurd amount, and Alex starts to ask, "Why did you -- never mind. I don't want to know." Right? So, you know, "why did you choose that number" is a question that you could ask an LLM, but the answer that it gives you back might not make sense to you as a human.
Nic Fillingham: I need to back up a little bit. So, you were talking about, so Alex Trebek, Jeopardy, there was some -- there was a famous episode where IBM's Watson, AI, was a contestant.
Scott Gorlick: Right.
Nic Fillingham: Against --
Scott Gorlick: Right.
Nic Fillingham: -- humans? Is that right?
Scott Gorlick: Yeah. Yeah.
Nic Fillingham: Against, like, Ken Jennings or something like that?
Scott Gorlick: Yep.
Nic Fillingham: Got it. And so, you're saying Alex never asked Watson "Why did you pick that amount?" But you --
Scott Gorlick: He started to ask the question.
Nic Fillingham: Oh!
Scott Gorlick: And then he cut himself off and just -- he's, "Never mind; I don't want to know." Basically.
Nic Fillingham: How interesting. But you're saying, you do want to know.
Scott Gorlick: Well, I would take it in a different perspective --
Nic Fillingham: I mean, I would want to know. That sounds fascinating.
Scott Gorlick: -- and I would say that I don't care to know, but I need to be prepared for any absurd question and answer, and handle those. Right? So, the unpredictability of the AI or LLM needs to be tamed by post-processing, pre-processing, before you move to the next step. And basically, these concepts are consistent with software. Right? If you make a call to a distributed system, it might not respond in a way that you expect. It might error. It might give you an unexpected response. There's no difference between that integration and an integration with an AI or LLM. The way I think about AIs and LLMs, as it pertains to Copilot Studio, is: in the steps where you're interfacing with the AI or LLM, treat it like it's a giant hash table. How is it hashed? How does that work? Doesn't matter. I put in a key; I get back a value. Is it what I expect? And I still need to verify the output, right? So I know what I'm getting, which will be input to my next step.
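A minimal, hedged sketch of that "giant hash table you still have to verify" idea, in plain Python; the call_model stand-in and the expected JSON shape are invented for illustration and are not anything Copilot Studio actually exposes:

    import json

    def call_model(prompt: str) -> str:
        # Stand-in for whatever LLM endpoint the agent actually calls.
        return '{"answer": "example", "confidence": 0.9}'

    def lookup(key: str) -> dict:
        raw = call_model(f"Return JSON with fields 'answer' and 'confidence' for: {key}")
        try:
            value = json.loads(raw)  # the output may not even be valid JSON
        except json.JSONDecodeError as exc:
            raise ValueError("model returned non-JSON output") from exc
        if not isinstance(value, dict) or set(value) != {"answer", "confidence"}:
            raise ValueError("unexpected shape in model output")  # verify before the next step
        if not 0.0 <= float(value["confidence"]) <= 1.0:
            raise ValueError("confidence out of range")
        return value  # only now is this safe input for the next step

    print(lookup("what is the refund policy?"))

The point is the shape of the code rather than the specifics: the model call is treated as an untrusted lookup, and nothing downstream sees its output until it has been parsed and checked.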
Nic Fillingham: All right. At this point, I do want to pause, and I want to ask -- I reckon the majority of listeners probably have pieced together by now what Copilot Studio is, sort of, but let's just make sure that we're not too far ahead of it. What is Copilot Studio?
Scott Gorlick: Copilot Studio is a new development experience where you can assemble actions, do dynamic training, create software without writing code that is inclusive of AI and LLMs. And then assemble those things and publish them as agents or other steps in the chain.
Nic Fillingham: Got it. So, is it using Copilot to create software? Or is it using Copilot to specifically create sort of Copilot derivative experiences and solutions like an agent or some other set of sort of task automation or something else?
Scott Gorlick: If you are a customer of mine and you want to build something like Copilot, you would use Copilot Studio to build your agent. And --
Nic Fillingham: Got it.
Scott Gorlick: -- you know, as a matter of fact, I am going to use Copilot Studio to help me handle all of the ad hoc questions that I get as a security architect in a group with a five-digit number of engineers. And when I build that agent, I'm going to need to do the defensive coding over how the agent interacts, ensure that I have authentication handled on the front end and validation of my outputs, and, you know, make sure that I'm not giving crypto advice about how to roll your own crypto [background chuckles]. Because that would be a terrible day. You know, so, yeah, that's kind of the big picture. Copilot Studio is not Copilot. This is where it gets a bit, you know, ouroboros -- the snake eating its own tail. You can use Copilot in Copilot Studio to help you build a different agent.
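As a rough illustration of the front-door defensive checks Scott describes -- authenticate first, filter obviously off-limits topics before anything reaches the model -- here is a small Python sketch; the blocked-topic list and the ask_agent stand-in are hypothetical, not Copilot Studio APIs:

    BLOCKED_TOPICS = ("roll our own crypto", "roll your own crypto", "write my own encryption")

    def ask_agent(question: str) -> str:
        # Stand-in for the published agent; a real one would also validate its own output.
        return f"(agent answer for: {question})"

    def guard_question(user_id: str | None, question: str) -> str:
        # Authentication first: refuse anonymous callers outright.
        if not user_id:
            return "Please sign in before asking the assistant anything."
        # Cheap topic filter before the question ever reaches the model.
        lowered = question.lower()
        if any(topic in lowered for topic in BLOCKED_TOPICS):
            return "I can't help with designing your own cryptography; use a vetted library."
        return ask_agent(question)

    print(guard_question("researcher@contoso.com", "Is it OK to roll our own crypto here?"))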
Nic Fillingham: Right.
Scott Gorlick: Oh, yes.
Nic Fillingham: Okay.
Scott Gorlick: Think of Copilot Studio as Visual Studio without a C# compiler.
Nic Fillingham: That's another great, sort of, pivot on what it is. And now with respect to sort of this episode, the Blue Hat podcast, that sort of Blue Hat security researcher audience -- and you did a recent training on security research in Copilot Studio -- is the sort of action here, the call to the research community, hey, we want you to use Copilot Studio as a research tool to go and research things, or we want you to research as in go and try and break Copilot Studio so that we can make it safer? Or is it both? Or is it something else?
Scott Gorlick: Yes. And also, the intent of the -- so the talk that I did is not your traditional security talk. It's a bit of a mix between marketing and then some learning and then a bunch of security words kind of sprinkled in here and there. So, putting it together, the talk outlines the abstract concepts that apply to Copilot Studio that would help you file a report and have, you know, MSRC understand that report clearly and efficiently communicate these things back to my product team for us to handle. Right? So, one of the biggest challenges a lot of security researchers struggle with is, you know, I file my report, Nic, and it's, you know, not an issue. A lot of times those can be misunderstandings. Some people get frustrated, and they'll fail to continue to push, and we lose out on a good report, just based on a failure to have common language. Right? So, part of the intent of the talk -- and again, I took a 200-slide marketing deck and trimmed it down to about 50 -- was to talk over marketing concepts, and the outside-in, like, what is Copilot Studio? Like, what are the objects in it? So that when you are doing your research against Copilot Studio, you're going to file your bugs in the, you know, Power Platform bounty. Right? Not the AI bounty; we have multiple bounty programs. So, in the AI bounty you're attacking the actual LLM. But if you find an issue in Copilot Studio and how it integrates with flow, how it integrates with Dataverse, how it integrates with itself, or anything of that nature, you know, that's Power Platform. Right? From the outside in, Microsoft is a very complex place. You see the word Copilot, but for you to understand what's actually happening inside -- like, I don't understand what's happening inside, because things change as we're growing and evolving the system. Like, the business will do what the business does to, like, optimize the development of the system. But from an outside-in perspective, the features are consistent, because, you know, our customers depend on this consistency, because, you know, they're building their enterprise applications and their go-to-market applications on top of these systems.
Nic Fillingham: Got it. So, the training, which we will link to in the show notes -- and hopefully, you know, however you're consuming this podcast, there'll be a link that goes back to that training -- is guidance and detail for security researchers that says to them, hey, if you're gonna go research against Copilot Studio, if you're gonna go try to break Copilot Studio with the intent of finding something that you then submit to MSRC, here is some guidance; here are some tips and tricks, etcetera, for how to understand what it is that you've found, and then also submit it in the way that is gonna make sense to MSRC so that it can be, sort of -- it can be perceived, it can be understood, it can be triaged, and then hopefully, sort of, fixed, and then maybe even, you know, get a bounty, whether it's on the AI side or on the Dynamics side. Did I get that right?
Scott Gorlick: Yep.
Nic Fillingham: Awesome.
Scott Gorlick: I want to relate two stories here that kind of tie into that. You know, having common diction and being able to communicate effectively is critical. In a prior role, I worked with security investigators, and I was, you know, working in that group and we were building a system, and I was hoping to convert, and did convert, some people from being security investigators to being software engineers. And, you know, basically, they had the knowledge of security investigation. They did not necessarily have, you know, the diction for how to handle a deadlock. Right? And communicating how a deadlock occurs. And so, you know, what I found there was, to communicate effectively with people who didn't have that diction, you know, I had to show them what this looked like. Right? So, that was the incentive to get up on stage -- or, virtual stage -- and talk about security research and Copilot Studio: so that, you know, when you're talking about an agent step versus an HTTP action, versus a connector action or a flow invocation, the bridge between Copilot Studio and flow, or that connection, that's where the bug is. If you say, hey, I found a bug in this LLM, you know, we'll look at the LLM and say, hey, we're hardened against that, let's say. And it might take a couple of back-and-forths to then figure out, like, oh, no, you're not talking about the actual LLM step; you're talking about the step where it goes and connects to something. And I just want to streamline that and save, you know, save our researchers' time. You know, effective and fast communication is critical for any sort of vulnerability; we need to squash these things quickly. The other story there is Erik Donker, who is our top bug bounty finder in the Power Platform bounty program. And, you know, we hired him about -- it was about September last year; somehow I convinced him to come work for us [laughs]. Which was fun. His specialty, and what I observed by receiving his reports, was that he was talking to us in Dynamics concepts, you know, Dynamics 365, which, you know, is built on top of the Power Platform. The way that he communicated his vulnerabilities, I could just hand straight to the engineering team. I didn't have to do a translation. And he even had architectural suggestions, like, hey, if you fix this, it'll fix the problem. Right? It was because he came from being a Dynamics integrator. You know, so basically, the experience of using the product is what made him so good at researching against the product.
Nic Fillingham: This is, you know, in part also about sort of efficiency.
Scott Gorlick: Exactly.
Nic Fillingham: Well, it's all about efficiency [laughs].
Scott Gorlick: A hundred percent. I want to save researcher time, I want to save my time, I want to save MSRC time. Like, you know, there's a big machine and, you know, we have a lot of important things to do. So, every second that we can save is great.
Nic Fillingham: So, part of that is then, okay. So, that's -- I know some people that are already considering or already in the process of researching against Copilot Studio. I wanted to ask you sort of two more questions that are related. I think the first one is, for researchers out there that are looking for their next challenge, looking for the next target, should they consider researching against Copilot Studio? And if so, you know, is there a specific sort of skillset or focus area that you would sort of highlight? What should they go and try doing? Like, where is the green, or sort of yellowish, field for them to maybe want to apply their lens?
Scott Gorlick: That's an incredibly challenging question to answer, because in order to do research, I think the most important skill for you to have is attentiveness. Right? And comfort with what you're working against will increase your attentiveness. You're gonna notice those subtle issues. Curiosity is also an incredibly powerful researcher skill. Right? That looks weird; chase the threat. You know, one of the things that I am working with Erika on is, how do you increase the efficiency of going from hunch to actual deliverable finding? And, you know, part of that process goes down the path of, like, taking some assertion that you have a problem and then proving it. Shout out to early 2000s zines, PoC||GTFO, you know, if [laughs] -- have you got your Bible there? No? Okay.
Nic Fillingham: I do. I do actually, it's --
Scott Gorlick: I'll pull out my Bible somewhere. I've got it right there, actually.
Nic Fillingham: Yeah. I got it. Yeah.
Scott Gorlick: The point there is, like, if you have a suspicion, that's great. How long it takes you to get to proof -- that's what makes you a better researcher, and for you, not for us. Right? You're gonna be able to find more, and that'll affect you more than it affects us, I think -- you know, us being the recipient of the reports. But, you know, we also benefit too, because now you can find more reports. So, the understanding of communication protocols, I think, is the most critical researcher skill. You don't necessarily have to understand everything about crypto to break crypto if somebody's leaking keys. Right? You know, you're sending your password in plain text, or the XKCD: I don't need to brute force your password, I just need to brute force your fingers. Right? [Background chuckles]. So, you know, how you go about solving problems doesn't have to be the most complex solution. And, you know, there's a lot of interest in doing the very low-level C, C++, assembly exploits, and you're finding, like, problems in silicon. I think that's incredibly powerful research and important to do, but where we are in the world today, you know, we've given up a lot of -- you know, I don't run a mail server anymore. I use Exchange, let's say. Right? So, you know, we've given up a lot of trust to companies that are providing services for us, and I'll say that that's appropriate. Right? Because it is. We have people who are looking at the low-level code, and they can focus on that. From the outside in, research how these components get assembled. How they communicate with each other. Right? Because, as I mentioned earlier, the org comes and goes and it changes around. So, every time we have these changes, you know, you lose a little bit of knowledge. You lose a little bit of detail around that integration that existed. And so, you know, it's death by a thousand cuts. Like, when that integration becomes risky -- like, nobody ever thought about using this integration that way. You know, you ever reorg something, somebody comes along and says, oh, let me use this integration; it looks like exactly what I need. So, from the outside in, read the feature release notes. Look at what features are changing. Right? Because, you know, something that's existed is usually stable. But every time you add something new, that's a wonderful area to go poking and see, how do these two components connect? And then thinking about, like, the different ways that those two systems are fundamentally -- what are the fundamental objects in those systems, and how do they relate to other things. I'm having a hard time, like, characterizing how I look at that, because it's just something I see now, and I don't know how I learned how to see it. I think I learned how to see it through just integrating systems, my background as a software engineer, and just having things fail and fail until they succeed. You know, being the guy that, you know, when auth wasn't working -- hey Scott, can you look at auth? That's how I learned about authentication and authorization, and, you know, how to do it right or not to do it right. And how to do it in a way that kind of works until you can do it right. Yeah. It's just observation. Just be very observant. Pay attention. Never stop digging down to the metal, but also, like, be broad. Understand how things can act.
Nic Fillingham: This is fantastic advice, I think, for anything really, not just sort of Copilot Studio, but for any way to really sort of attack or begin a journey into research. Understanding how -- you know, thinking of Erik Donker -- understanding deeply how the product, how the service, how the technology works. Reading publicly published documentation and then actually using the product and the service and just tinkering, and seeing, when I click this, what happens? And when I join these two things together, what's actually going on behind the scenes? And then from there, you know, natural curiosity. You know, if you have that -- if your brain works that way -- you'll, as you say, you may start to see the connections, or see where, oh, that's a bit unclear, and, you know, dig into that. Or, that doesn't work the way that I thought it would work. The other thing I wanted to do with that question there -- I'm going to take one more stab at it -- is just your background. You ran a KFC restaurant. You drove trucks. You did a bunch of stuff that we would consider, sort of, a non-traditional path into security. And so, I wonder -- we talked about curiosity, and you talked about people who want to know how systems work. Would you have any other sort of guidance, or any other sort of tips and tricks for people listening to this podcast that maybe have dabbled in a little bit of research, but haven't thought of trying to go and attack, break, find, you know, issues with something like Copilot Studio? And it's totally fine if you're like, nope [chuckles], we've exhausted that question. But I'm just wondering, like, yeah, who's listening to this podcast right now that you want to give a little extra sort of poke to say, I think you should go and try to hack this thing?
Scott Gorlick: If you're a researcher and you've done a bunch of OAuth abuse, come look at the OWASP low-code/no-code top 10 issues. Think about how they apply to Copilot Studio, to Power Automate, to connectors, to Dataverse, to Dynamics. You know, and then look at, how do I translate this one that I found over here in Office -- or M365 -- over to Power Platform? You know, there's another researcher, his name's Brad; he did research in M365, did some in Power Platform, went back to doing M365, and I'm kind of like, "Hey, come look over at this stuff." You know, I talk to researchers, and, you know, also, like, hey, you want to talk to me? Hit me up on LinkedIn. You know, send me a connect request and give me an actual message, and not just a hey, here's a connect request, and I'll be happy to talk with you. You know, obviously, I'm not going to give away secrets, but I can give away -- and on that topic, like, back to the 11 herbs and spices, I would argue that if everybody in the world knew the 11 herbs and spices, KFC would still be a popular place to eat fried chicken, because they can just do it at scale, and --
Nic Fillingham: Oh yeah. Yeah.
Scott Gorlick: -- and well, and quickly. And I'm not going to go spin up a pressure fryer in my kitchen, and hopefully not have the lid pop off through the ceiling. Right? And so, you know, my personal beliefs -- I'm not big on secrets; I'm very transparent. As much as I can be, sometimes, you know, like, I basically have to check myself, am I giving away too much?
Nic Fillingham: Oh yeah? Yeah? Prove it! What's your social security number, Scott? [Background laughter] Sorry. As you were saying?
Scott Gorlick: See, that's the thing -- trust me; I'm lying. So, you know, how do -- like, the other thing, too: you know, if you think you have found a problem, prove it is always the way to think, you know, to actually demonstrate that you have a problem. And this is just something that I do that actually is very helpful in building security into the product: I don't have to argue with the product managers about, like, hey, this works as designed. It's like, I really don't care how it was designed. It shouldn't behave this way. There shouldn't be this side channel. There shouldn't be this other outcome that we didn't expect. And so, you know, the prove-it mindset is super important. You know, so again, if you're researching currently somewhere else, come look at our stuff. You know, Copilot Studio, Power Platform in general. Tell us where we have invariants that look funky and we need to secure. You know, because I guarantee you there's something there, because nobody is perfect. And then if you're not so sure -- like, you're just now starting your Hack The Box journey or something like that, you're a software engineer and you're interested in security -- jump off the ledge, right? Just go do the work, right? Like, go file a report; find a bug. And as soon as you do that, you're going to break the imposter syndrome, and you're going to realize that I don't have to be able to do the abstract linear algebra, or any of the other cryptographic analysis, to breach this system. Oh, there's a token and it's -- you know, it's just in the communication stream.
Nic Fillingham: Thank you, Scott. And if I could add one more thing: there's quite a lucrative bounty in this space too. So, I've just got the MSRC bounty page up here -- we can link that in the show notes -- but it looks like the Microsoft AI Bounty Program and the Microsoft Dynamics 365 and Power Platform Bounty Program are both awarding bounties up to $30,000 US. So, you know, it's quite a lucrative space, so if nothing else, go and test your curiosity and see if you can break stuff. And, you know, not only would you be making stuff safer for everybody, but it could be a nice little payout, you know, as well. Not to bring it back to money, but, you know, there are a lot of people out there who are professional and semi-professional vulnerability researchers.
Scott Gorlick: Fuel is fuel.
Nic Fillingham: Absolutely. All right. I asked this up front, and we didn't get to it yet, but I wanted to ask about using Copilot Studio as a tool to conduct security research. Is that possible? Have you tried to do that? Have Erik Donker and other security researchers been able to do that successfully? I mean, I'm going to assume that Copilot Studio probably wasn't built primarily as a security research tool, but maybe I'm wrong. But is that another part of this conversation here, Scott?
Scott Gorlick: Copilot Studio is basically an application development environment. So, if you wanted to use it for research, I imagine you could. And, you know, let's say I find that dynamic chaining plus a Swagger file gives me the ability to just throw up an agent and say, hey, process this Swagger; go hit every endpoint and, you know, permute all the parameters until you find a thing. I'm sure you could do that. There are teams, indeed, doing research with LLMs and various other fun things to try to find bugs faster. You know, we have a lot of developers writing code. The faster we can find bugs, the faster we have fewer bugs, the better the code is, etcetera. I personally am not doing that. My team is more centered on the security architecture of the platform, kind of acting as a shim between us and our infrastructure security teams and such. Because, you know, we have a platform and, you know, rock-solid InfraSec folks working there. But then layer seven and up, right? That's really where my team focuses. So, we're inside the runtime receiving the web requests, thinking about how you have the security there and doing the reviews with the engineering teams. My team also focuses strongly on training our security champs. Our biggest initiative coming up for, you know, our next semester -- you know, we work on a semester cadence and whatnot -- so, our biggest initiative for the upcoming semester is training the software engineers within the service teams how to do their own threat model reviews. How to file their own vulnerability reports. How to write their own CodeQL queries. How to analyze, you know, different vulnerability reports and do variant analysis. So, basically teaching the software engineering community more about what security research looks like. And so, this is the flip. This is me coming to the security research community and talking a little bit about the software engineering side of the house.
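For the curious, here is a rough plain-Python sketch of the Swagger-driven permutation idea Scott mentions at the top of that answer; the base URL, the candidate values, and the Swagger 2.0 parameter layout are assumptions, and something like this should only ever be pointed at a system you are authorized to test:

    import itertools
    import requests

    BASE = "https://api.example.test"  # placeholder target
    CANDIDATES = ["0", "-1", "9999999", "'", "<script>", "../../etc/passwd"]

    spec = requests.get(f"{BASE}/swagger.json", timeout=10).json()

    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if not isinstance(op, dict):
                continue  # skip path-level keys such as shared "parameters"
            # Collect the query parameters this operation declares.
            params = [p["name"] for p in op.get("parameters", []) if p.get("in") == "query"]
            if not params:
                continue
            # Try a small cartesian product of candidate values across those parameters.
            for combo in itertools.product(CANDIDATES, repeat=len(params)):
                resp = requests.request(method.upper(), BASE + path,
                                        params=dict(zip(params, combo)), timeout=10)
                if resp.status_code >= 500:  # server errors are the interesting signal
                    print(method.upper(), path, dict(zip(params, combo)), resp.status_code)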
Nic Fillingham: So, it sounds like Copilot Studio, if we just look at it now as a tool, could certainly be an interesting tool in the toolbox of the security researcher. It could be used to do some level of automation, as you sort of mentioned -- sort of throwing a Swagger file at it and seeing what it can do. I also wondered, as you were speaking there, okay, so this might be helpful for researchers out there where English isn't their first language. Are the LLMs behind the scenes that are a part of Copilot at the point now where, you know, let's say English is not my primary language, and I'm researching and I've found something, but I don't have that vocabulary yet -- I'm not entirely sure what the most efficient way is to actually describe this thing in order for it to be submitted to MSRC. Could I be using Copilot or Copilot Studio as a way to help me better flesh out, understand, and describe in detail, and also accurately, what it is that I want to then submit to MSRC? Maybe that's not a -- I don't know if it's a Copilot Studio question, or more of a Copilot question?
Scott Gorlick: It's a good question; it'll help me answer the question that you've been asking that I've not been able to answer up until now, which is --
Nic Fillingham: What are the 11 secret herbs and spices Scott? Sorry. No.
Scott Gorlick: Exactly. So, the translation of your native language to English -- obviously, you know, read the English report, make sure it makes sense, have some editor of some kind look at it, you know, before you publish. Or, you know, file it and we'll give you some feedback. But that's a Copilot activity.
Nic Fillingham: Okay.
Scott Gorlick: I would go to Copilot or some -- you know, or ChatGPT, or something of that nature. So, going back, like, I don't know the answer as to which of the LLMs, which of the models -- if it's DeepSeek, if it's Llama, if it's, you know, whichever -- which of those models are best at doing the translation. Have fun. Try it. Prove it. Right? Going back to the PoC||GTFO. You know, so you can use Copilot as a tool to help you translate your report, right? From your native language to something that's, you know, easily communicated with MSRC. And then you could use Copilot Studio to help you build the flow between, I upload my, you know, non-English text to something, or I pass it to the agent and then it comes out translated, right? So, that should help draw a nice boundary around, like, what's the difference between the LLM, a Copilot, and then Copilot Studio.
Nic Fillingham: That's a great separation and distinction there. And I also wonder if just, you know, getting in and playing with Copilot Studio and seeing if it can automate, or if you can create some of those sorts of flows in the way that you conduct your normal research, would just be a way of getting to know the studio, getting to know the environment, getting to understand some of the language and vocabulary, which then may, in turn, spur some ideas and some curiosity, which could either lead to more research or, actually, sort of breaking Copilot Studio in a way that could be submitted and be helpful to you guys.
Scott Gorlick: So, if you watch the training that I did -- you know, Copilot Studio, Power Platform, is a second language from English. Right? The concepts that we produce, you know -- the words mean something, but they might not mean what you think they mean until you go look at the, you know, the actual material, right? So, that's there as the first stage, to give you the concepts and the words, and then you can use all these other tools, you know, to help translate the rest of the report and do more of the production of the bounty reports that you file. You know, maybe you even automate the filing. Work with MSRC to create automated ingestion via API for your reports [laughs].
Nic Fillingham: Oh, that's interesting. Scott, we are coming up to time here. I wanted to just quickly, you know, ask sort of a final question here about Copilot Studio. So, I'm a security researcher. I want to play with Copilot Studio to learn a bit more about it, and maybe even start to conduct some research against it. How do I get access to it? Do I have to pay? Or is there, like, a developer or insider program, or, like, how do I get access to it so that I can begin this journey? What are my options?
Scott Gorlick: So, they're linked in the slide deck and whatnot from the talk that I did last time, the YouTube talk that's up there now. But essentially, we do have demos that you can do. I think you can get a 30-day demo. And what you'll end up with is an Azure tenant and all the other fun stuff. It's very similar to the existing Azure demos or various other demos that you might have taken. If you are very interested and have some skin in the game, and you're willing to party with us a little bit -- like, we've worked with Erik, in fact, before he joined, to help do something. I don't know that we're going to be able to do more of those types of programs, but if you have ideas, man, I'm happy to help figure out more. But the baseline is going to be: get a demo, you know, use the tool, do the work, pay attention to all the URLs. Pay attention -- you know, turn on your network traces. Turn on your product MA and see where it's going, and exploit it. Right? So, there are demos, but a lot of times licensing is a factor that can block some of the features. You know, when those things come up, it's -- I don't have a good solution for that, unfortunately. But I'm always happy to take input and see how we can do more.
Nic Fillingham: All right. Well, we will put your LinkedIn in the show notes as well. So, I guess if researchers out there have questions or ideas, I'll let them follow up with you directly. Scott, before we let you go: what's next? What are you working on right now? Or, what's coming up? Or, you know, what's got you jazzed, apart from security research in Copilot Studio, that you'd like to leave folks with?
Scott Gorlick: I am currently waiting with my finger on the F5 button, trying to find me a 5090 or something of that ilk so I can run some local models and do some image processing of the cameras on my property, so I can figure out where all the squirrels are.
Nic Fillingham: Hang on. That's a graphics card?
Scott Gorlick: Yeah.
Nic Fillingham: Oh. Got it. Okay.
Scott Gorlick: The Arc. The new video card that nobody will be able to get for a while.
Nic Fillingham: And you want that to identify squirrels? Sorry. I think I missed the beginning of that. You -- what?
Scott Gorlick: So, the cameras that I have around my house. Right?
Nic Fillingham: Right.
Scott Gorlick: I'm just pumping the video through some kind of image recognition model --
Nic Fillingham: Ah. Got it!
Scott Gorlick: -- I figure out what is where, and then you know, name all the squirrels that come to my property, or whatever.
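For anyone tempted to try the same thing at home, a hedged sketch of the basic loop -- grab a frame from a camera and run it through an off-the-shelf classifier; the camera URL is hypothetical, and this uses a stock torchvision model rather than whatever Scott ends up running locally:

    import cv2
    import torch
    from torchvision import models

    cap = cv2.VideoCapture("rtsp://camera.local/front-yard")  # hypothetical camera stream

    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()
    labels = weights.meta["categories"]

    ok, frame = cap.read()
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # OpenCV frames are BGR
        tensor = torch.from_numpy(rgb).permute(2, 0, 1)  # HWC uint8 -> CHW
        with torch.no_grad():
            probs = model(preprocess(tensor).unsqueeze(0)).softmax(dim=1)
        print(labels[probs.argmax().item()])  # ImageNet includes a fox squirrel class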
Nic Fillingham: That sounds like a great use for a 5090.
Scott Gorlick: That's -- yeah. Either that, or I -- you know, I recently picked up a camera and I've been starting to take pictures of the sky at night, so that's a lot of fun. I like to get out and hang out in the wilderness, because all this technology's fantastic. But man, here in the west coast of America, it is beautiful, and it is rugged, and it is just a lot of fun.
Nic Fillingham: I think this might be a great place to leave it. Scott, where can folks contact or find you, a platform? We've mentioned LinkedIn; we'll put that link in the show notes. Anywhere else for folks to reach out and find you?
Scott Gorlick: That's it. I'm not a social media kind of guy. It's just not what I do. I am on Mastodon if you can find me [laughs].
Nic Fillingham: If we can find you. Well, that might be a challenge for our listeners, see if they can find you. Hey Scott, thank you so much for being on the Blue Hat podcast. I hope our listeners will take you up on the training and then giving this a shot and seeing if they can find some fun stuff in CoPilot Studio. Thanks for being with us.
Scott Gorlick: I really appreciate the opportunity. [ Music ]
Wendy Zenone: Thank you for joining us for the Blue Hat podcast.
Nic Fillingham: If you have feedback, topic requests, or questions about this episode --
Wendy Zenone: Please e-mail us at bluehat@microsoft.com, or message us on Twitter at msftbluehat.
Nic Fillingham: Be sure to subscribe for more conversations and insights from security researchers and responders across the industry --
Wendy Zenone: By visiting bluehatpodcast.com, or wherever you get your favorite podcasts. [ Music ]