Black Hat 2017 has wrapped up, and by all accounts it was another successful conference, with an active trade show floor, exciting keynotes and engaging, informative educational sessions on a variety of topics. There was business being done, with hopeful entrepreneurs and investors alike looking to identify the next big thing in cybersecurity. In this CyberWire special edition, we’ve rounded up a handful of presenters and one investor for a taste of Black Hat, to help give you a sense of the event.
Patrick Wardle is Chief Security Researcher at Synack, and creator of Objective-See, an online site where he publishes the personal tools he’s created to help protect Mac OS computers. He’ll be telling us about his research on the Fruitfly malware recently discovered on Mac OS.
Hyrum Anderson is technical director of data science at Endgame. He will discuss research he released on stage at Black Hat showing the pros and cons of using machine learning from both a defender and an attacker perspective.
Zack Allen, Manager of Threat Operations, and Chaim Sanders, Security Lead, of ZeroFOX will be speaking about their Black Hat presentation on finding regressions in web application firewall (WAF) deployments.
And we’ll wrap it up with some insights from Alberto Yepez, founder and managing director of Trident Cybersecurity, on the investment environment and the changes he’s seen in the market in the last year.
Dave Bittner: [00:00:03] Black Hat 2017 is wrapped up, and by all accounts, it was another successful conference with an active trade show floor, exciting keynotes and engaging, informative educational sessions on a variety of topics. There was business being done with hopeful entrepreneurs and investors alike looking to identify the next big thing in cybersecurity. In this CyberWire Special Edition, we rounded up a handful of presenters and one investor for a taste of Black Hat to help give you a sense of the event.
Dave Bittner: [00:00:31] Patrick Wardle is chief security researcher at Synack, and he's also the creator of Objective-See, an online site where he publishes the personal tools he's created to help protect Mac OS computers. He'll be telling us about his research on the Fruitfly malware recently discovered on Mac OS. Hyrum Anderson is the technical director of data science at Endgame. He'll discuss research he released on stage at Black Hat showing the pros and cons of using machine learning from both a defender and attacker perspective. Zack Allen is manager of threat operations and Chaim Sanders is a security lead at ZeroFOX. They'll tell us about their Black Hat presentation on finding regressions in web application firewall deployments. And we'll wrap it up with some insights from Alberto Yepez, founder and managing director of Trident Cybersecurity, on the investment environment and the changes he's seen in the market in the last year. Stay with us.
Dave Bittner: [00:01:32] Time to take a moment to thank our sponsor Cylance. Are you looking for something beyond legacy security approaches? Of course you are. So you're probably interested in something that protects you at machine speed and that recognizes malware for what it is, no matter how the bad guys have tweaked the binaries or cloaked their malice in the appearance of innocence. Cylance knows malware by its DNA. Their solution scales easily, and it protects your network with minimal updates, less burden on your system resources and limited impact on your network and your users. Find out how Cylance is revolutionizing security with artificial intelligence and machine learning. It may be artificial intelligence, but it's real protection. Visit cylance.com to learn more about the next generation of anti-malware. Cylance - artificial intelligence, real threat prevention. And we thank Cylance for sponsoring our show.
Patrick Wardle: [00:02:26] So Fruitfly was discovered originally in February of this year - actually, the first Mac malware of 2017...
Dave Bittner: [00:02:33] That's Patrick Wardle.
Patrick Wardle: [00:02:35] ...Discovered by Malwarebytes. A few weeks after that initial discovery, a friend of mine gave me a hash of a variant - a new variant - variant B - that I took a closer look at. It looked like it came out around the same time frame as what was discovered, again, in January or February of this year.
Dave Bittner: [00:02:51] So give us an overview. How does it work?
Patrick Wardle: [00:02:53] Yeah. So Fruitfly targets Mac users, so it's a Mac backdoor, essentially. It's a fairly feature-complete backdoor, providing a remote attacker the ability to fully control an infected computer - so standard things like file upload, process execution, running shell commands, enumerating processes - but it also has some interesting capabilities. For example, it can interact with the mouse and keyboard, and the initial variant also had the ability to turn on the webcam. So it looks like the main goal of the malware was, unfortunately, to spy on infected victims.
Dave Bittner: [00:03:27] And is there any notion of who was being targeted?
Patrick Wardle: [00:03:31] So that was interesting. So one of the cooler aspects, I think, of my analysis was I was able to decrypt some of the backup command and control server addresses, and these were available for registration. So as part of my analysis, I had built a custom command and control server so that I could task the malware in the lab and basically have it show me what it was able to do. So the end result of that analysis was I had this custom command and control server that could fully interact and talk to the malware.
Patrick Wardle: [00:03:58] So anyways, I registered these backup domains and put up my custom command and control server. And immediately, hundreds of infected victims connected. Now, I didn't task any of those victims, but when the malware connects, it sends a host and username. And also, obviously, I have its IP. So with those three pieces of information, you can readily identify victims' full names and where they're roughly, geographically, located. And then you can hop on LinkedIn or Google and get a pretty good sense of who these people are. So in this piece of malware, this scenario, we actually were able to pretty readily identify the victims. And unfortunately, it looks like it's just everyday kind of people - families, you know, individuals - most in the U.S. and with certain interesting geographic clustering. Looked like - for example, Ohio had about 20% of the victims, so that's kind of interesting, in a way.
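For a concrete picture of what a sinkholed backup domain involves, here is a minimal Python sketch of a check-in listener. The pipe-delimited wire format, the port number and the field names are all assumptions made for illustration - the actual Fruitfly protocol was not published - but it shows how a sinkhole can log the three identifying fields (IP address, username, host name) Wardle describes without ever tasking a victim.

```python
import socket

def parse_checkin(raw: bytes, client_ip: str) -> dict:
    """Split a HYPOTHETICAL 'version|username|hostname' check-in."""
    version, username, hostname = raw.decode("utf-8").split("|", 2)
    return {
        "ip": client_ip,        # from the TCP connection itself
        "version": version,     # malware build number
        "username": username,   # local account name
        "hostname": hostname,   # often the owner's full name
    }

def sinkhole(port: int = 4444) -> None:
    """Accept connections and log check-ins; never send tasking back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, (ip, _) = srv.accept()
        with conn:
            print(parse_checkin(conn.recv(4096), ip))
```

As in Wardle's setup, the listener only records what the malware volunteers on connect; sending commands back would cross the legal line he mentions.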
Dave Bittner: [00:04:49] Any idea what the infection vector is?
Patrick Wardle: [00:04:52] No, and that's a great question. I'm pretty sure certain people know. You know, I handed over my research and information to law enforcement, and I know Apple has been looking at this, as well. You know, if we look at how, traditionally, Macs get infected, it's usually through some sort of user interaction - so an email with a malicious attachment, perhaps a Trojanized or pirated application or maybe even an infected website that has a fake security pop-up. But in this case, we actually didn't see an installer. So maybe there's a different infection vector. That having been said, the malware, while it's feature-complete, isn't incredibly sophisticated. So, you know, I would be surprised if it's using some, you know, really advanced exploit or some infection technique that perhaps doesn't use user interaction. But hopefully, we'll have an answer to that in the not-too-distant future.
Dave Bittner: [00:05:40] So would a standard antivirus detect it?
Patrick Wardle: [00:05:43] So this is one of the issues. So looking at this malware, there's some forensic clues and also some other interesting information - which unfortunately isn't public at this time - that seems to indicate that this malware has been around perhaps five years or even longer, which is a rather long time. So it's possible that this malware, you know, hasn't been detected for almost half a decade or more. And when it was originally discovered by Malwarebytes and when I started looking at the variant B, neither of those samples were detected by any of the antivirus engines on VirusTotal. So my guess was this piece of malware kind of flew under the radar. And it being custom code - a custom piece of malware - you know, it didn't have any detections for, perhaps, an incredibly long time. So that's a little worrisome and I think kind of illustrates the fact that perhaps, let's say, antivirus on Mac has a rather long way still to go.
Dave Bittner: [00:06:37] You have some security tools on your own website. Would they have had any chance of detecting this?
Patrick Wardle: [00:06:44] Yeah, that's a great question, and the reason I design security tools is exactly for a scenario like this. So most of my tools - the way I design them - they look for malicious activities versus signatures of known malware. So it's likely that my tools would have detected this. For example, when I ran them against the sample, a lot of the activities of the malicious code would trigger. So for example, one tool, KnockKnock, will show you installed items. This piece of malware installs itself as a launch agent, so you would see an unsigned launch agent that probably you wouldn't recognize. OverSight, which monitors the webcam and the microphone, would have likely popped up when the malware turned on the webcam. And it was interesting because the malware had the ability to alert the attacker when the user was not active, and this was probably by design as a way to spy on users without them noticing, because when the webcam was turned on by the malware, the LED indicator light would go on. And if you're sitting at your computer and all of a sudden, the LED indicator light goes on, like, throw that computer out the window, right?
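A KnockKnock-style persistence check can be approximated in a few lines. The sketch below enumerates launch agent property lists and flags any whose program fails `codesign` verification; the directory paths follow macOS conventions, the helper names are invented, and this is an illustration of the idea rather than how KnockKnock itself is implemented.

```python
import glob
import os
import plistlib
import subprocess

# Standard macOS launch agent locations (per-user and system-wide).
AGENT_DIRS = [
    os.path.expanduser("~/Library/LaunchAgents"),
    "/Library/LaunchAgents",
]

def program_of(plist: dict) -> str:
    """Return the executable a launch agent plist points at."""
    return plist.get("Program") or plist.get("ProgramArguments", [""])[0]

def unsigned_agents():
    """Yield (plist path, program) pairs whose binary fails codesign.

    Only meaningful on a Mac, where the codesign tool is available.
    """
    for d in AGENT_DIRS:
        for path in glob.glob(os.path.join(d, "*.plist")):
            with open(path, "rb") as f:
                prog = program_of(plistlib.load(f))
            # codesign exits non-zero for unsigned or invalid binaries
            ok = subprocess.run(["codesign", "-v", prog],
                                capture_output=True).returncode == 0
            if not ok:
                yield path, prog
```

An unsigned, unrecognized entry surfaced by a check like this is exactly the kind of item Fruitfly's launch agent persistence would have produced.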
Dave Bittner: [00:07:45] Right.
Patrick Wardle: [00:07:47] So the attacker probably realized this and therefore, you know, built some capabilities into his malware so he could perhaps only turn on the webcam when the user was not there, with the hope - from the attacker's point of view - that maybe he would capture, you know, the victim, you know, walking around their bedroom in their underwear or, you know, worse - less. So, you know, a tool like OverSight, which can alert you of this webcam activity, I think, is incredibly powerful. So, you know, I think it's wise for users to look, perhaps, into third-party security tools, especially free ones, that are able to detect malicious activities versus, you know, looking for just static signatures, because I'm sure there's other similar threats out there that traditional antivirus products may not detect.
Dave Bittner: [00:08:31] Yeah, I'm curious about, you know, how you registered the command and control server domains and started getting information. A couple of things come to mind about that. Did the malware - beyond sort of checking in with you, did it start trying to send you information, sending you pictures, sending you, you know, key-logged files, that sort of thing?
Patrick Wardle: [00:08:51] That's a good question. So first, the reason I was able to grab the backup domains is because they were available for registration.
Dave Bittner: [00:08:58] Right.
Patrick Wardle: [00:08:58] And the reason the malware then connected to them was because the primary command and control servers were offline - probably when Malwarebytes discovered the initial infection. I'm not sure if they worked with an ISP to shut it down. Anyways, end result - the primary command and control server was offline, so all the malware was trying to speak to the backup ones. So when I registered it, what the malware does - it just checks in and then asks for tasking. So the only thing it sends is version number of the malware, username and then the full name of the computer, which is often the user's full name. You know, I'm not going to lie. I was very tempted to task the malware, but, you know, that's a very gray area. And it's actually funny - when I handed over my information to law enforcement, that was the first question that they asked me. So I didn't interact with the victims - and the malware won't send out any sensitive information until it has been tasked.
Patrick Wardle: [00:09:51] So Fruitfly is interesting because it kind of has the capabilities that match what a nation-state piece of malware would have, but the victims are the ones that are normally targeted by, you know, cybercrime malware. But it had none of the features that cybercrime malware traditionally has. So that's why I'm fairly confident that, you know, its goal was just to spy on kind of everyday users, and there's probably just maybe an individual behind this insidious malware who, you know, seems to be rather perverse. But we are also starting to see Mac ransomware. We've had a few samples this year. And that's something that, unfortunately, is probably going to continue as a trend because it's such a financially incredible opportunity, essentially, for hackers. And, you know, if Mac users are falling prey to these kind of social engineering attacks to install malware, you know, hackers are going to continue to target them. So I think there'll be more information about this coming out in the next few months. And I'm optimistic that hopefully, we'll have some closure about who did it and perhaps their, you know, motives and answer some of the questions that remain open at this time.
Hyrum Anderson: [00:10:56] So I come from a machine learning background. I have a PhD in machine learning, and I love it. So what I'm about to say should not at all diminish, I think, the role of machine learning.
Dave Bittner: [00:11:06] That's Hyrum Anderson from Endgame.
Hyrum Anderson: [00:11:08] Machine learning has blind spots. It has weaknesses. If an adversary has access to your machine learning model, in some cases, those weaknesses can be very convenient to exploit. So we came to this research kind of with an aim to help harden and improve our machine learning models before motivated and sophisticated adversaries do it for us, right? So that's kind of the framework within this. Machine learning has blind spots. We like to find them first and use knowledge about them - you know, we were red teaming our own machine learning models in order to patch them and provide superior protection for our customers.
Dave Bittner: [00:11:48] So take me through some of the details of that. When you say machine learning has some blind spots, is that inherent to all machine learning? Is that just the way it works, or is it, you know, the way specific systems may be set up?
Hyrum Anderson: [00:12:00] No. In fact, only the most trivial problems don't have blind spots. I think it'd be fair to say that in all applications, all machine learning models have blind spots. A famous example of this is actually not in security, but in images, where a classifier is shown a picture of a bus. It knows that it's a bus, with 90% confidence. But then I can actually ask the model, what small pixel intensity modifications can I make to most confuse you? I can ask the model that question directly, and it will respond and tell me what pixels to change. This new image, which looks to your eye exactly like the previous image - now the machine learning model thinks it's an ostrich, with 90% confidence. And that is just something that machine learning models have in common. They are imperfect representations of the world, but they are useful for things like detecting images and also detecting malware.
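The pixel-nudging attack Anderson describes can be demonstrated on a toy linear scorer. Real attacks target deep networks and query their gradients, but the mechanics are the same: move every input a tiny amount in the direction that most changes the score. The weights, the input and the 0.2 step size below are all invented for illustration.

```python
# Toy version of the "bus to ostrich" trick against a linear scorer.

def sign(v):
    return (v > 0) - (v < 0)

def score(w, x):
    """Linear 'classifier': a positive score means class A ('bus')."""
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm(w, x, eps):
    """Shift every feature by at most eps in the direction that
    pushes the score down the hardest (a gradient-sign step)."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.3, -0.2, 0.5, -0.4, 0.1, 0.6, -0.3, 0.2, -0.5, 0.4]  # "model"
x = [0.5] * 10                                              # "bus" image
assert score(w, x) > 0            # confidently class A
x_adv = fgsm(w, x, eps=0.2)       # each "pixel" moves by only 0.2
assert score(w, x_adv) < 0        # now classified as the other class
```

The same small-per-feature, large-in-aggregate effect is what makes the adversarial image look unchanged to a human while flipping the model's verdict.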
Dave Bittner: [00:12:57] And so to sort of extend what you were talking about there with the imaging, can you ask the systems that are being used for malware, where are your blind spots?
Hyrum Anderson: [00:13:06] Indirectly. The research that I'm presenting at Black Hat is tackling a very ambitious problem and, frankly, is not nearly as successful in finding the blind spots as, you know, if I have, sort of, the source code to your machine learning model. The framework for information security is hard for a number of reasons. Number one is that if I change a pixel in an image, that image is still an image. But if I change a byte in a Windows executable file, there's a chance that that breaks either the format of the file or the functionality of the malware. So those modifications are not so simply done in, you know - especially machine learning for malware detection. The second point is that, often, an adversary doesn't have the source code - you know, doesn't know specifically your model, or it might not be one of those models you can ask directly. So in our setup, we are taking the most general approach. There's a black box. You can throw, arbitrarily, a sample at it and get an answer - malicious or benign. That's it. Then we pit an artificially intelligent agent - a reinforcement learning agent - against that black box to play a competitive game where the agent tries to learn, to discover what small modifications it can make that preserve the PE file format and preserve the malware's functionality but still bypass the machine learning model.
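Here is a deliberately simplified version of that black-box game. The detector is a fake, and instead of a trained reinforcement learning agent the "attacker" just samples random append-only mutations, which trivially preserve the original bytes (standing in for preserved PE format and functionality); Endgame's real setup learns a policy over format-preserving PE actions, which this sketch does not attempt.

```python
import random

def black_box(sample: bytes) -> bool:
    """Fake detector: flags samples dominated by high (> 0x7f) bytes.
    The only thing the attacker sees is this True/False verdict."""
    return sum(b > 0x7f for b in sample) / len(sample) > 0.5

# Append-only "mutations": the original bytes are never touched,
# so functionality is preserved by construction in this toy.
MUTATIONS = [
    lambda s: s + b"\x00" * 64,
    lambda s: s + bytes(random.randrange(0x20, 0x7f) for _ in range(64)),
]

def evade(sample: bytes, max_queries: int = 50):
    """Apply random mutations until the black box says benign."""
    for _ in range(max_queries):
        if not black_box(sample):
            return sample            # bypass found
        sample = random.choice(MUTATIONS)(sample)
    return None                      # gave up within the query budget

malware = bytes([0xC3] * 400)        # toy "malicious" payload
evaded = evade(malware)
assert evaded is not None and evaded.startswith(malware)
```

Even this random search finds a bypass against the toy detector; the point of the RL agent is to find such action sequences efficiently against a far less forgiving black box.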
Dave Bittner: [00:14:38] And so which of those two AIs has the harder job?
Hyrum Anderson: [00:14:42] Oh, by far - in our setup, by far, it is the job of the attacker that's harder. The attacker has almost no knowledge about what it's attacking, and its success rates are very, very slim. So in the image case - those kinds of attacks where the attacker knows everything, and with images it's sort of easy manipulations - I can bypass those models 90%, 95% of the time or more. In information security, in this most general setting for malware where I know nothing about the model, the bypass rate is more like 5% to 10%. And it's a very hard problem for the agent to learn. But in security, 5% is kind of a big deal.
Dave Bittner: [00:15:22] Yeah. Yeah, it absolutely is. And so as the defensive AI is having - is sort of being hammered, being pounded against by the attacking AI, how is the defensive AI doing? Is it adapting, as well?
Hyrum Anderson: [00:15:40] So during the attack, that AI is not adapting. But what we do to harden our models is that we play out this series of games where the attacker becomes, you know, relatively good at his job - where relatively good is, you know, getting to 5%, 10%. When the game has played out, this artificially intelligent agent, this reinforcement learning agent, can actually take a malware sample and know what modifications to make to it to bypass the defense, right? Let's freeze now. The game's over. I'm going to use this agent to generate a bunch of malware samples that are going to bypass it. And then I'm going to fold that experience, those new malware samples, back into the defense, and he's going to learn how to patch his own holes. And then when you play the game again, the defense becomes much stronger.
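The freeze-and-retrain loop can be sketched with a similar toy detector: play the game, keep a sample that got through, and fold it back into the model. In this toy the "model" is a single threshold on a byte-ratio feature, so retraining is just moving the boundary; a real defense would retrain on the evasive samples' full feature vectors, and this is only meant to show the shape of the iteration.

```python
def ratio(s: bytes) -> float:
    """Fraction of high (> 0x7f) bytes in the sample."""
    return sum(b > 0x7f for b in s) / len(s)

def detect(s: bytes, threshold: float) -> bool:
    return ratio(s) > threshold

def attack(s: bytes, threshold: float) -> bytes:
    """Append low-byte padding until the sample dips under the threshold."""
    while detect(s, threshold):
        s += b"\x00" * 64
    return s

threshold = 0.5
malware = bytes([0xC3] * 400)
for _ in range(3):
    evasive = attack(malware, threshold)   # the game plays out
    assert not detect(evasive, threshold)  # a bypass exists
    # "retrain": move the boundary just below the evasive sample
    threshold = ratio(evasive) * 0.9
    assert detect(evasive, threshold)      # that hole is now patched
```

In this toy the attacker can always dilute its way under the new boundary, so the loop never fully closes; the practical claim in the research is weaker and more useful - each round of folding evasive samples back in makes that class of bypass measurably harder.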
Dave Bittner: [00:16:32] And so then at that point, is it just sort of an iterative process where you just go round and round and round until you've got a really strong defensive system there?
Hyrum Anderson: [00:16:40] That is the hope, and that's the approach.
Dave Bittner: [00:16:42] From a practical point of view, are you seeing many attackers actually using artificial intelligence and machine learning?
Hyrum Anderson: [00:16:49] You know, I'm not a fantile (ph) guy. I don't think that - from what I have seen - that attackers are using this sophistication level. But certainly, they are going to know about it. Those that are especially sophisticated and motivated are going to know that this is possible. And part of the point of our research is to get ahead of the game. We're going to be releasing code that will allow friends and competitors alike to leverage this gameplay to strengthen and harden their own machine learning models based on these attacks.
Dave Bittner: [00:17:20] So take me through - what are some of the key takeaways from this research?
Hyrum Anderson: [00:17:25] I guess the number one is that machine learning is a useful tool for generalizing to never-before-seen malware. In our games, predominantly, the defense wins here. But as I said before, 5% is a big deal, and we'd like to patch those. So number two is that machine learning has those weaknesses. It has blind spots. It can hallucinate. And we would like to provide a consistent and realistic method for finding those blind spots. And number three, we'd like to open this up to the security community to help us improve, and all boats rise with the tide. So we'd like to release that and have researchers and collaborators help us to strengthen our machine learning defenses.
Zack Allen: [00:18:09] We originally did not work with each other.
Dave Bittner: [00:18:12] That's Zack Allen. He and Chaim Sanders are both from ZeroFOX.
Zack Allen: [00:18:16] I was at a company before this, and this company was deploying and building a web application firewall. And Chaim, being the kind of engineer that he is - he reached out to us and asked us how we were doing 'cause he works on web application firewalls a lot. And he asked us, you know, how are you verifying how secure it is? How are you testing it? - and things like that. And I pretty much said, that's a really, really good question. So because Chaim works on ModSecurity as one of the core developers, we started talking about a way to quantitatively measure how effective a web application firewall could be. So after a couple weeks of discussion, we just kind of met up and put our heads together and said, you know, I think this is something that everybody needs, especially given the plight of security engineers. When they go to places like Black Hat, they're presented with sales material. They're presented with, you know, Forrester Waves and Gartner Magic Quadrants. And they look great for managers, but when a security engineer then goes and gets the handbook on how this thing runs, they realize it kind of sucks. So what we wanted to do is level the playing field and give people a chance to measure the effectiveness in terms of logging, in terms of stopping attacks, in terms of configuration against any type of web application firewall. And that's where the Framework for Testing WAFs, or FTW, came to be.
Dave Bittner: [00:19:39] And so Chaim, what's your side of this story? How did this collaboration come to be, from your point of view?
Chaim Sanders: [00:19:45] Yeah, it was pretty interesting. I was toying around with this idea of how I can effectively test ModSecurity in conjunction with the Core Rule Set, which are both Open Source projects. And I was talking with Zack, and he essentially said, oh, well, we have the same problem, but maybe we can do it in such a way that it can scale more effectively. It can be more helpful for everyone involved. And I said, well, golly, Zack, that would be swell. And from there, we kind of just ran with it. We talked about design and architecture - how we would need to make it in order to scale up from just a single test platform to one that can test many different platforms effectively.
Dave Bittner: [00:20:28] Now, at the time, you all were working for separate companies. Was there any pushback from the higher-ups on this sort of collaboration?
Chaim Sanders: [00:20:36] Surprisingly, there was very limited pushback on my side. I was working primarily on Open Source projects at the time, and I think, Zack, you had an environment where they kind of fostered Open Source a little bit at the time.
Zack Allen: [00:20:50] Fastly was really good when it came to Open Source work, and they still are. And when you have a chance to contribute to, like, the greater security community, they also foster that. So it was a win for Fastly because we got to get regression testing from a security standpoint on our WAF, but it was also good for the community. And it's just one of those weird timing and luck things where everyone benefited and no one really got upset, which is kind of weird in this day and age.
Dave Bittner: [00:21:18] So help us understand here. What are we talking about when we're talking about regression testing?
Chaim Sanders: [00:21:23] Yeah, that's a great question. Essentially, the idea is that we need to take some sort of baseline, at minimum, of attacks that are well-known and well-understood and determine whether or not an application or a web application firewall will ever allow one of those attacks to bypass the protections that it offers. So to put this another way, we want to make sure that once we install a protection in a web application firewall, it's always working. Now, when you deploy this in an environment, that's really helpful, but it's also really helpful when you're the developer of the web application firewall. So it kind of has a two-pronged benefit - one where you can make sure that new rules and new additions to your firewall don't break things and one where you can make sure that it's actually working exactly how you expected it to.
Zack Allen: [00:22:11] Yeah. A good analogy for this is - let's just say you're adding something new to your car. You want to make sure it still drives after you put on new tires or you replace a spark plug. And especially when it comes to software, which is definitely more complicated than most cars, even bolting on something as small as a new spark plug can just make this piece of software fall apart, so to speak. So regression testing just makes sure that anytime a feature or a new attack or a new rule is added, the car is still moving forward; it turns left when I turn the steering wheel left, or anything along those lines.
Dave Bittner: [00:22:48] I see. So it's sort of - like doctors say, first, do no harm. It gives you a way to make sure that you haven't inadvertently messed up the firewall's baseline functionality.
Chaim Sanders: [00:22:58] Exactly. And one of the benefits of doing that is once you've established that there's some baseline functionality present, you can then start comparing that to other web application firewalls and determining whether or not that baseline still exists. So it may not be able to test every single feature, or it may not cover that currently, but it covers the core web application attacks that we see day in and day out - attacks over HTTP like SQL injection and cross-site scripting. And we're always adding new tests to kind of make sure that these are up-to-date and thorough. So currently, we have thousands of tests that we run.
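A baseline regression suite in the spirit Sanders describes can be as small as the sketch below. The `waf` function is a crude regex stand-in - not ModSecurity or the Core Rule Set - and the payloads are generic examples, but the shape is the point: a fixed corpus of well-understood attacks replayed after every rule change, with a benign request checked too so a change cannot "fix" things by blocking everything.

```python
import re

# Stand-in rules for three well-understood attack classes.
RULES = [
    re.compile(r"(?i)union\s+select"),   # SQL injection
    re.compile(r"(?i)<script\b"),        # cross-site scripting
    re.compile(r"\.\./"),                # path traversal
]

def waf(request: str) -> int:
    """Return an HTTP-ish status: 403 if any rule matches, else 200."""
    return 403 if any(r.search(request) for r in RULES) else 200

# The baseline corpus: attacks that must always stay blocked.
BASELINE = [
    "id=1 UNION SELECT password FROM users",
    "q=<script>alert(1)</script>",
    "file=../../etc/passwd",
]

def regression_pass() -> bool:
    """Every baseline attack blocked, and benign traffic still allowed."""
    return (all(waf(p) == 403 for p in BASELINE)
            and waf("q=hello world") == 200)
```

Run after every rule or configuration change, a check like `regression_pass()` is the "car still turns left" test from the analogy above.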
Dave Bittner: [00:23:33] And so, you know, we're in a constant arms race with the bad guys. Can they get a hold of this test and then, you know, just figure out ways to get around it?
Chaim Sanders: [00:23:40] Absolutely. In fact, that's encouraged because in our industry, there's kind of this notion that unless this is publicly disclosed and people are publicly kind of brought to the stake about a situation, that they're not going to fix it. And one of the main goals here is to make sure that everyone, at minimum, has this baseline for web application firewalls. You can sure add to it. In fact, we encourage you to. But we assume that developers of web application firewalls will want to know when there's a bypass. They won't want to hide that. They'll want to fix it as quickly as possible.
Zack Allen: [00:24:14] Raising the cost to the attacker's bottom line does surprisingly well when it comes to cyberdefense, so if we can raise that cost and get any web application firewall to at least adhere to this baseline, then I think everyone benefits.
Dave Bittner: [00:24:27] It's surprising to me that no one has done this before. Was this a novel effort, or had there been other attempts at this that maybe hadn't gained traction?
Chaim Sanders: [00:24:35] Well, so we don't necessarily know of attempts that are generalized. There are certainly attempts from each individual producer - and most of them are behind the scenes, I would assume - to test and provide some sort of functionality regression capability on a given web application firewall. But as far as a large-scale effort that can compare multiple different vendors, this is pretty difficult to do, and we were actually in an interesting spot to do it. Vendors inherently have some sort of bias towards themselves. And kind of as an open source project, we don't necessarily have that initial bias that might exist in that respect.
Dave Bittner: [00:25:16] And what's the feedback been from the web application firewall vendors?
Zack Allen: [00:25:20] So at least on Fastly's side, it's used within their pipeline every day to test for regressions. The tool's actually being presented at OWASP AppSec U.S. this year, and one portion of the talk is someone from Fastly who's still there talking about how it's in use. So that's been a pretty positive experience on their part. And we were actually working on this last night - part of our presentation for Black Hat Arsenal is how this is now being integrated fully into the ModSecurity development pipeline and how we can strategize about ways people can make a change to ModSecurity or the Core Rule Set, but they would also have to submit a corresponding FTW test before the merge button is pressed.
Chaim Sanders: [00:26:04] In terms of other vendors, we're still working to get some traction. There are some OWASP-type groups that have been kind of key in doing qualitative assessments in the past, and they're still working on kind of pushing quantitative aspects of their evaluation methodologies. And as a result, we're going to try and piggyback on their work, as they have close ties with many more vendors than we do. But in our initial tests, it's looking pretty good.
Dave Bittner: [00:26:32] So if someone wants to find out more, if they want to perhaps contribute, what's the best way for them to find out more about the project?
Zack Allen: [00:26:39] Sure. So if you just go on GitHub and type in framework for testing WAFs, the repo should be up there. The organization is CRS-support, so github.com/crs-support/ftw. They can also find it through Python's package repository - pip install ftw - and they can go and get documentation on that.
Chaim Sanders: [00:27:04] In addition to that, we'll be having a couple blog posts come out on the Core Rule Set blog discussing how to write tests and how to implement them within the OWASP Core Rule Set project. And I think that should kind of lead to a little bit more understanding and traction beyond just the docs.
Alberto Yepez: [00:27:30] Well, it's exciting times in cybersecurity, as you know. We continue to see all these breaches and compromises of business data around the world and governments. The investment area is very, very active right now.
Dave Bittner: [00:27:46] That's Alberto Yepez from Trident Cybersecurity. Our executive editor Peter Kilpe caught up with him right off the show floor.
Alberto Yepez: [00:27:53] We are tracking about 2,500 companies at all sorts of stages - about 450 from Israel alone. But the biggest trends that we see - you know, we all talk about the shortage of cybersecurity professionals and allies. We talk about the cost of integration as being something that the customers have to bear, and there's way too many point solutions. So we're focusing on and looking for solutions that are really pre-integrated, easy to consume, easy to deploy, preferably delivered either by cloud or through an MSSP - a managed security service provider - because the middle market becomes a tremendous opportunity for innovation: while they need the same defenses and the same type of sophisticated tools the large banks and critical infrastructure companies have, they also, because of regulations, have the same reporting requirements.
Alberto Yepez: [00:28:48] So in the trends, we see a move to cloud-based, cloud-native security companies and entrepreneurs, but also thinking more of not just the tip of the iceberg - the large customer with thousands of cybersecurity professionals - but, say, a retail customer with maybe 10, 15 people in IT, where one of them has part-time responsibility for security. So how do we automate? We bring automation and simplification so that the shortage of professionals can be addressed not only by training but, more importantly, by bringing automated tools - tools that can help corporations understand the threat and eventually defend.
Peter Kilpe: [00:29:33] Do you see a lot of startups and other companies aiming toward that bigger market versus the middle market?
Alberto Yepez: [00:29:39] You know, traditionally, if you see - the new innovations in the market has always focused on the large companies. Why? Because they're early adopters. They're willing to work with you, help you with a feature set. But eventually, when it gets to a point where there's a mature solution, you want to bring it down-market. And so yes, we see that the majority of early companies try to focus on that market. And as you bring maturity - talk about SIM, bring analytics and all that - I think some of the maturity goes to address middle-market solutions. A good example is AlienVault - we were just talking a little while ago - where it's a company that is really seeking to provide visibility for the average person that is probably the only person responsible for security in a small-medium business. But it gives them all the tools that, once they've identified something - what do I do? - there's a prescription. There's a way to share intelligence straight through the Open Threat Exchange and things like that. So I think those type of companies are beginning to innovate more on, how do I bring it on automation for the middle market, the small-medium business? But most companies, like in deception, in IoT security, really begin addressing the large companies - GE, the - all the large banks first - and then eventually bring it down-market.
Peter Kilpe: [00:31:00] We hear a lot about artificial intelligence and machine learning in various contexts. How do you see that playing into your investment strategy - sort of, you know, kinds of trends that you're seeing?
Alberto Yepez: [00:31:10] I would say all our companies have a component of machine learning and AI. You know, this is something that's been around for 20, 30 years, right? Now I guess it's more possible because of compute power, storage, cloud, network speed - now you're able to aggregate and really process a lot of data and then try to bring algorithms that are either learned by a machine or really using some predefined, predetermined mutations of viruses that you could actually predict. So I would say, without exception, every single one of our portfolio companies is using a component of big data analysis and fusion and applying different degrees of automation with the help of machine learning or artificial intelligence.
Peter Kilpe: [00:31:58] Last time we spoke - it was this time last year, really - you talked about the investors becoming a little bit more finicky in terms of the companies that they go after, the opportunities that they want to fund. Is that still the case? How is investing changing?
Alberto Yepez: [00:32:14] It's become harder because, you know, maybe last year we were tracking probably 1,500 companies - now there are 2,500. So every company sounds the same. And so trying to tease the signal from the noise, trying to understand who has the core IP - intellectual property - that could actually be differentiated becomes harder and harder. So I would say that venture investment has become a specialized sport. What do I mean by that? In the past, you had very large generalist funds that would invest in ad tech, clean tech, cybersecurity, enterprise technology. We are probably one of the first funds to invest exclusively in cybersecurity. In January, we closed a $300 million fund - one of the largest funds in the industry - that exclusively invests in cybersecurity companies. So we continue to look for a high barrier to entry, where whatever problem is being solved is something that is grounded by key requirements.
Alberto Yepez: [00:33:20] Innovation in cybersecurity comes primarily from trying to solve people's problems. It has to be grounded. Very little innovation in cybersecurity has happened in the lab - arguably PKI or some things that happened in the early days. But having grounded requirements means working with the chief information security officers who can tell you what problems you're trying to solve when there's no commercial solution. So when we become investors, we try to get keenly aware of the problems that companies are trying to resolve. We have a large group of advisors - many of them chief information security officers and serial entrepreneurs - who help us determine whether a company has the right intellectual property addressing a real problem. Then the next issue - you have the large market opportunity and the intellectual property, but how do you scale? How do you reach the broader market? That's where we come in. So once we see the large market opportunity and the high IP, we invest - we bring you reference customers, partners, executives, board members, channel partners and, eventually, help you scale to address the problems of many. And so yes, I think the problem has become more acute because the noise is higher. There are more companies. But the fundamentals haven't changed.
Alberto Yepez: [00:34:35] By and large, what we're seeing is an industry that is maturing in certain aspects by bringing automation, but all the new innovation is coming about because the threat actors - the criminals, the adversaries - are not staying still. They're looking for vulnerabilities and really trying to exploit them. New platforms are emerging, as we know - now we have the connected car. We hear talks about containers, cloud-native applications. So the attack surface continues to increase, and oftentimes we get attacked through legacy systems. And so we cannot forget about legacy. But nonetheless, we also need to figure out how we cover the new threat vectors. So I think, as an industry, we'll continue to thrive. I don't think we will see a solution in our lifetimes. It's something that will continue to evolve, and it's very exciting, thriving. You look at Black Hat this week - I think it's one of the largest crowds that we've seen, and it continues to amaze me, the level of international participation from corporations as well as very capable individuals who can show you that the trend is actually real and that companies need to pay attention to it.
Peter Kilpe: [00:35:48] Is there a lot of competition for the deals themselves from a company like yours?
Alberto Yepez: [00:35:53] There is, actually. It's interesting because the entrepreneurs have a choice. I've been a serial entrepreneur - I don't know if, when we last talked, I'd run three different cyber companies. And so money's equal. When you look at an investor, it's, what value are they going to bring in addition to the offer to fund your company? I think what most entrepreneurs look for, and what they're telling us, is, I want somebody who can partner for the long term. The real investors and real partners show their true colors when the going gets tough because nothing goes in a straight line. And, you know, if you're an adept entrepreneur, you move with the market. And you need to move fast, especially while you're a private company. And so the competition is pretty fierce. Most of the new funds come in and compete on price. But the serial entrepreneurs understand the value that is brought by the different groups that are, I would say, the leading investors in cybersecurity. There's a lot of competition. But I think you stand out by showing how you can shorten time to value and de-risk the execution.
Peter Kilpe: [00:37:02] Do you have any advice for those companies that are seeking investment? Obviously, you just gave a big piece of advice, you know, right there. Look for somebody who can bring value. But what do they need to do to stand out to an investment company? How do they know they're ready for investment like yours?
Alberto Yepez: [00:37:19] You know, imagine you're pitching me a deal. We can get really excited. But if there's not a customer out there who validates whatever we think it is - don't even come talk to any investor. It has to be grounded in requirements. And you have to have at least one or a couple of reference customers who have really spent the time to help you refine the problem you're trying to solve, define the features you need to have and, hopefully, test at least the prototype or the initial version. Eventually, when and if you have a generally available product, they can become your biggest champions, your biggest reference accounts, so that when somebody else calls and asks, does this company actually do what they say they do? - they say, absolutely. So the best time to come looking for money is when you have external validation beyond the entrepreneurial team, and people who can help you refine and define what segment of the market you're going to play in. Why are you different from many of the others? That's what we really focus on: differentiation and validation.
Peter Kilpe: [00:38:29] Is there anything that I haven't asked you that you'd like to share with our audience?
Alberto Yepez: [00:38:34] Well, you know, I'm really happy with the success of CyberWire. And I would like to encourage all my portfolio companies to continue to work the way you do because you do a great service to the community by keeping people informed in very short snippets of time, where - I pick up CyberWire every morning and find out what's going on in the industry. I encourage people to use it.
Peter Kilpe: [00:38:53] That's great. Well, thank you very much. We appreciate you listening in and reading us every day.
Alberto Yepez: [00:38:57] Thank you.
Peter Kilpe: [00:38:57] Thanks for talking to us.
Dave Bittner: [00:38:59] OK, so that last bit may have been a bit self-serving. But who are we to disagree? Our thanks to Patrick Wardle, Hyrum Anderson, Zack Allen, Chaim Sanders and Alberto Yepez for taking the time to speak with us.
Dave Bittner: [00:39:11] And thanks to Cylance for sponsoring this Special Edition. To learn more about all the ways Cylance can help protect what's valuable to you, visit cylance.com.
Dave Bittner: [00:39:20] The CyberWire podcast is produced by Pratt Street Media. Our editor is John Petrik, social media editor is Jennifer Eiben, technical editor is Chris Russell, executive editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening.
Copyright © 2020 CyberWire, Inc. All rights reserved. Transcripts are created by the CyberWire Editorial staff. Accuracy may vary. Transcripts can be updated or revised in the future. The authoritative record of this program is the audio record.