The Zero Trust security model asserts that organizations should not trust anything within their perimeters and instead must inspect all traffic and verify everything connecting to their systems before granting access. While Zero Trust is generating a lot of buzz in the cyber world, it’s often hard to determine the implications of this security model. In this episode of CyberWire-X, guests will discuss the origins of the model, cut through the hype, and discuss what you really need to know to design, implement, and monitor an effective Zero Trust approach. John Kindervag of ON2IT Cybersecurity, also known as the "Creator of Zero Trust," shares his insights with the CyberWire's Rick Howard, and Tom Clavel of sponsor ExtraHop joins Kapil Raina from their partner CrowdStrike to offer their thoughts to the CyberWire's Dave Bittner.
Good security gets out of the way of users while getting in the way of adversaries. Passwords fail on both counts. Users feel the pain of adhering to complex password policies. Adversaries simply copy, break, or brute-force their way in. Why, then, have we spent decades with passwords as the primary factor for authentication? From the very first theft of clear-text passwords to the very latest bypass of a second factor, time and again improvements in defenses are met with improved attacks. The industry needs to trust passwordless authentication. What holds us back from getting rid of passwords? Trust. In this episode of CyberWire-X, guests will discuss a framework of technical controls to ensure only trusted sessions authenticate, regardless of faults or failures in any one factor. We will share a path forward for increasing trust in passwordless authentication. Nikk Gilbert, CISO of Cherokee Nation Businesses, and retired CSO Gary McAlum share their insights with Rick Howard, and Wolfgang Goerlich, Advisory CISO of Duo Security at Cisco, offers his thoughts on behalf of sponsor Duo Security with Dave Bittner.
The proliferation of data continues to outstrip our ability to manage and secure it. The gap is growing and alarming, especially given the explosion of non-traditional smart devices generating, storing, and sharing information. As edge computing grows, more devices are generating and transmitting data than there are human beings walking the planet. High-speed generation of data is here to stay. Are we equipped as people, as organizations, and as a global community to handle all this information? Current evidence suggests not. The International Data Corporation (IDC) predicted in its study, Data Age 2025, that enterprises will need to rely on machine learning, automation, and machine-to-machine technologies to stay ahead of the information tsunami, while efficiently determining and iterating on high-value data from the source in order to drive sound business decisions. That sounds reasonable, but many well-known names in the industry are trying - and failing - to solve this problem. The struggle lies in the pivot from "big data" to "fast data": the ability to extract meaningful, actionable intelligence from a sea of information, and to do it quickly. Most of the solutions available are either prohibitively expensive, not scalable, or both. In this episode of CyberWire-X, guests will discuss present and future threats posed by an unmanageable data avalanche, as well as emerging technologies that may lead public and private sector efforts through the developing crisis. Don Welch of Penn State University and Steve Winterfeld of Akamai share their insights with Rick Howard, and Egon Rinderer from sponsor Tanium offers his thoughts with Dave Bittner.
The SolarWinds Orion SUNBURST exploit forced organizations to determine whether, and to what extent, they’d been compromised. It’s not enough to eject the intruders and their malware from the network. Affected organizations also need to know what systems and data were breached, and for how long. The adversary behind SUNBURST is advanced, quietly breaching the perimeter and moving freely to access, steal, or destroy business-critical data, and to disrupt operations. Ryan Olson of Palo Alto Networks' Unit 42 and Bill Yurek of Inspired Hacking Solutions join us to share their expertise on the subject, and Matt Cauthorn from our sponsor ExtraHop closes out the show, discussing the challenges of detecting such advanced threats and sharing insights from behavioral analysis on what this new breed of threat actor is doing inside our networks.
For 20 years, the cybersecurity practitioner’s go-to move when confronted with a new risk or compliance requirement has been to install a technical tool somewhere in the security stack to cover it. Over time, the number of tools that the infosec team has to manage has steadily grown. With the advent of bring-your-own-device policies in the workplace, CIOs choosing SaaS applications to do work that has traditionally been handled in the data center, and organizations rushing to deploy their services into hybrid cloud environments, the number of individual data islands where company material information is routinely stored, and which must be covered by the security stack, has increased. The complexity of this situation is immense. Two strategies have emerged to address the problem. The first is to continue down the path of installing more technical tools on each data island to cover the risk, with the infosec team manually processing the telemetry from all the security devices using bigger teams and helper automation tools like SOAR platforms and SIEM databases. The second is to choose a security vendor's platform that performs most of the security tasks across all the data islands, but that makes the organization reliant on a single point of failure. Joining Rick Howard from the CyberWire's Hash Table group of experts to consider the matter are Mike Higgins from Haven Health and Greg Notch from the National Hockey League, and later in the show, Rick speaks with Lior Div of Cybereason, who gives his point of view on this debate.