One of the more interesting breakout sessions at the Billington CyberSecurity Summit dealt with the "Latest Threats in Cybersecurity: Ransomware, Blockchain, the Insider Threat, Mobile, etc." The participants engaged in a vigorous, often contrarian exchange with their audience.
Eric Green (Security Strategist, Cyber adAPT) chaired the session. The panelists were Andy Zembower (Vice President, Cyber Security Products, Cyber and Electronic Warfare Systems, General Dynamics Mission Systems), Steven Grossman (Vice President, Program Management, Bay Dynamics), Neal Ziring (Technical Director, Information Assurance Directorate, National Security Agency), and Robert Bigman (President, 2BSecure). The panelists each described something they found particularly troubling, and then went on to address common errors and misconceptions.
NSA's Ziring expressed a concern many at the Summit shared: he's worried about data integrity attacks. The IoT is particularly worrisome in this regard because of the potential to pivot from IoT devices to other places in the network. He also sees the rise of short-lived, memory-resident malware as troublesome. (Some call this "ghost malware," a usage Ziring finds regrettable and unhelpful.) This sort of malware is proving difficult to detect.
Bay Dynamics' Grossman saw the continued lowering of already low barriers to entry in the criminal underground as a disturbing trend.
2BSecure's Bigman pointed to Cerber as an example of industrialized malware that changes its signatures too rapidly to permit successful detection by legacy signature-based systems. He sees another disturbing trend in the MedSec research into St. Jude Medical devices, which he characterized as a harbinger of malware designed "for the purpose of ruining your stock price."
Zembower noted that pervasive connectivity and the trend toward bring-your-own-device workplaces pose risks of their own. He thinks it clear that many people abandon good work habits when they go home.
The panel took questions and digressed into some lively observations on the current scene. Bigman said that there's no evidence that training users helps security at all. So why, he asked, do we bother? Training and certifying your admins, he thought, is a different matter, and worth doing. But users? "Poor Evelyn in HR, opening resumes? There's nothing much you can do for her" in the way of training.
(Ziring interjected, speaking of admins, that people reading emails on their admin accounts was a prime example of a Bad Thing.)
Some in the audience wanted confirmation that a hard line should be taken toward users. Why not, one asked, simply fire the person who clicks on the malicious link or attachment? To encourage the others? Bigman and Ziring dismissed this approach. As Bigman put it, you can't fire the person who clicks: "it's the job of 40 to 50% of the people in your company to click." They have to receive documents, open resumes, and so on. You need to design security that enables that work, and not imagine you can punish your way out of security problems. Ziring agreed that the "essence of the solution isn't to fire your best people," but to keep them out of trouble, and to recognize that you'll never reduce human error to zero. Bigman pointed out that not all attacks require user error to succeed; consider SQL injection, for example.
Grossman wanted the audience to understand and work to overcome a cultural issue. "CISOs," Grossman said, "need to be able to speak the language of risk." In Ziring's version this advice became, "help the mission owner understand the issues in terms of mission risk." If you can do that, they'll be more receptive to deploying countermeasures that reduce mission risk. They'll be more responsive to mission issues than they will be to compliance with security regulations they'll inevitably perceive as arbitrary or inapplicable.
So, in sum, don't pick on Evelyn. She's got a job to do, and you're not helping if you make that job impossible.