AI offers criminals too much to remain on the sidelines for long.
Artificial intelligence and the near future of cybercrime.
Tuesday’s Public Sector Ignite conference, organized by Palo Alto Networks, was full of interesting panels and stories. The event began with a panel on “Building Resilience: Strategies for Securing the Future of Public Sector Cybersecurity,” which covered topics ranging from collaboration between government and private-sector businesses to information sharing among nations.
Palo Alto Networks’ Wendi Whitmore predicts generative AI will enable social engineering and phishing attacks.
When asked about the impact of AI on the cybersecurity sector, Wendi Whitmore, Senior Vice President of Unit 42, explained that the real challenge AI brings is a lower barrier to entry for cybercriminals: they can use AI to develop sophisticated malware and tools that would otherwise be beyond their capabilities. “What I am most concerned about in the immediate future is what I think is going to be a much lower barrier to entry and an absolute acceleration of the human side of these attacks, so the social engineering pieces,” Whitmore said.

AI allows lower-skilled threat actors to create tools in significantly less time, and it can automate many of the tasks required to pull off a cyber heist. Whitmore added that AI could let threat actors who don’t speak their targets’ language craft grammatically and syntactically correct emails and approaches for phishing or vishing schemes. These AI-enabled impostures can even extend to emulating employees’ conversational tone. Such attacks haven’t yet come to fruition: Whitmore noted that her team is not yet responding to them, but they’re likely not far off.