At a glance.
- Winter Vivern exploits a mail service 0-day.
- Teaching AI to misbehave.
- CISO challenges, across sectors.
- Ransomware's effect on healthcare downtime.
- Two reports on the state of cybersecurity in the financial services sector.
- Possible connections between Hamas and Quds Force.
- Ukrainian cyber authorities report a rise in privateering Smokeloader attacks.
- Russian hacktivist auxiliaries strike Czech targets.
Winter Vivern exploits a mail service 0-day.
ESET warns that the Winter Vivern threat actor has been exploiting a cross-site scripting zero-day vulnerability (CVE-2023-5631) in the Roundcube Webmail server since October 11th, 2023. Roundcube released patches for the flaw on October 16th. Winter Vivern used the flaw to conduct cyberespionage operations against European government entities and a think tank.
The researchers don’t attribute Winter Vivern to any particular nation-state, but they note that it may be tied to the Belarus-aligned threat actor MoustachedBouncer. ESET concludes, “Despite the low sophistication of the group’s toolset, it is a threat to governments in Europe because of its persistence, very regular running of phishing campaigns, and because a significant number of internet-facing applications are not regularly updated although they are known to contain vulnerabilities.”
Teaching AI to misbehave.
Researchers at IBM X-Force Red outline ways in which legitimate generative AI tools such as ChatGPT can be tricked into producing malicious output, including phishing email templates: “With only five simple prompts we were able to trick a generative AI model to develop highly convincing phishing emails in just five minutes....It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure set-up. So, attackers can potentially save nearly two days of work by using generative AI models. And the AI-generated phish was so convincing that it nearly beat the one crafted by experienced social engineers, but the fact that it’s even that on par, is an important development.”
The researchers tested the AI-generated phishing lure against a template crafted by humans and found that the human-made template was slightly more effective at deceiving recipients. For more on the experiment, see CyberWire Pro.