Research Saturday
Recent Episodes
ChatGPT grants malicious wishes?
Bar Block, Threat Intelligence Researcher at Deep Instinct, joins Dave to discuss their work on "ChatGPT and Malware - Making Your Malicious Wishes Come True." Deep Instinct goes into depth on just how dangerous ChatGPT can be in the wrong hands, and on how artificial intelligence is better at creating malware than at providing ways to detect it. The researchers explain the risks of the AI app being misused, noting that "Examples of malicious content created by the AI tool, such as phishing messages, information stealers, and encryption software, have all been shared online."
Files stolen from a sneaky SymStealer.
Ron Masas of Imperva discusses their work, "Google Chrome “SymStealer” Vulnerability: How to Protect Your Files from Being Stolen." By reviewing the ways browsers handle file systems, specifically searching for common vulnerabilities in how browsers process symlinks, the Imperva Red Team discovered that files dropped onto a file input are handled differently. Dubbed “SymStealer” and tracked as CVE-2022-40764, the vulnerability "allowed for the theft of sensitive files, such as crypto wallets and cloud provider credentials." As a result, over 2.5 billion users of Google Chrome and Chromium-based browsers were affected.
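The core issue the researchers describe is a handler that follows a symbolic link when reading a "dropped" file, so an innocuous-looking upload can leak the contents of whatever the link points to. A minimal sketch of that failure mode, in Python rather than browser code, with hypothetical handler names (not Imperva's actual code):

```python
import os
import pathlib
import tempfile

def read_uploaded_file(path):
    # Naive handler: open() follows symlinks, so a symlink dropped onto
    # the file input leaks the contents of whatever file it points to.
    with open(path, "rb") as f:
        return f.read()

def safe_read_uploaded_file(path):
    # Hardened handler: refuse to read through a symlink at all.
    if os.path.islink(path):
        raise ValueError("refusing to follow symlink")
    with open(path, "rb") as f:
        return f.read()

# Demo: a sensitive file, and a symlink with an innocuous-looking name.
tmp = tempfile.mkdtemp()
secret = pathlib.Path(tmp, "wallet.keys")
secret.write_text("SECRET")
link = pathlib.Path(tmp, "resume.zip")  # looks like a harmless upload
link.symlink_to(secret)

print(read_uploaded_file(link))  # leaks b'SECRET' via the symlink
```

The fix in a real file-handling path is analogous to `safe_read_uploaded_file`: treat a symlink as untrusted input rather than silently resolving it.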
New exploits are tricking Chrome.
Dor Zvi, Co-Founder and CEO of Red Access, joins Dave to discuss their work on "New Chrome Exploit Lets Attackers Completely Disable Browser Extensions." A recently patched exploit tricks Chrome browsers on all popular operating systems into not only giving attackers visibility into their targets’ browser extensions, but also the ability to disable all of those extensions. The research describes a bookmarklet exploit that lets threat actors selectively force-disable Chrome extensions through a handy graphical user interface, by making Chrome mistakenly identify the request as a legitimate one from the Chrome Web Store.
Andy Patel from WithSecure Labs joins Dave to discuss their study demonstrating how GPT-3 can be misused through malicious and creative prompt engineering. The research looks at how this technology, GPT-3 and GPT-3.5, can be used to trick users into scams. GPT-3 is a user-friendly tool that uses autoregressive language models to generate versatile natural language text from a small amount of input, a capability that will inevitably interest cybercriminals. The research examines possible misuses of the tool, such as phishing content, social opposition, social validation, style transfer, opinion transfer, prompt creation, and fake news.
Implementing and achieving security resilience.
Wendy Nather from Cisco sits down with Dave to discuss their work on "Cracking the Code to Security Resilience: Lessons from the Latest Cisco Security Outcomes Report." The report describes what security resilience is and how companies can achieve it. Wendy talks through some of the key findings of the report, which surveyed 4,751 active information security and privacy professionals from 26 countries, and highlights the top priorities for achieving security resilience. From there, the research explains which data-backed practices lead to outcomes that can be implemented in cybersecurity strategies.