At a glance.
- Akira and Rapture: two new threat groups on the ransomware scene.
- How to keep your data off of ChatGPT.
Akira and Rapture: two new threat groups on the ransomware scene.
While more established ransomware gangs like Cl0p and LockBit have been grabbing headlines lately, there are two new kids on the block that are doing their fair share of damage. Avertium offers a closer look at Akira and Rapture, two lesser-known ransomware groups that have been active in the past month.
Three recent attacks have been attributed to Akira, which infiltrates victims’ networks through tactics including phishing emails, exploitation of software vulnerabilities, and brute-force attacks against remote desktop protocol (RDP), and usually targets video files for encryption. The gang typically hits small-to-medium-sized businesses, with ransom demands ranging from $50,000 to $500,000. Akira’s most recent victim was BridgeValley Community and Technical College, a small higher education institution in the US state of West Virginia. The gang has also attacked a UK architecture firm, a US IT services company, and an EU-based pharmaceutical company.
Rapture is more difficult to analyze than Akira, due in part to its use of Themida, a commercial packer that obfuscates the ransomware’s code. The gang, first observed in March of this year, favors infiltration tactics that leave a minimal footprint, such as memory-based payloads and short infection chains. Its modus operandi resembles that of the Paradise and Zeppelin ransomware gangs, though researchers are confident Rapture isn’t a Zeppelin variant.
How to keep your data off of ChatGPT.
Governments across the globe are working to determine how to regulate the use of artificial intelligence-powered platforms like ChatGPT. While the chatbot has exploded in popularity in recent months, experts warn that its opaque training processes could pose a risk to anyone with data on the web. Though ChatGPT developer OpenAI has been secretive about how it trains the AI, it’s common knowledge that the company uses public internet data to fuel the large language model. And OpenAI admits that ChatGPT can provide inaccurate information about a subject, which could be damaging to private individuals.

In response to the public’s concerns, OpenAI has created new tools that give users more control over their data, and Wired explains how to activate them. A Personal Data Removal Request form, now available in Europe and Japan, allows individuals to ask that their data be deleted from OpenAI’s systems. However, the form appears to apply only to data that might be included in the chatbot’s answers to users (not its training data), and the burden is on the subject to provide evidence that the platform has mentioned them. There’s also an option in ChatGPT’s settings that allows users to delete their chat history, which OpenAI says ensures that this data won’t be used for training purposes.

Some experts say updates like the data removal request form are a step in the right direction, but not quite enough to truly tackle the problem. Daniel Leufer, a senior policy analyst at digital rights nonprofit Access Now, says, “They still have done nothing to address the more complex, systemic issue of how people’s data was used to train these models, and I expect that this is not an issue that’s just going to go away, especially with the creation of the [European Data Protection Board] task force on ChatGPT.”