At a glance.
- Apple’s new child protection tech raises privacy questions.
- Passive exploitation of messaging app bugs.
- FTC warns of unemployment insurance smishing.
- SharePoint credential phishing.
Apple’s new child protection tech raises privacy questions.
Apple has announced it’s launching features aimed at preventing the distribution of Child Sexual Abuse Material (CSAM) on Apple platforms. As TechCrunch explains, the Messages app will employ on-device machine learning technology to detect if children receive sexually explicit images and warn children and parents accordingly. The feature’s goal is to help children make more informed decisions, potentially protecting them from child predators and preventing them from sharing self-generated CSAM, or “nudes.”
The tool relies on algorithms similar to the machine learning mechanisms employed for object and scene identification in Apple Photos, 9to5Mac explains. This new implementation of the tech has been praised by child protection groups, but privacy advocates are concerned about infringement of user privacy. TechCrunch describes how the technology employed, called NeuralHash, uses private set intersection to find a match without revealing the actual image, allowing Apple to detect explicit content without reading the actual messages themselves or passing the info on to Apple’s servers. However, privacy experts caution that the algorithms could lead to false positives, and such technology could be employed by governments to detect communications from political dissenters or other undesirables.
The debate highlights the friction that can arise when security (or law enforcement, or even public policy) might collide with privacy. As the Washington Post reports, the digital rights group Electronic Frontier Foundation (EFF) has expressed concern about the potential abuse of the tech. “It’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children,” EFF stated. “As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.” Johns Hopkins cryptography professor and security expert Matthew Green weighed in on Twitter, “In [Apple’s] (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content. That’s the message they’re sending to governments, competing services, China, you.”
Paul Bischoff, privacy advocate, Comparitech, notes that the move has been coming for several months:
“Apple hinted that it was scanning iCloud images for child abuse content some months ago, so the announcement that they're now scanning users' phones doesn't come as a surprise. Although there are privacy implications, I think this is an approach that balances individual privacy and child safety. The important thing is that this scanning technology is strictly limited in scope to protecting children and not used to scan users' phones for other photos. If authorities are searching for someone who posted a specific photo on social media, for example, Apple could conceivably scan all iPhone users' photos for that specific image.
"The hashing system allows Apple to scan a user’s device for any images matching those in a database of known child abuse materials. It can do this without actually viewing or storing the user's photos, which maintains their privacy except when a violating photo is found on the device. The hashing process takes a photo and encrypts it to create a unique string of numbers and digits, called a hash. Apple has hashed all the photos in the law enforcement child abuse database. On users' iPhones and iPads, that same hashing process is applied to photos stored on the device. If any of the resulting hashes match, then Apple knows the device contains child pornography.”
Chris Hauk, consumer privacy champion, Pixel Privacy, is ambivalent about the feature because of the privacy threats that may accompany child protection efforts:
“While I am all for clamping down on child abuse and child pornography, I do have privacy concerns about the use of the technology. A machine learning system such as this could crank out false positives, leading to unwarranted issues for innocent citizens. Such technology could be abused if placed in government hands, leading to its use to detect images containing other types of content, such as photos taken at demonstrations and other types of gatherings. This could lead to the government clamping down on users' freedom of expression, and to its use to suppress 'unapproved' opinions and activism.”
Messaging app bugs allow for passive exploitation.
While a great many successful attacks hinge on the victim's being tricked into, say, clicking on a fraudulent link or downloading a malicious attachment, Wired discusses “interaction-less” vulnerabilities that allow hackers to spy on users’ messaging app conversations without any participation from the user at all. Ever since the 2019 discovery of a FaceTime vulnerability allowing hackers to access an iPhone’s microphone and camera even before the user answered a call, Natalie Silvanovich, a researcher in Google's Project Zero bug-hunting team, has been investigating these passive bugs. Her research has led to the detection of similar issues in messaging platforms Signal, Google Duo, Facebook Messenger, JioChat, and Viettel Mocha. Silvanovich explains, “A reason a lot of these bugs happened is, people who designed these systems didn’t think about the promises they were making in terms of when audio and video are actually being transmitted and verify that they were being kept.” While all of the bugs she’s found have been reported and patched, her discoveries shed light on what these platforms can do to protect against such flaws in the future.
Unemployment insurance phishing operation.
The US Federal Trade Commission (FTC) is warning of a phishing scheme that aims to steal personal information and unemployment benefits. Targets receive a text message urging them to log into their unemployment insurance benefits accounts to modify or reactivate their claims. The message's malicious link leads to a fake but convincing imitation of a state workforce agency site, where victims are asked to enter login credentials and personal information, which the scammers can then harvest to file fraudulent benefits claims or commit other forms of identity theft. The FTC points out that state agencies never request personal data via text message, and that suspicious messages should be reported to the National Center for Disaster Fraud.
Erich Kron, security awareness advocate at KnowBe4, observes that social engineering becomes more dangerous when people are under stress:
“As we continue to work our way through the pandemic and associated issues, unemployment insurance has become more and more important to people unable to work when jobs that match their skills are not available. With the recent rise in cases, due to the Delta variant and other factors, stress levels continue to rise for people impacted. This makes them prime candidates for attacks such as this, which threaten their only source of income.
"Phishing emails, text messages and even phone calls rely on fear and anxiety to help people make poor decisions or miss otherwise obvious signs of a scam. When people are already in a strong emotional state, it makes these attacks that much more effective.
"To counter of these attacks, organizations need to ensure their employees are trained on the techniques behind these attacks, and that they are able to quickly spot and report them. In addition, people should hover over links before clicking anything. If there is any doubt at all, they should go directly to their unemployment insurance portal to look for messages related to their claims.”
Purandar Das, co-founder and the chief security evangelist at Sotero, also notes the particular vulnerability of the targets:
“This migration of the phishing platforms to mobile phones and text messages is concerning. It is a replay and logical next attack vehicle. What is concerning about this is the potential target audience. As dangerous as phishing emails are, text messages have the potential to be much more so. The messages can reach a much larger audience, especially vulnerable segments such as the elderly and younger people. Each group has its own weaknesses related to mobile phones. A text message is capable of generating an immediate moment of panic, making targets provide valuable information. I believe a concerted campaign of education and awareness is critical to getting ahead of this new attack vector.”
SharePoint credential phishing continues at a high rate.
Bolster notes, as should surprise no one, that "Microsoft" continues to be commonly named in the phishbait that surrounds credential theft campaigns. The company's name often appears in the URL itself, in an attempt to hook even relatively wary users. Shashi Prakash, CTO of Bolster, commented on his company's findings:
“Microsoft is one of the most widely used brands for phishing campaigns. In the last 30 days, we have discovered more than 21,000 fake phishing sites using Microsoft products or logos. Almost 8,000 of them have 'Microsoft' in the URL to try and give the URL more perceived legitimacy. This example includes the term “secureserver[.]net” in the URL and uses a fake Excel login page. The data also shows that this URL has hosted 28 different phishing sites, and the IP address has been used for 38 other phishing sites. To counter these attacks, companies create blocklists, but that method is outdated in today’s fast moving Internet age. The best way to nullify these types of attacks is to just take them down, and we do see more companies choosing that route because it is more effective and permanent.”
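The brand-in-URL pattern Prakash describes can be illustrated with a simple detection heuristic. This sketch is hypothetical and is not Bolster's method; the list of legitimate domains is an assumption for illustration, and a production system would need a far more complete allowlist.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains where the brand name legitimately appears.
LEGITIMATE_DOMAINS = {"microsoft.com", "live.com", "office.com"}
BRAND = "microsoft"

def looks_like_brand_phish(url: str) -> bool:
    """Flag URLs that mention the brand outside its own registered domains."""
    parsed = urlparse(url.lower())
    host = parsed.hostname or ""
    # The exact domain, or any subdomain of it, is treated as legitimate.
    for legit in LEGITIMATE_DOMAINS:
        if host == legit or host.endswith("." + legit):
            return False
    # A brand name in the host or path of an unrelated domain is the
    # "perceived legitimacy" trick the quote describes.
    return BRAND in host or BRAND in parsed.path

print(looks_like_brand_phish("https://login.microsoft.com/"))  # False
print(looks_like_brand_phish("https://secureserver.example/microsoft-login"))  # True
```

As the quote notes, heuristics and blocklists like this are reactive; they flag candidates for review or takedown rather than providing lasting protection on their own.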