At a glance.
- Pareto Phone collapses after data breach.
- US Supreme Court considers government involvement in social media content moderation.
- Sensitive Irish National Police data found unsecured on the web.
- New guidance from CISA helps SMBs protect the supply chain.
- Nonprofit awarded millions to support cybersecurity education.
- European Commission ad campaign promotes proposed content scanning laws.
- New findings underscore chronic abuse of Section 702 data.
- Proposed CFPB banking rules have privacy and security implications.
- FCC proposes a return to net neutrality.
- US states sue Meta for alleged abuses targeting children.
Pareto Phone collapses after data breach.
On the heels of a data breach exposing charity donor information, Australian telemarketing firm Pareto Phone has shut its doors. ABC reports that last Thursday Pareto staffers were informed that as of Monday they were no longer employed, and the company’s website, phone number, and Google listing have been deactivated. Less than two months ago, Pareto Phone disclosed a breach that had compromised the data of tens of thousands of individuals who had donated to charities like the Fred Hollows Foundation and the Cancer Council. Making matters worse, the stolen info – which also included highly sensitive employee data like police checks, child support documents, and tax file numbers – was released on the dark web.
The Office of the Australian Information Commissioner is investigating, and if Pareto Phone is found to have violated privacy laws, the company could face a fine of up to $50 million. The fact that some of the exposed data were at least ten years old indicates that Pareto Phone was holding on to data that, according to privacy regulations, should have been destroyed long ago. An ex-employee told ABC, "The client would send us millions of rows of data. I don't ever recall that data being removed from our system." The former employee, who asked to remain anonymous, also said Pareto Phone failed to reach out to the staffers affected by the breach, adding insult to injury for the now unemployed workers.
US Supreme Court considers government involvement in social media content moderation.
The US Supreme Court on Friday paused a lower court ruling that restricted the government’s contacts with social media companies over content it considers misinformation. The New York Times reports that the Court has also agreed to hear the Biden administration’s appeal of the ruling, reopening a case that asks where content moderation ends and government censorship begins. Justices Samuel Alito Jr., Clarence Thomas, and Neil Gorsuch dissented from the pause, with Alito writing, “Government censorship of private speech is antithetical to our democratic form of government, and therefore today’s decision is highly disturbing.”
Just last month the US Court of Appeals for the Fifth Circuit determined that the federal government had violated freedom of speech by pressuring online platforms to remove posts about COVID-19 and political content considered misinformation. The plaintiffs claimed, “The government’s incessant demands to platforms were conducted against the backdrop of a steady drumbeat of threats of adverse legal consequences from the White House, senior federal officials, members of Congress and key congressional staffers — made over a period of at least five years.” Solicitor General Elizabeth Prelogar voiced her disagreement with the panel’s decision, writing, “It is undisputed that the content-moderation decisions at issue in this case were made by private social media companies, such as Facebook and YouTube.”
Sensitive Irish National Police data found unsecured on the web.
Cybersecurity researcher Jeremiah Fowler has discovered an unprotected database containing the private data of thousands of Irish motorists collected on behalf of the Garda Síochána, the Irish National Police. As vpnMentor explains, the data were collected by various towing and storage companies working as contractors for the police to support vehicle detention. The 521,043 exposed documents amount to 271.8 GB of data, including notices of automobile seizure, destruction notices, copies of identification documents, and certificates of vehicle registration, many of which were marked confidential. Full debit card numbers could be seen, which Fowler explains could “lead to unauthorised fraudulent charges.”
It’s unclear how long the data were exposed, but the info dates back as far as 2017, and Fowler estimates that approximately 150,000 individuals are impacted. Fowler disclosed the issue to the Irish police as soon as he discovered it, and the database was secured later that day. The Irish Independent reports that the Gardaí say they are not responsible for the breach, attributing it instead to the towing companies’ IT service provider. A garda spokesperson stated, “Under An Garda Síochána’s contract with individual towing companies, there are clear obligations on individual towing companies to protect any information supplied to them by An Garda Síochána including personal data.” The exposure was reportedly the result of a software error at the IT services company used by the towing companies involved.
The IT services company explained that a mistake was made when applying a new release of software for the data service provided to the firms, and that when notified of the issue the company responded in just over an hour. However, the Data Protection Commissioner (DPC) says there’s some question as to who is considered the data controller in this case, and has opened an investigation to determine who is ultimately responsible for the breach.
New guidance from CISA helps SMBs protect the supply chain.
The US Cybersecurity and Infrastructure Security Agency (CISA) has issued a new resource guide to provide small and medium-sized businesses in the information and communications technology (ICT) industry with advice on minimizing supply chain risk. Titled “Empowering Small and Medium-Sized Businesses (SMB): A Resource Guide for Developing a Resilient Supply Chain Risk Management Plan,” the document was developed by the ICT Supply Chain Risk Management (SCRM) Task Force, for which CISA serves as one of three chairs.
As CISA explains, SMBs often lack resources and expertise to tackle the complex and costly needs of SCRM. Mona Harrington, CISA Assistant Director for the National Risk Management Center, stated, "In acknowledging the resource challenges faced by small and medium-sized businesses amidst today's complex supply chain risks, we're committed to offering vital support. Our unique qualifications, along with valuable partner collaboration in crafting this guide, underscore our dedication to these businesses' role in enhancing ICT supply chain resilience."
Nonprofit awarded millions to support cybersecurity education.
This week CISA also awarded the nonprofit cybersecurity workforce development organization CYBER.ORG $6.8 million in funding through the Cybersecurity Education and Training Assistance Program. The financial support is intended to help CYBER.ORG in its work promoting cybersecurity literacy and career exploration opportunities for elementary and secondary level students.
CISA hopes such advocacy will help address the nation’s cybersecurity workforce shortage, which currently stands at more than 660,000 unfilled positions. CYBER.ORG’s enrollees include over 30,000 US teachers, allowing its content platform to reach millions of students across the country.
CISA Director Jen Easterly said, “We are proud and excited to continue our work with CYBER.ORG, who are great partners in our efforts to accelerate essential cybersecurity learning and training. Their work in K-12 education will play a vital role in creating excitement in our youth to pursue a future career in cybersecurity.”
European Commission ad campaign promotes proposed content scanning laws.
An Amsterdam-based PhD researcher has unearthed a suspicious social media advertising campaign that appears to be an attempt by the European Commission to influence users’ opinions about government scanning of digital content. The ads, paid for by the Commission, appeared on social media platform X (the artist formerly known as Twitter) and included statistics from a survey on child sexual abuse and online privacy that indicate Europeans support such scanning, but researcher Danny Mekić says the findings were gathered by misleading the participants.
The ad campaign also apparently violated X’s terms of service by targeting users with particular religious beliefs and political orientations. What’s more, the campaign was launched just one day after the EU Council failed to secure sufficient support for the proposed scanning legislation, known as Chat Control or CSA Regulation (CSAR), and the ads were displayed in just the EU countries that rejected the proposal.
CSAR would require digital platforms like Signal, WhatsApp, and other messaging apps to detect and report any trace of child sexual abuse material on their systems and in their users’ private chats, but privacy advocates oppose the legislation, saying it essentially amounts to mass government surveillance. Mekić says, “I think it is fair to say that this was an attempt to influence public opinion in countries critical of the indiscriminate scanning of all digital communications of all EU citizens and to put pressure on the negotiators of these countries to agree to the legislation. If the European Commission, a significant institution in the EU, can engage in targeted disinformation campaigns, it sets a dangerous precedent.”
EU lawmaker Sophie in ’t Veld agreed, telling Wired, “There’s an inexplicable obsession with this file [CSAR] in the Commission. I don’t know where that comes from. Why are they doing [the campaign] while the legislative process is still ongoing?” The findings would support allegations that wealthy AI companies and advocacy groups have been putting money behind a campaign in support of CSAR. While Commissioner Ylva Johansson has not yet commented on Mekić’s findings, in the past she has asserted that the Commission has committed no wrongdoing when it comes to bolstering support for CSAR legislation.
New findings underscore chronic abuse of Section 702 data.
Wired reports that there’s been a new development in the ongoing debate over whether a powerful US intelligence surveillance tool should be renewed. Known as Section 702, the tool allows government intelligence agencies access to foreign communications, and is set to expire at the end of the year. While intelligence officials consider Section 702 an indispensable tool to uphold national security, privacy advocates say Section 702 has been abused to allow unlawful surveillance of US citizens.
While Section 702 was found to have targeted over 246,000 “non-US persons” last year, the tool allows the National Security Agency (NSA), intentionally or not, to gather communications belonging to Americans even when they are not directly involved in an investigation. Last month the independent government privacy watchdog, the Privacy and Civil Liberties Oversight Board (PCLOB), published a report revealing several new noncompliant uses of data collected in Section 702’s name.
Specifically, NSA staffers were found to be using the data to track individuals the report describes as “love interests,” a practice frequent enough to have earned the internal designation “LOVEINT.” PCLOB’s report says that as recently as 2022 an NSA analyst conducted illegal searches of Section 702 data to gather info on two individuals they'd encountered “through an online dating service.” Of course, the fact remains that even aside from the alleged stalking, the tool’s massive access to citizen data makes it a source of much debate.
US Senator Chuck Grassley, a Republican from Iowa, stated, “The US government’s incredible surveillance powers are intended to keep Americans safe from global threats, but we’ve seen time and again how officials have misused this authority at the expense of Americans’ civil liberties.” While supporters of Section 702 say oversight can keep such abuses in check, the recent discoveries, as well as historical records like a 2013 letter from the NSA’s then-top internal watchdog George Ellard, show that misuse of the data has long flown under the radar.
Proposed CFPB banking rules have privacy and security implications.
On October 19th, 2023, the US Consumer Financial Protection Bureau (CFPB, an independent agency within the Federal Reserve System) proposed a rule that would affect how financial institutions handle their customers' data. The Personal Financial Data Rights rule would give consumers more control over the data they share with institutions, and it would impose certain restrictions on how those institutions handle those data. In particular, it would prevent firms from "misusing or wrongfully monetizing the sensitive personal financial data." The authority for the proposed rule is Section 1033 of the Dodd-Frank Act.
The proposed rule would, in giving consumers more control over the personal information they share with banks and other financial institutions, enable consumers to move to better-value providers. The CFPB explains these benefits under three heads. Consumers would:
- "Get their data free of junk fees: Banks and other providers subject to the rule would have to make personal financial data available, at no charge to consumers or their agents, through dedicated digital interfaces that are safe, secure, and reliable.
- "Have a legal right to share their data: People would have a legal right to grant third parties access to information associated with their credit card, checking, prepaid, and digital wallet accounts. This type of data can help firms provide a wide range of products and services, including cash flow-based underwriting that stands to improve pricing and access across credit markets. When these firms offer a desired product or service, people would be able to switch providers more easily. They would also be able to more conveniently manage accounts from multiple providers.
- "[Be able to] walk away from bad service: Not only would the proposed rule increase competitive forces among financial institutions, it would also enable people to walk away from bad services and products. People can become trapped by providers that hold their data, but this proposal would allow them to more easily shift their data to a competitor offering better or lower priced products and services."
Thus the rule is intended to foster competition, to the consumer's benefit.
The CFPB doesn't see the proposed Personal Financial Data Rights rule as punitive or one-sided. The agency's statement outlines four ways in which better practices would benefit financial institutions as well as their customers:
- "Robust protections to prevent unchecked surveillance and misuse of data: Companies that people authorize to access data on their behalf would have to agree to certain important conditions. Third parties could not collect, use, or retain data to advance their own commercial interests through actions like targeted or behavioral advertising. Instead, third parties would be obligated to limit themselves to what is reasonably necessary to provide the individual’s requested product.
- "Meaningful consumer control: The proposal would also give people the right to revoke access to their data. When a person revokes access, the proposal would require that data access end immediately, and deletion would be the default practice. Access can be maintained for no more than one year, absent the individual consumer’s reauthorization.
- "A move away from risky data collection practices: Many companies currently access consumer data through screen scraping, which often requires people to share their usernames and passwords with third parties. This proposal seeks to move the market away from these risky data collection practices.
- "Fair industry standard-setting: Instead of providing detailed technical standards, the rule contains several requirements to ensure industry standards are fair, open, and inclusive. The CFPB intends to assess future standards developed by the private sector under the terms described in the rule."
The Personal Financial Data Rights rule will be implemented in phases, with larger institutions being the first to fall under it. Comments are invited, and should be submitted by December 29th of this year. For more on the proposed rule, see CyberWire Pro.
FCC proposes a return to net neutrality.
The US Federal Communications Commission (FCC) is moving toward a return to net neutrality. The Wall Street Journal characterizes the proposed regulation as treating Internet service providers like utilities. The regulations would prevent carriers, for example, from giving favorable treatment to some content providers.
US states sue Meta for alleged abuses targeting children.
Meta is being sued by a coalition of forty-one US states and the District of Columbia for allegedly incorporating addictive features that target minors in its social media platforms. The plaintiffs claim that Meta misled the public about the dangers of Facebook and Instagram for young users and intentionally marketed its products to children under the age of 13, who are prohibited from the platforms by Meta’s policies as well as federal law.
The Washington Post explains that the litigation comes on the heels of unsuccessful settlement talks with the tech giant that followed a multi-year investigation into how Meta’s practices affect the mental health of young users. The goal of the lawsuits is to force Meta to make policy changes that will lessen its platforms’ negative impact on minors and financially penalize the company for its practices. The suits claim, “Despite overwhelming internal research, independent expert analysis, and publicly available data that its Social Media Platforms harm young users, Meta still refuses to abandon its use of known harmful features—and has instead redoubled its efforts to misrepresent, conceal, and downplay the impact of those features on young users’ mental and physical health.”
The Wall Street Journal adds that the states’ evidence includes Meta documents made public by ex-employee Frances Haugen, which included copious internal research into the behaviors of teen users and Meta’s work to make its products more attractive to them. Meta spokesperson Liza Crenshaw responded, “Since this investigation has begun, we have engaged in a meaningful dialogue with the attorneys general regarding the ways Meta already works to support young people on its platforms, and how Meta is continuously working to improve young peoples’ experiences. We’re disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path.”
Attorneys General Jonathan Skrmetti of Tennessee, a Republican, and Phil Weiser of Colorado, a Democrat, led the investigation into Meta’s practices, and they say the states involved consider Meta’s products detrimental to public health. Skrmetti stated, “We had a conference six months ago working on this, we had over 100 people there. That’s tobacco-suit level, opioid-suit level commitment.” As Wired notes, Instagram was found to be using five features to lure teenagers to the platform: recommendation algorithms, likes, notifications, photo filters, and the infinite scroll system of Instagram’s feeds.
Paul Bischoff, privacy advocate at Comparitech, thinks it unlikely the plaintiffs will win this one. “Does Meta design Facebook and Instagram to keep users on them for longer and repeatedly coming back?" Bischoff asked rhetorically. "Absolutely! Is that illegal? No. Are design decisions specifically targeted at children? Probably not. Is Meta the only company making addictive apps? Absolutely not. While I fully understand the concern that parents have for their children, I don't think this lawsuit will hold up in court. The government and corporations will never be substitutes for parental supervision and guidance.”
Chris Hauk, consumer privacy champion at Pixel Privacy, thinks that some positive incentives may emerge from the litigation, whatever its outcome. “I think that it is quite possible that lawsuits like this one may result in app warnings, much like the smoking lawsuits in the 60s and later led to warnings on cigarette packs. If Facebook did know that Instagram was addictive and that it made users feel bad about themselves and still did nothing to warn users about this possibility, they should be penalized in some way, and should possibly have restrictions placed on apps like Instagram to limit exposure to teenagers and younger. I have always had concerns about how much personal information apps like Instagram and Facebook collect from underage users, as well as how much personal info those younger users inadvertently share while online.”