At a glance.
- Report: Verizon data dumped on clear web forum.
- The pitfalls of using algorithms to calculate risk.
Report: Verizon data dumped on clear web forum.
Someone on a web forum has published data said to belong to between 7.5 million and 9 million Verizon home internet and cellular customers, but the data appears to be old and unrelated to any breach at Verizon. The forum user claims the data was stolen from the leading mobile carrier by hackers in January. The post, which appeared on a forum dedicated to database downloads, leaks, and cracks, reads, “Today I have uploaded the Verizon Database for you to download, thanks for reading and enjoy!” The cybersecurity team at SafetyDetectives, which discovered the post, says the data does not appear to be highly sensitive, but if merged with other data it could be used for identity theft or other nefarious purposes. The forum user, who goes by a handle apparently too offensive to publish, made the database available for download for free, and because the forum is on the clear, surface web, the data is available not only to forum members but also to anonymous users.
Filenames indicate the data was stored by Verizon prior to January 2022, and there are two tables of note: the first apparently contains data on internet subscribers, and the other is composed of information belonging to cellular network users. When notified about the data dump, a Verizon spokesperson denied the data was connected to any breach of the company’s systems, saying it was older data that had already surfaced in the past. “The bad actor continues to circulate it and pretending like it’s new,” the spokesperson stated. “The fact is that it’s not. As mentioned, this was not a Verizon breach, but an incident involving a third party vendor that formerly did business with Verizon. The company had very limited customer information, and they are no longer affiliated with our company.”
The pitfalls of using algorithms to calculate risk.
Some governments have begun using machine learning algorithms to streamline and simplify what would otherwise be complicated, arduous bureaucratic tasks. However, no algorithm is perfect, which means any efficiency they provide can come at a cost. Wired takes an in-depth look at an algorithm used in the Dutch city of Rotterdam to sniff out welfare fraud. The algorithm, designed by consulting firm Accenture and adopted by the city in 2018, generates a risk score for each of the 30,000 Rotterdam residents who collect welfare benefits. The score, derived from a number of factors like age, gender, language skills, and marital status, supposedly indicates which residents are most likely to commit fraud. However, with the help of researchers at Lighthouse Reports, Wired found that Rotterdam’s fraud algorithm has flaws that make it not only inaccurate but also unfair, discriminating on the basis of ethnicity and race. Authorities investigate the residents with the highest risk scores, sometimes suspending their benefits until the investigation is complete.
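To see why such a system can go wrong, it helps to picture the basic mechanics: personal attributes are encoded as numbers, combined with weights into a single score, and the highest-scoring residents are flagged for investigation. The sketch below is purely illustrative — the feature names, weights, and data are hypothetical, not Rotterdam’s actual model — but it shows how a weight attached to a sensitive or subjective variable directly pushes certain groups to the top of the investigation list.

```python
# Hypothetical sketch of a welfare-fraud risk-scoring pipeline.
# All features, weights, and records are invented for illustration;
# this is not the Accenture/Rotterdam model.

def risk_score(person, weights):
    """Weighted sum of already-encoded (0/1) feature values."""
    return sum(weights[k] * person.get(k, 0) for k in weights)

weights = {                        # hypothetical learned weights
    "age_under_30": 0.4,
    "poor_language_skills": 0.9,   # a subjective caseworker assessment
    "single": 0.3,
}

residents = [
    {"id": 1, "age_under_30": 1, "poor_language_skills": 0, "single": 1},
    {"id": 2, "age_under_30": 0, "poor_language_skills": 1, "single": 1},
    {"id": 3, "age_under_30": 0, "poor_language_skills": 0, "single": 0},
]

# Rank all residents by score; the top of the list gets investigated.
ranked = sorted(residents, key=lambda p: risk_score(p, weights), reverse=True)
flagged = [p["id"] for p in ranked[:2]]  # → [2, 1]
```

Note that resident 2 is flagged first largely because of the heavily weighted “language skills” variable — a subjective input of exactly the kind the Wired investigation found driving discriminatory outcomes.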
The process can be humiliating and crippling for the individual under the magnifying glass. One such resident told Wired, “It took me two years to recover from this. I was destroyed mentally.” This is even more troubling when, due to the algorithm’s flaws, it turns out the individuals under suspicion have done nothing wrong. For such an algorithm to process the data it receives, nuance is often flattened out of the equation, and it’s nearly impossible to remove bias from the algorithm’s calculations. For instance, 17% of the variables used in Rotterdam’s scoring algorithm are based on subjective assessments from caseworkers. Tamilla Abdul-Aliyeva, a researcher on technology and human rights at Amnesty International, explains, “Even when the use of the variable does not lead to a higher risk score, the fact that it was used to select people is enough to conclude that discrimination has occurred.”