At a glance.
- UK unveils new fraud prevention strategy.
- The battle over protecting children on the web.
UK unveils new fraud prevention strategy.
UK Home Secretary Suella Braverman says Parliament is working on a new plan for fighting fraud. Over 40% of all crime in England and Wales can be attributed to fraud, making it the most common crime in the UK. Around 3.7 million cases were reported in 2022, but, as the Telegraph notes, only one in one thousand offenses resulted in a charge or summons. Braverman writes in the Telegraph, “The impact is devastating and goes far beyond just financial loss. It’s personal - and causes huge emotional strain. We must bring more fraudsters to justice… This government is going further than ever to fix that problem.” Under the new plan, the UK’s three intelligence agencies will partner with regional squads of police officers focused on fraud prevention. The Online Safety Bill will specifically crack down on cyberfraud by placing responsibility for user protection in the hands of online platforms. To prevent phone scams, there’s a plan to ban cold calling and number spoofing, as well as SIM farms and other devices that allow criminals to send mass SMS messages under fake numbers. Braverman also says the government will urge law enforcement to make fraud a priority, and £400 million will be allocated to support this effort. Prime Minister Rishi Sunak voiced his support for the plan, stating, “We will take the fight to these fraudsters, wherever they try to hide. By blocking scams at the source, boosting protections for people and bolstering enforcement, we will stop more of these cold-hearted crimes from happening in the first place and make sure justice is done.”
The battle over protecting children on the web.
US Senators Richard Blumenthal, a Democrat from Connecticut, and Marsha Blackburn, a Republican from Tennessee, yesterday reintroduced the Kids Online Safety Act (KOSA), which aims to place responsibility on online platforms to keep harmful content away from children. In response to worries that a previous version of the bill would allow for over-moderation and give state Attorneys General too much power over content, this updated version offers a list of specific harms – like content promoting self-harm or substance abuse – that tech companies must attempt to limit. In addition, CNBC explains, under the bill platforms will be subject to annual independent audits to assess risks posed to minors. This revised version of KOSA has earned the backing of groups like Common Sense Media, the American Psychological Association, and the American Academy of Pediatrics, and Blumenthal also says that Senate Majority Leader Chuck Schumer is “a hundred percent behind this bill and efforts to protect kids online.” Despite the changes, some digital rights groups still say KOSA could cause more harm than good. Evan Greer, director at Fight for the Future, says Blumenthal never responded to requests to meet with the digital rights nonprofit. “The bill still contains a duty of care that covers content recommendation,” Greer states, “and it still allows state Attorneys General to effectively dictate what content platforms can recommend to minors.” ACLU Senior Policy Counsel Cody Venzke said in a statement, “The ACLU remains strongly opposed to KOSA because it would ironically expose the very children it seeks to protect to increased harm and increased surveillance.”
Meanwhile, a new law passed in the state of Utah requires pornography sites to verify users’ ages, and online adult platform PornHub has responded by simply disabling access to its site for people in the state. Vice reports that Utah residents attempting to visit the site will now be greeted by a message from adult performer Cherie DeVille asking state lawmakers to change the measure. “Please contact your representatives before it is too late and demand device-based verification solutions that make the internet safer while also respecting your privacy,” DeVille states. SB 287: Online Pornography Viewing Age Requirements, which was signed in March and went into effect on May 2, states, “A commercial entity that knowingly and intentionally publishes or distributes material harmful to minors on the Internet from a website that contains a substantial portion of such material shall be held liable if the entity fails to perform reasonable age verification methods to verify the age of an individual attempting to access the material.”
The debate over what children should and shouldn’t see on the web has been raging since the internet first emerged. The New York Times recounts how in 1996 Congress passed a telecommunications bill making it illegal to send or display “obscene or indecent” content to minors. While these anti-pornography rules had bipartisan backing, civil liberties groups said such measures violated freedom of speech and that age verification would be too expensive to implement. (Nearly three decades later, PornHub would likely agree.) The American Civil Liberties Union (ACLU) filed a lawsuit challenging a part of the law called the Communications Decency Act, and the Supreme Court sided with the ACLU, saying it was up to parents to use content-filtering software (which was just about to hit the market) to protect their kids.