At a glance.
- Mobile prescription app GoodRx apologizes for oversharing and revises its practices.
- Newly deployed facial recognition technology misidentifies seven people in London.
- Technical doubts arise about the accuracy of facial recognition.
GoodRx says it's sorry about oversharing customers' data, and that it will henceforth do better.
GoodRx, a discount prescription app that offers patients significant savings on medication (and that Consumer Reports had recommended on those grounds), has been found sharing patient information with Facebook, Google, and other marketing firms. Consumer Reports has revised its recommendation to include a caution about privacy. Naked Security puts at twenty the number of Internet companies with which GoodRx had shared patient data; the apparent use case was to let those companies serve ads likely to interest the app's users. Contrary to many people's assumptions, this sort of information apparently isn't protected under the US Health Insurance Portability and Accountability Act (HIPAA).
GoodRx said in a blog post that, while it had been and remained committed to protecting its users' privacy, it realized after Consumer Reports prompted it to re-examine its practices that, in the case of its relationship with Facebook at least, it had fallen short of its ideals. Henceforth, the company says, personal medical information won't be shared with Facebook, data shared with Google will continue to be anonymized, its audited agreements with other third parties will continue to "operate at the highest standards of privacy," and compliance with the California Consumer Privacy Act (CCPA) will be rolled out nationwide. It has also appointed a new vice president of data privacy to keep GoodRx on the privacy straight-and-narrow.
False positives in London's facial recognition system.
The Metropolitan Police announced in January their intention to deploy live facial recognition technology at "specific locations" around London with a view to tackling "serious crime." The system was deployed last week, and at Oxford Circus, at least, the initial results seem to have been discouraging. Computing writes that seven innocent people were misidentified and apprehended. It quotes the surveillance- and AI-skeptical group Big Brother Watch, which complained on Twitter that 86% of the individuals the automated scans flagged as wanted were in fact false positives, and that 71% of those misidentifications led police to stop and identify the people so flagged.
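For a sense of how a deployment like this can generate alerts that are mostly wrong even when per-face accuracy sounds respectable, here is a minimal back-of-the-envelope sketch. The scan volume, watchlist size, and error rates below are hypothetical assumptions, not figures from the Metropolitan Police trial or from Big Brother Watch.

```python
# Back-of-the-envelope illustration of why live facial recognition alerts can be
# mostly false positives even when per-face accuracy sounds high. All numbers
# here are hypothetical assumptions, not figures from the London deployment.

faces_scanned = 10_000        # passers-by scanned during one deployment
on_watchlist = 10             # how many of them are genuinely wanted
true_positive_rate = 0.70     # chance a wanted face triggers an alert
false_positive_rate = 0.001   # chance an innocent face triggers an alert

true_alerts = on_watchlist * true_positive_rate
false_alerts = (faces_scanned - on_watchlist) * false_positive_rate
total_alerts = true_alerts + false_alerts

print(f"expected alerts: {total_alerts:.1f}")
print(f"share of alerts that are false positives: {false_alerts / total_alerts:.0%}")
# With these assumptions roughly 59% of alerts point at innocent people; a rarer
# watchlist presence or a higher false positive rate pushes that share toward
# the 86% figure Big Brother Watch cites.
```

The arithmetic is the familiar base-rate effect: when almost everyone scanned is innocent, even a small per-face error rate can dominate the alert stream.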
Humans compensate for distance, but cameras have difficulty.
Psychologists are resurfacing the results of a study of face recognition, Naked Security reports, in ways that call into question the reliability of judgments made with the current state of the art. The apparent configuration of a face, that is, the distances separating its various features, changes with the camera-to-subject distance. When humans recognize a familiar face, "perceptual constancy" corrects for those apparent changes; when we don't know the face, recognizing it becomes markedly more difficult. The problem with automated facial recognition is that it lacks, so far at least, that corrective perceptual constancy. Or, to put it another way, to the camera every face is the face of a stranger. A 2017 paper in Cognition, "Camera-to-subject distance affects face configuration and perceived identity," outlines the difficulties of correctly recognizing facial configuration.
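The geometric effect the study describes can be sketched with a simple pinhole-camera model. The landmark spacings, depths, and subject distances below are rough hypothetical values chosen for illustration; they are not measurements from the Cognition paper.

```python
# Pinhole-camera sketch: the apparent ratio between two facial spans changes
# with camera-to-subject distance, because the spans sit at different depths.
# Landmark geometry here is a rough hypothetical, not data from the study.

def projected_span(half_width_m, depth_m, focal_length=1.0):
    """Projected width of a symmetric pair of landmarks under pinhole projection."""
    return 2 * focal_length * half_width_m / depth_m

def configuration_ratio(distance_m):
    """Ear-span / eye-span ratio as it appears in the image at a given distance."""
    eye_span = projected_span(0.032, distance_m)          # eyes ~64 mm apart, roughly in the centre plane
    ear_span = projected_span(0.070, distance_m + 0.07)   # ears ~140 mm apart, ~70 mm further from the camera
    return ear_span / eye_span

for d in (0.5, 1.0, 5.0):
    print(f"subject at {d} m -> ear-span / eye-span ratio = {configuration_ratio(d):.3f}")
# The ratio drifts by more than 10% between a close-up and a distant shot: the
# configural change that human "perceptual constancy" corrects for with familiar
# faces, and that a matcher working from raw geometry does not.
```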