At a glance.
- Tim Hortons app found in violation of Canadian privacy laws.
- Voice identification technology.
- Australian health data exposed.
Where did you buy that donut, eh?
According to a two-year investigation by the Office of the Privacy Commissioner of Canada (OPC), the official app of Canadian coffee-and-donut chain Tim Hortons violated the country’s privacy laws by regularly tracking and recording the locations of its users, even when the app was not in use and without proper consent. Privacy Commissioner Daniel Therrien told Reuters, “Tim Hortons clearly crossed the line by amassing a huge amount of highly sensitive information about its customers.” Started in 2020, the joint investigation by federal and provincial authorities highlights the pitfalls of what Therrien called "poorly designed technologies," as Tim Hortons lacked a privacy management program that could have helped it prevent the issues. According to the report released Wednesday, the tracking data was collected by third-party service provider Radar for targeted advertising and product promotions, but Tim Hortons apparently never used it. What’s more, the contract with Radar contained “vague and permissive” language that could have allowed Radar to use the personal data in aggregated or de-identified form for its own business purposes. "While we accept that Radar did not engage in a use or disclosure for its own purposes, the contractual language in this case would not appear to constitute adequate protection, by Tim Hortons, of Users' personal information," the report states. CBC reports that Tim Hortons removed the location-tracking technology from the app in August 2020 and has agreed to delete all granular location data and to have its third-party service providers do the same. The company also says it will establish a privacy management program for all of its apps to ensure they comply with federal and provincial privacy laws.
The value of a voice.
Wired explores the world of voice recognition technology and the multibillion-dollar industry that has sprung up around it. Advances in artificial intelligence and machine learning have made it possible to detect not just what you’re saying, but your identity, your mood, and even the shape of your face, all from the sound of your voice. Siri and Alexa have long been able to identify a user’s voice, TikTok has started collecting users’ “voiceprints,” and call centers are using voice recognition to assess callers’ behavior and emotions. As the value of a voice grows, privacy researchers are racing to find ways to protect users from having their voices used against them. Emmanuel Vincent, a senior research scientist specializing in voice technologies at France’s National Institute for Research in Digital Science and Technology, says a user’s voice can reveal details about their emotions and even their medical condition: “These additional pieces of information help build a more complete profile—then this would be used for all sorts of targeted advertisements.” Some hackers have even found ways to clone a victim’s voice in order to impersonate them. Researchers are exploring various ways of protecting voice data, including obfuscation, distributed and federated learning, and speech anonymization. Meanwhile, privacy legislation such as the EU’s General Data Protection Regulation treats voice data as protected biometric information, and companies like McDonald’s and Amazon have already faced court scrutiny over their use of voice data.
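To give a sense of what speech anonymization can look like in practice, here is a minimal sketch, not drawn from the Wired piece, that masks a speaker’s vocal characteristics by pitch-shifting a recording with the open-source librosa library; the file names and the four-semitone shift are hypothetical choices for illustration.

```python
# Minimal speech-anonymization sketch: pitch-shift a recording so the
# speaker is harder to identify while the words stay intelligible.
# Hypothetical file names; requires the librosa and soundfile packages.
import librosa
import soundfile as sf

# Load the original mono recording at its native sample rate.
audio, sample_rate = librosa.load("caller.wav", sr=None, mono=True)

# Shift the pitch down by four semitones to obscure vocal characteristics.
anonymized = librosa.effects.pitch_shift(audio, sr=sample_rate, n_steps=-4.0)

# Write the transformed audio back out.
sf.write("caller_anonymized.wav", anonymized, sample_rate)
```

Production anonymization systems go well beyond this, for example using neural voice conversion, since a simple pitch shift can often be reversed or defeated by speaker-recognition models.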
Australian health data exposed in third-party breach.
CTARS, a cloud-based client management system used by Australian National Disability Insurance Scheme (NDIS) service providers, has disclosed that it experienced a data breach on May 15 and that the stolen data has already been published on the dark web. CTARS stated, "Although we cannot confirm the details of all the data in the time available, to be extra careful we are treating any information held in our database as being compromised.” Though CTARS says the volume of data collected makes it difficult to pinpoint exactly what was compromised, Have I Been Pwned creator Troy Hunt says at least 12,000 email addresses were impacted, ZDNet reports. Hunt tweeted, “This includes information such as suicide attempts. Mental health issues. Drug use (both prescription and illicit). Violent behaviour. Sexual abuse…This has been published to a hacking forum and accessed by an untold number of people. It's horrendous." In its official apology, however, CTARS dismissed the idea that a cybercriminal might be interested in patients’ health information. "Health and other sensitive personal information by itself is generally not useful to a cyber-criminal," the company stated, before directing anyone who might be experiencing distress to consult a healthcare professional.