At a glance.
- US Federal Trade Commission says health apps must notify consumers of data breaches.
- Russian election manipulation, domestic front.
- Military cyber doctrine, in the UK and China.
- UN cautions on AI's risk to human rights.
FTC: health apps must notify consumers about breaches.
The US Federal Trade Commission (FTC) ruled Wednesday that apps collecting health data—like sleep, fitness, diet, mental health, and fertility apps—are covered by the 2009 Health Breach Notification Rule and must disclose improper data access to users or face fines of up to $43,000 per day, the Hill reports. FTC Chair Lina Khan commented, “Digital apps are routinely caught playing fast and loose with user data, leaving users’ sensitive health information susceptible to hacks and breaches…it is critical that the FTC use its full set of tools to protect Americans.”
Khan pointed to “the commodification of sensitive health information” combined with the “growing prevalence of surveillance-based advertising” as “a more fundamental problem” deserving of FTC attention. “[T]he Commission should be scrutinizing what data is being collected in the first place and whether particular types of business models create incentives that necessarily place users at risk,” she said.
Russia’s election, and the Kremlin’s cyber machinations.
Foreign Policy details the Kremlin’s “dirty tricks” to hold onto power in advance of the parliamentary elections set for this weekend. Atlantic Council calls the elections “legitimization rituals” for President Putin, and “flash points to crystallize dissatisfaction” for the opposition.
In addition to piloting disinformation campaigns against opposition parties, labeling independent news outlets and civil organizations ‘foreign agents,’ arresting citizens for social media posts, and busing in “newly minted” voters, Moscow is cracking down on a Navalny-inspired digital “Smart Voting” tool, which helps voters identify candidates in their district with the best odds of defeating Putin’s pals.
Government officials are pressuring Apple and Google to censor the platform, and have called in the US ambassador for questioning while accusing the US Defense Department of bankrolling the initiative. Russia’s top search engine delisted the site. The Apple app store is reportedly blocked and crashing in-country, and popular VPNs have been shuttered. Google Docs was also temporarily disabled as a list of opposition candidates circulated. After Smart Voting was hacked, users endured threats and phony endorsements, along with visits from the police.
By force or by course, tech geeks’ rising military importance, from London to Beijing.
The Daily Swig looks at the UK military’s new prioritization of cyberspace through investments in offensive and defensive capabilities and digital infrastructure safeguards. As the armed forces turn their sights to IoT, AI, robotics, data analytics, and machine learning technology as ‘force multipliers,’ workforce development strategies are shifting as well. The commander of Strategic Command, General Sir Patrick Sanders, explained, “I have more need of Q than I do 007 or M.” Modernizing recruitment and training efforts, professional standards and incentives, and industry and Government partnerships to develop ‘penta-phibian’ troops adept in all five domains of warfare represents Her Majesty’s Forces’ latest challenge.
Express presents the just-announced AUKUS initiative among Australia, the UK, and the US as in part a response to Beijing’s increasing militarization of cyberspace since a 2015 Chinese Ministry of National Defense white paper plugged plans to strengthen the country’s cyber services. Following the document’s publication, President Xi made cyber operations equal to other military operations, inaugurated a Joint Force Command to more broadly incorporate cyber capabilities, and set up a regulatory Cyber Security Association of China.
UN proposes controlling AI.
The United Nations High Commissioner for Human Rights has called for an immediate moratorium on the development and deployment of artificially intelligent technologies that "pose a serious risk to human rights until adequate safeguards are put in place." Details of the proposed moratorium may be found in the Human Rights Council's report, but the concerns center on the potential for automated systems to encode and amplify bias.
Patricia Thaine, CEO of Private AI, emailed to express agreement, and to advocate "privacy by design" in the development of AI systems:
"The misuse of AI is undoubtedly one of the most pressing human rights issues the world is facing today—from facial recognition for minority group monitoring to the ubiquitous collection and analysis of personal data. 'Privacy by Design' must be core to building any AI system for digital risk protection. Thanks to excellent data minimization tools and other privacy enhancing technologies that have emerged, even the most strictly regulated data [healthcare data] are being used to train state-of-the-art AI systems in a privacy-preserving way."