News for the cybersecurity community during the COVID-19 emergency: Wednesday, May 13th, 2020. Daily updates on how the pandemic is affecting the cybersecurity sector.
Disruptive espionage. Privacy and efficacy (again). You and the AI should talk more.
US agencies warn of Chinese espionage against COVID-19 research institutions.
A joint warning issued by the US Federal Bureau of Investigation (FBI) and the Cybersecurity and Infrastructure Security Agency (CISA) says the Bureau is actively investigating "the targeting and compromise of U.S. organizations conducting COVID-19-related research by PRC-affiliated cyber actors and non-traditional collectors." The "PRC" is of course the People's Republic of China, and "non-traditional collectors" has in earlier US Government advisories referred to students and researchers already in place at institutions who are being activated to collect. Think of non-traditional collectors as, for the most part, forming a specific kind of insider threat. Thus the espionage has allegedly moved beyond the password-spraying attacks CISA and its UK counterparts in the National Cyber Security Centre warned against last week.
The warning includes recommendations for research institutions:
- "Assume that press attention affiliating your organization with COVID-19-related research will lead to increased interest and cyber activity."
- "Patch all systems for critical vulnerabilities, prioritizing timely patching for known vulnerabilities of internet-connected servers and software processing internet data."
- "Actively scan web applications for unauthorized access, modification, or anomalous activities."
- "Improve credential requirements and require multi-factor authentication."
- "Identify and suspend access of users exhibiting unusual activity."
Note the first recommendation. You may not be interested in the spies, researcher, but the spies are going to be interested in you.
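The last recommendation, identifying users who exhibit unusual activity, can be approximated even with very simple per-user baselining. The sketch below is purely illustrative: it assumes a hypothetical stream of (user, day) login events, not any particular SIEM's or identity provider's log format, and a crude z-score rule in place of real anomaly detection.

```python
from collections import defaultdict
from statistics import mean, stdev

def flag_unusual_users(login_events, threshold=3.0):
    """Flag users whose most recent daily login count deviates sharply
    from their own historical baseline.

    login_events: iterable of (user, day) tuples -- a hypothetical,
    simplified log format chosen for illustration only.
    """
    # Count logins per user per day.
    counts = defaultdict(lambda: defaultdict(int))
    for user, day in login_events:
        counts[user][day] += 1

    flagged = []
    for user, per_day in counts.items():
        days = sorted(per_day)
        baseline = [per_day[d] for d in days[:-1]]  # history, excluding latest day
        if len(baseline) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        latest = per_day[days[-1]]
        if sigma > 0:
            score = (latest - mu) / sigma
        else:
            # Perfectly steady history: any jump above the norm is suspicious.
            score = float("inf") if latest > mu else 0.0
        if score > threshold:
            flagged.append(user)  # candidate for review or suspension
    return flagged
```

In practice a flagged account would go to a human analyst for review before suspension; the point of the sketch is only that "unusual activity" implies keeping a baseline to be unusual against.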
Espionage against COVID-19 research may not be confined to China. And it may be more disruptive than mere collection would be.
The Wall Street Journal writes in an exclusive that Iran as well as China is engaged in spying on organizations conducting COVID-19 research. These efforts have been in progress since January 3rd at least, and the damage they may have done could extend to more than simple theft of intellectual property. There appears to be a serious possibility of data corruption in the course of the incursions. Such corruption may have been accidental, it may have been incidental to the attackers' attempts to cover their tracks (like "a house burglar who by cleaning his own fingerprints causes inadvertent damage to the home"), or it may have been intentional. "It is difficult, and sometimes impossible, to know what motivates such malfeasance," the Journal quotes a senior US official, "but any such activity carries with it the risk of triggering accidental, disruptive effects."
CNBC notes that research organizations inevitably expand their attack surface as more of their people work from home, and that both personal and institutional networks are likely to become targets of cyberespionage. (CNBC does mention the honor-among-thieves point of view that early in the pandemic took seriously various criminal and state-sponsored threat actors' avowals of their intention to leave medical, emergency, and research organizations alone, presumably for the common good. But at this point it should be safe to say that all of that stuff was so much argle-bargle and pixie dust to misdirect the rubes. Attacks on these kinds of organizations have, if anything, risen.)
Contact tracing apps: privacy and efficacy.
In the European Union, the European Telecommunications Standards Institute (ETSI) is working on a set of standards designed to ensure the efficacy and interoperability of any technology developed to help contain COVID-19 through data collection and analysis, ComputerWeekly reports. The aim is "to enable the development of interoperable systems to automatically trace and inform potentially infected users in addition to manual notification methods, while preserving users’ privacy and complying with relevant data protection regulations." This goal is predicated on the conviction that the most effective way to contain the spread of the disease is by using contact tracing to break the chain of transmission from infected to uninfected persons.
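ETSI's actual specifications aren't quoted here, but the kind of privacy-preserving automatic tracing under discussion in Europe can be illustrated in miniature: each phone keeps a secret daily key on-device and broadcasts short-lived pseudonymous identifiers derived from it, so that exposure can later be matched without a central register of anyone's movements. The sketch below is a hedged illustration only; the key size and interval scheme are assumptions for the example, not parameters from any standard.

```python
import hashlib
import hmac
import os

def daily_key() -> bytes:
    """A fresh random key generated on-device each day. In decentralized
    designs it never leaves the phone unless the user tests positive and
    consents to upload it."""
    return os.urandom(16)

def rolling_identifier(key: bytes, interval: int) -> bytes:
    """Derive a short-lived pseudonymous identifier for one time interval
    (e.g. interval = minutes since midnight // 15 -- an illustrative choice).
    Nearby phones exchange these over Bluetooth; without the daily key,
    observers cannot link successive identifiers to one person."""
    mac = hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256)
    return mac.digest()[:16]
```

The privacy property rests on the derivation being one-way: the broadcast identifiers rotate every interval and reveal nothing about the key, while anyone later given a consenting patient's key can recompute that patient's identifiers and check them against locally stored contacts.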
A critical view may be found in Foreign Policy, which offers a long, skeptical take on how likely contact tracing apps are to help control the pandemic. The essay claims that too little is known about modes of transmission for the apps to do much to help. If the technology got prognosis and transmission right, it would be helpful, provided enough people adopted it and used it properly. But the essay argues that the three success stories widely touted (Singapore, South Korea, and Australia) turn out, on closer inspection, to be less successful than they at first seem.
Privacy and security also remain concerns, as the EU policy shops indicated. In the UK, where trials of an NHSX-developed app have been in progress on the Isle of Wight, Parliament's Joint Committee on Human Rights has asked Health Secretary Matt Hancock to support proposed legislation that would put privacy safeguards in place for the technology. The proposed Contact Tracing (Data Protection) Bill 2020, ComputerWeekly writes, provides for the “regulation of the processing of information in respect of contact tracing for Covid-19, and for connected purposes.” It would appoint a new Digital Contact Tracing Human Rights Commissioner responsible for overseeing the privacy aspects of technologies deployed to track the spread of the disease.
The AI really doesn't know what to make of you nowadays. You're breaking its artificial heart.
It's like you don't talk anymore, and that, we hear (because there's not much to do beyond watching advice shows on daytime TV), is bad for any relationship.
Here's a consequence of the pandemic emergency it's been easy to overlook: MIT Technology Review says that artificial intelligence trained on actual human behavior has been suddenly baffled by all of your toilet paper hoarding, your strange hours, your seclusion in your basement, attic, bedroom, or other functional garret. It really doesn't know what to make of a population where what was once outlier behavior is now mainstream, when the new normal is so, so abnormal, at least from the machine's point of view. This has been particularly evident in applications of AI to retail problems: what to expect people to buy, how likely they are to close a purchase, how consumption patterns inform inventory, etc.
A lot more human intervention is required, but many businesses that have deployed AI lack the human resources to supervise the machines. Technology Review finds the upside in all of this: "If we are looking for a silver lining, then now is a time to take stock of those newly exposed systems and ask how they might be designed better, made more resilient. If machines are to be trusted, we need to watch over them." Raise them up right: you don't want your AI to grow up sniping butts and throwing rocks at cars.
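What Technology Review describes is, in machine-learning terms, distribution shift: models trained on pre-pandemic behavior now see live data that no longer resembles the training set. One minimal way to catch this, sketched below with a hand-rolled two-sample Kolmogorov-Smirnov statistic (the data and threshold are illustrative assumptions, not any retailer's actual monitoring):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical cumulative distribution functions.
    0.0 means the samples look identical; 1.0 means they don't overlap."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

def drifted(training_sample, live_sample, threshold=0.3):
    """Crude drift alarm: flag when live data no longer looks like the
    data the model was trained on, signalling that predictions may need
    human review or the model may need retraining."""
    return ks_statistic(training_sample, live_sample) > threshold
```

A monitor like this doesn't fix the model; it only tells the humans when the world has moved out from under it, which is exactly the supervision gap the article describes.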