At a glance.
- The AI debate continues.
- CISA recommends FCC Covered List for risk management.
- FTC says Facebook is not complying with privacy agreement.
The AI debate continues.
In response to growing anxieties about the inherent dangers of artificial intelligence, the White House is hosting a meeting today with the CEOs of Google, Microsoft, Anthropic, and OpenAI to discuss the Biden administration’s plan for promoting responsible AI development. A senior administration official told the Washington Post that Vice President Kamala Harris, along with other government representatives, will use the meeting as an opportunity to reiterate President Joe Biden’s call for industry leaders to ensure the safety of AI products before releasing them to the public. The White House also announced it will be investing in “trustworthy” AI by supporting tech companies that voluntarily agree to conduct a public assessment of their AI systems. For federal agencies, the Office of Management and Budget will release draft guidance on safe AI practices.
Experts continue to express concerns about the risks posed by the rapidly growing AI industry. Lina Khan, chair of the Federal Trade Commission (FTC), writes in the New York Times, “The full extent of generative A.I.’s potential is still up for debate, but there’s little doubt it will be highly disruptive.” She compares the current growth of AI to the early days of the Web 2.0 era, when tech companies first began profiting from personal user data: “What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.” She argues that AI is no different, and that government agencies like the FTC must act now to ensure this new technology does not undermine open, fair market competition.
Breaking Defense reports that Craig Martell, the Defense Department’s chief digital and AI officer, spoke yesterday at AFCEA’s TechNet Cyber conference, where he voiced his own concerns about generative AI. Referencing the AI-powered chatbot ChatGPT, he said, “It has been trained to express itself in a fluent manner. It speaks fluently and authoritatively. So you believe it even when it’s wrong… And that means it is a perfect tool for disinformation… We really need tools to be able to detect when that’s happening and to be able to warn when that’s happening.”
Microsoft’s chief economist, Michael Schwarz, shared a similar sentiment while speaking at a World Economic Forum panel in Geneva on Wednesday. “I am confident AI will be used by bad actors, and yes it will cause real damage,” Schwarz said. “It can do a lot of damage in the hands of spammers with elections and so on.” GovTech reports that Microsoft is already working to build protections to help mitigate that potential damage, but Schwarz argued that lawmakers should refrain from directly regulating AI training sets, lest they hamper the benefits of AI: “We, as mankind, ought to be better off because we can produce more stuff with less work.”
CISA recommends FCC Covered List for risk management.
The US Cybersecurity and Infrastructure Security Agency (CISA) says organizations should use the Federal Communications Commission’s (FCC) Covered List when developing their risk management plans. The Covered List consists of communications equipment and service providers that the FCC has determined could pose a risk to national security, including Huawei, ZTE, and Dahua. Mike Parkin, senior technical engineer at Vulcan Cyber, told Infosecurity Magazine that although avoiding the companies on the Covered List could prove costly, both government and civilian institutions would be wise to follow CISA’s advice. “Organizations that are bound to CISA’s directives are required to follow them and take the necessary actions, while for civilian organizations, CISA directives are simply a recommendation,” Parkin stated. “However, from a cybersecurity perspective, they have historically been sound recommendations and are well worth following.”
FTC says Facebook is not complying with privacy agreement.
Yesterday the US Federal Trade Commission proposed that Facebook’s parent company Meta be banned from monetizing the data of minors. As part of a 2020 settlement following the FTC’s investigation of the Cambridge Analytica data scandal, CNBC explains, Facebook agreed to regular independent assessments of its privacy program. The FTC says a recent assessment found “several gaps and weaknesses in Facebook’s privacy program” that posed “substantial risks to the public.” According to the Commission, Facebook has been misrepresenting the parental controls on its Messenger Kids app, which could amount to a violation of the Children’s Online Privacy Protection Act (COPPA). In addition to the ban on monetizing children’s data, the FTC says the company should not be allowed to launch any new products or services until the assessor confirms that Meta is in full compliance with the terms of the privacy agreement. Meta has thirty days to respond, after which the FTC will determine whether updating the 2020 order “is in the public interest or justified by changed conditions of fact or law.” Facebook spokesperson Andy Stone commented on the FTC’s findings: “Despite three years of continual engagement with the FTC around our agreement, they provided no opportunity to discuss this new, totally unprecedented theory. We have spent vast resources building and implementing an industry-leading privacy program under the terms of our FTC agreement. We will vigorously fight this action and expect to prevail.”