At a glance.
- Japan urged to adhere to US cybersecurity standards.
- Report finds AI could increase production of fake sexual abuse imagery.
Japan urged to adhere to US cybersecurity standards.
Japanese media report that Prime Minister Fumio Kishida will require all government contractors to meet US cybersecurity standards issued by the National Institute of Standards and Technology (NIST). As cyberattacks targeting Japanese defense contractors – as well as other organizations – increase, PM Kishida has made cybersecurity a top priority of his administration. Japan’s Cybersecurity Strategic Headquarters will oversee the transition to the NIST standards, which is scheduled for completion by March 31, 2024, and is expected to affect over one thousand contractors.
The Asia Times offers an overview of the history of Japan's cybersecurity partnership with the US, which can be traced back twenty-five years to a collaborative effort to develop and deploy Japan’s ballistic missile defense systems. A 2011 attack targeting Mitsubishi Heavy Industries (MHI), Japan’s largest defense contractor, was followed by attacks impacting Mitsubishi Electric Corp in 2019 and the NEC Corporation in 2020. The surge in attacks compelled the US, which defends Japan under the Japan-US Security Treaty, to urge Japan to bolster its cybersecurity capabilities. In January 2023, Japan’s Minister of Economy, Trade, and Industry Yasutoshi Nishimura joined US Secretary of Homeland Security Alejandro Mayorkas to sign a Memorandum of Cooperation (MOC) on Cybersecurity. It’s worth noting that a revised version of the NIST standards, which apply to contractors supplying the US Department of Defense and other government agencies, was released in May of this year, clarifying the recommendations and resolving ambiguities.
Report finds AI could increase production of fake sexual abuse imagery.
Researchers at the Stanford Internet Observatory and Thorn say artificial intelligence tech could lead to the production of computer-generated child sexual abuse imagery (CG-CSAM) that is almost indistinguishable from the real thing. The report’s abstract reads, “Advances in the open-source generative ML community have led to increasingly realistic adult content, to the point that content indistinguishable from actual photographs is likely to be common in the very near future.”
The report offers an overview of recent advancements in diffusion models and how the public release of diffusion platforms (like DALL-E and Midjourney) has led to the creation and distribution of near-realistic sexual imagery. The researchers go on to discuss the potential societal and technical consequences of artificial CG-CSAM, which include overwhelming legal systems with reports of CSAM cases, the re-victimization of children depicted in the images, and grooming and sextortion. Recommended mitigations include biasing machine learning models against child nudity, improving watermarking systems to ensure that CG-CSAM can be easily identified, and the use of passive detection mechanisms. Active monitoring of CG-CSAM production networks or modifying industry CSAM classifications could also make it harder for AI-generated sexual abuse images to be disseminated.