At a glance.
- Water armies across the Taiwan Strait.
- Pakistan blocks access to Wikipedia.
- Normalizing an illegal occupation: organization charts and sham elections.
- A kind word for the bots.
Water armies across the Taiwan Strait.
Taiwan News reports that Beijing has marshaled an "Internet water army" to post harassing comments to Facebook pages belonging to senior Taiwanese politicians. What's a "water army"? It's a collection of people paid to establish accounts and post comments aligned with a government's interest. The Chinese Communist Party (CCP) has, according to the Taiwan News, hired online marketing firms to assemble the water army: "The CCP has hired online marketing companies, which have hired people to engage in cyberattacks, and at least 825 abnormal accounts have been discovered."
Pakistan blocks access to Wikipedia.
As threatened, Pakistani officials have blocked the free online encyclopedia website Wikipedia due to the presence of “sacrilegious” content. Last week the Pakistan Telecommunication Authority gave the operators of Wikipedia forty-eight hours to remove the content in question, but when some of the content was still present after the deadline, the authority followed through on blocking access to the site. Malahat Obaid, spokesperson for Pakistan Telecommunication Authority, told Bloomberg that the authority will consider removing the block if talks with Wikipedia officials result in the complete removal of the content.
Normalizing an illegal occupation: organization charts and sham elections.
If reorganization and election scheduling can count as propaganda of the deed, here are two recent examples.
The UK's Ministry of Defence noted a change in Russian military organization. "The Russian military has formally integrated occupied areas of Ukraine into its Southern Military District. On 03 February 2023, Russia state news agency TASS reported that the Donetsk and Luhansk People’s Republics and the Zaporizhzhia and Kherson regions are being placed under the three-star command which is headquartered in Rostov-on-Don. This follows Defence Minister Sergei Shoigu’s January announcement that military expansion would include the establishment of ‘self-sufficient force groupings’ in Ukraine. The move highlights that the Russian military likely aspires to integrate newly occupied territory into a long-term strategic posture. However, it is unlikely to have an immediate impact on the campaign: Russia currently deploys forces from across all of Russia’s military districts, commanded by an ad hoc deployed headquarters." The reorganization is unlikely to have any tactical or operational significance. It serves rather as a further gesture toward the normalization of Russia's annexation of occupied Ukrainian territory, and it provides a legal fig leaf for Russia's war of aggression, recasting that war as a defense of Russia proper should Ukraine continue to retake territory.
A further sign that this is the case was announced last week. The UK's MoD Monday morning reported, "On 01 February 2023, Russian Federation Council chair Valentina Matvienko said that regional elections will take place in the newly annexed areas of Ukraine on 10 September 2023. Incorporating the elections into the same day of voting which is scheduled across Russia highlights the leadership’s ambition to present the areas as integral parts of the Federation. This follows continued efforts to ‘Russify’ the occupied areas, which include revision of the education, communication, and transport systems. While meaningful democratic choices are no longer available to voters at even regional level elections in Russia, leaders will likely make the self-vindicating argument that new elections further justify the occupation." The timing is itself the message: if the conquered Ukrainian provinces really are now part of Russia proper, why wouldn't elections be held there on the same schedule as the rest of the Federation?
Artificially intelligent chatbots and allied technologies have attracted enthusiasm, competition, and concern reminiscent, on a smaller scale, of the dot-com mania at the turn of this century. Right now the two big competitors are OpenAI's ChatGPT (backed by Microsoft, which is building the technology into Bing), ahead by a neck, and Google's more recently released Bard. Both produce plausible prose, but both have stumbled, too. ChatGPT, the Wall Street Journal reports, needs some help with math problems (maybe get it a calculator). And Bard embarrassed Google in the company's own ad: according to Reuters, a question about the James Webb Space Telescope, intended to show off the chatbot as a knowing savant, instead drew an inaccurate answer (maybe Bard could've Googled it). But the potential for deception remains a concern. BlackBerry speculates that nation-state services are already working on attacks based on the new AI capabilities.
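The arithmetic stumbles are a reminder that language models generate likely-looking text rather than compute. One common mitigation is the "tool use" pattern: route math questions to a deterministic calculator instead of trusting the model's generated answer. A minimal sketch of such a calculator tool (the `calculate` helper and its set of supported operators are our own illustration, not any vendor's API):

```python
# Toy "calculator tool" of the kind a chat assistant could call out to,
# so arithmetic is computed deterministically rather than generated as text.
import ast
import operator

# Whitelist of arithmetic operations the tool will perform.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def calculate(expression: str):
    """Safely evaluate a plain arithmetic expression like '2 + 3 * 4'."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expression, mode="eval"))
```

Parsing with `ast` rather than calling `eval` keeps the tool from executing arbitrary code, which matters if the expression originates from model output.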
Nabil Hannan, Managing Director at NetSPI, commented on the use and abuse of AI:
"With the likes of ChatGPT, organizations have gotten extremely excited about what’s possible when leveraging AI for identifying and understanding security issues—but there are still limitations. Even though AI can help identify and triage common security bugs faster – which will benefit security teams immensely – the need for human/manual testing will be more critical than ever as AI-based penetration testing can give organizations a false sense of security.
"AI isn’t perfect. In many cases, it may not produce the desired response or action because it is only as good as its training model or the data used to train it. As more AI-based tools emerge, such as Google’s Bard, attackers will also start leveraging AI (more than they already do) to target organizations. Organizations need to build systems with this in mind and have an AI-based 'immune system' (or something similar) in place sooner rather than later, that will take AI-based attacks and automatically learn how to protect against them through AI in real-time."
Nick Landers, NetSPI's VP of Research, addressed the commercial implications and potential of this kind of artificial intelligence:
"The news from Google and Microsoft is strong evidence of the larger shift toward commercialized AI. Machine learning (ML) and AI have been heavily used across technical disciplines for the better part of 10 years, and I don’t predict that the adoption of advanced language models will significantly change the AI/ML threat landscape in the short term – any more than it already is. Rather, the popularization of AI/ML as both a casual conversation topic and an accessible tool will prompt some threat actors to ask, 'how can I use this for malicious purposes?' – if they haven’t already.
"However, the larger security concern has less to do with people using AI/ML for malicious reasons and more to do with people implementing this technology without knowing how to secure it properly. In many instances, the engineers deploying these models are disregarding years of security best practices in their race to the top. Every adoption of new technology comes with a fresh attack surface and risk. In the vein of leveraging models for malicious content, we’re already starting to see tools to detect generated content – and I’m sure similar features will be implemented by security vendors throughout the year.
"In short, AI/ML will become a tool leveraged by both offensive and defensive actors, but defenders have a huge head start at present. A fresh cat-and-mouse game has already begun with models detecting other models, and I’m sure this will continue. I would urge people to focus on defense-in-depth with ML as opposed to the 'malicious actors with ChatGPT AI' narrative."
Cody Chamberlain, NetSPI's Head of Product, distinguishes adversarial from offensive AI:
"When considering the security gaps these new tools from Google and Microsoft present to the threat landscape, it’s best to consider security approaches based on two implications of AI in cyber: Adversarial AI and Offensive AI. When looking at Adversarial AI, the data is only as good as its training model, which opens up attack scenarios for poisoning models, introducing bias, etc. Organizations must perform extensive threat models against their implementations to combat these gaps – thinking like the hacker. When performing extensive testing of the data supply chain, organizations can better determine who can access it and how they can validate its integrity.
"On the other hand, Offensive AI can be used as a toolkit for attackers. So, to protect against malicious activity, organizations need to be able to identify the usage of AI as part of different test attack scenarios. We know that attackers are already using AI, so having the tools ready to effectively defend against these larger attacks is key. OpenAI and other researchers are developing fingerprinting methods and identifying AI-generated data as a defense tool, but efficacy could be higher. In many ways, we’re going to be implementing an AI arms race between defenders and attackers this year."
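Mr. Chamberlain mentions fingerprinting as a way to identify AI-generated data. Real detectors are trained classifiers or watermark checks; purely as a toy illustration of the shape of a statistical fingerprint, here is a crude lexical-diversity heuristic (the function names and threshold are our own, not drawn from any detection product):

```python
# Toy illustration of a statistical "fingerprint" for generated text.
# Production detectors use trained classifiers or watermarking; this
# crude type-token ratio merely sketches the idea of a textual signal.
def type_token_ratio(text: str) -> float:
    """Fraction of distinct words in the text (0.0 for empty input)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def looks_generated(text: str, threshold: float = 0.5) -> bool:
    """Flag highly repetitive text; the threshold here is arbitrary."""
    return type_token_ratio(text) < threshold
```

In practice a single lexical statistic is far too weak on its own, which is why, as Mr. Chamberlain notes, detection efficacy "could be higher": real systems combine many features or embed watermarks at generation time.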
A kind word for the bots.
Jason Kent, Hacker in Residence, Cequence Security, points out that bots can have benign as well as sinister uses:
"Twitter’s ‘Good Content Bot Service’ API will be free. The question you have to ask yourself is, why let any bot operate? Well, the answer is kind of interesting and, as per the normal flow of the Internet, it involves cats. Elon Musk has said he doesn’t like the 'bots' on Twitter.
"Just like witches, there are good bots and bad bots. So, when Elon went out on a bot hunt, he ended up pushing an agenda that ended all bots. But what if you had built a cat door that would tweet a picture of the cat, every time it went through the door? This isn’t a human doing this, this is a Twitter bot account. Not trying to sway anyone’s political opinion, tell us health advice with no grounds in science, or track anyone’s private jet.
"There are loads of good bots on Twitter, they help people find resources and bring joy to many people’s lives. Elon has had to, just like most of his other announcements, come up with a way to keep the joy in a platform that people aren’t finding much joy in and the leadership seems to be crushing.
"The good news is, cat bots stay; the bad news is we don’t know where the line is drawn and what the bad bots will be doing. Obviously, they’ll just have to find another way. It's rumored that many Twitter bots have API keys that they acquired that still work, or simply use the web front end to make calls into the platform."
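The cat-door bot Mr. Kent describes is just a small automation against the platform's API. A minimal sketch, assuming the Tweepy library and an already-authenticated client (the function names and the door-event hook are hypothetical, for illustration only):

```python
# Sketch of a benign Twitter bot: tweet whenever the cat door opens.
# Assumes an authenticated tweepy.Client is supplied by the caller;
# credentials and the physical door-sensor wiring are out of scope.
def build_status(cat_name: str, direction: str) -> str:
    """Compose the tweet text for a single door event."""
    verb = "went out" if direction == "out" else "came in"
    return f"{cat_name} just {verb} through the cat door"

def on_door_event(client, cat_name: str, direction: str) -> None:
    # tweepy.Client.create_tweet posts the status via the Twitter v2 API.
    client.create_tweet(text=build_status(cat_name, direction))
```

Nothing here sways opinions or tracks private jets; the open question the quote raises is whether such accounts will keep working API keys once the line between good and bad bots is drawn.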