At a glance.
- Wikipedia "degraded" in Pakistan for blasphemous content.
- Wiper attacks on Ukrinform.
- What a Russian media ban means: perspective from a banned outlet.
- Industry perspectives on ChatGPT and other advanced AI.
Wikipedia "degraded" in Pakistan for blasphemous content.
TechCrunch reports that Pakistan's Internet authority yesterday "degraded" Wikipedia over content it hosts that the authority considers, in Islamic terms, sacrilegious. The Pakistan Telecommunication Authority asked Wikipedia to remove the blasphemous content and gave the site forty-eight hours to comply before being completely blocked in Pakistan. The online encyclopedia did not immediately respond to the takedown demand.
Wiper attacks on Ukrinform.
The Ukrainian Computer Emergency Response Team (CERT-UA) on Friday reported identifying five distinct strains of wiper malware in the networks of the Ukrinform news outlet. The strains, and the systems they affected, were: CaddyWiper (Windows), ZeroWipe (Windows), SDelete (Windows), AwfulShred (Linux), and BidSwipe (FreeBSD). The Russian hacktivist group "CyberArmyofRussia_Reborn" claimed credit for the infestations in its Telegram channel. BleepingComputer says that two of the strains, ZeroWipe and BidSwipe, are either novel malware or existing, known strains that CERT-UA is tracking under unfamiliar names.
Two weeks ago a Russian cyberattack interfered briefly with Ukrinform online broadcasts. The interest in Ukrinform offers some confirmation of the Ukrainian view that Russian cyber operations are more closely connected with influence operations than they are with tactical operations.
What a Russian media ban means: perspective from a banned outlet.
Meduza, the expatriate Russian news service that publishes in Russian and English from its headquarters in Latvia, was banned in Russia last week. Russia's Prosecutor General's Office designated the service an illegal, "undesirable organization" on the grounds that Meduza's activities "pose a threat to the foundations of the Russian Federation's constitutional order and national security." It's not, apparently, strictly speaking illegal to read Meduza in Russia (although as a practical matter it's unwise to rely on Moscow's concepts of legality), but interacting with Meduza in other ways is decidedly risky, and clearly proscribed by Russian law.
Meduza offers a primer on what users in Russia (and nota bene, travelers, it's "users in Russia," not just "Russian users") might face should they run afoul of the law. "Liking" and "commenting" are grey areas, maybe not illegal stricto sensu, but it's probably safer not to do them. The same can be said of forwarding Meduza newsletters (but printing them is probably worse, and would be construed as intent to distribute). Linking to or reposting Meduza content is clearly illegal, and carries criminal penalties. "The first time a Russian national is convicted of sharing content from an “undesirable” organization, the penalty is a fine of 5,000 to 15,000 rubles (about $70 to $215). Subsequent offenses carry the risk of felony prosecution, and violators can face up to four years in prison, community service, restrictions of freedom, or a raised fine of up to 500,000 rubles (more than $7,000)."
Industry perspectives on ChatGPT and other advanced AI.
ChatGPT has drawn considerable attention for its potential abuse in the production of deep fakes that could be employed in both fraud and disinformation. Adrien Gendre, Chief Tech and Product Officer at Vade, described the linguistic capabilities of the AI:
"Hackers will use ChatGPT to develop multi-lingual communications with unsuspecting users in business supply chains. Many of the most notorious cybercriminal gangs and state-sponsored cybercriminals operate in countries like Russia, North Korea and other foreign countries. The positive of this, from a cybersecurity perspective, is that it makes personal communications from these threat actors—for example, in phishing and spear phishing emails—somewhat easier for end users to detect. With ChatGPT, that barrier is gone. This technology can develop written communications in any language, with perfect fluency. It will be very difficult for users to recognize that they are potentially communicating via email with an individual who barely speaks or writes in their language. The damage this technology will cause is almost a certainty."
Benjamin Fabre, CEO at DataDome, also cautions that the technology has poorly understood malign potential. "ChatGPT and tools like it are a slippery slope; they make it easy to build sophisticated bots -- for good or for bad. And we all know that bad bots cause chaos. For example, ChatGPT could be leveraged to run unprecedented, massive influence fraud: bots can generate millions of realistic messages that automatically post across social media platforms or in mainstream media comments to attack companies, politicians, or countries."
But it's not just a potential for disinformation. ChatGPT has many other uses. It's been used to write code, and its potential as a smart search tool has also been noted. Jerrod Piker, Competitive Intelligence Analyst at Deep Instinct, calls it a "Swiss Army knife." For example, "The crypto community has latched on to this tool, and they are creating lots of useful applications such as trading bots and crypto blogs. One such trading bot was created to identify entry and exit points using simple moving averages of cryptocurrencies. This type of application could serve to automate the process of buying and selling cryptocurrencies for the masses with scary accuracy." And it has other benign uses. "Other recent positive uses include using ChatGPT to write a sample smart contract and patch vulnerabilities in existing smart contracts. On the other side of the coin, it could also be used to exploit those same vulnerabilities in smart contracts. Overall, the bot is very efficient at writing statements, blogs, and theses, and in one case, a crypto community member got it to write a song about losing all your money in crypto."
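The moving-average entry-and-exit logic Piker describes can be sketched briefly. The bot's actual code isn't public, so everything below is a hypothetical illustration: the window sizes, the crossover rule, and the example prices are all assumptions, not details from the tool itself.

```python
# Hypothetical sketch of a simple-moving-average (SMA) crossover signal,
# the kind of entry/exit logic the quoted trading bot reportedly used.
# Window sizes and the crossover rule are illustrative assumptions.

def sma(prices, window):
    """Simple moving average over each trailing `window` of prices."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

def crossover_signals(prices, short=3, long=5):
    """Return (price_index, 'buy'|'sell') where the short SMA crosses the long SMA."""
    short_sma = sma(prices, short)
    long_sma = sma(prices, long)
    # Drop the short SMA's earliest values so both series start at the
    # same price index (the first index where the long SMA is defined).
    short_sma = short_sma[long - short:]
    signals = []
    for i in range(1, len(long_sma)):
        prev_diff = short_sma[i - 1] - long_sma[i - 1]
        diff = short_sma[i] - long_sma[i]
        if prev_diff <= 0 < diff:
            signals.append((i + long - 1, "buy"))   # short SMA crossed above: entry
        elif prev_diff >= 0 > diff:
            signals.append((i + long - 1, "sell"))  # short SMA crossed below: exit
    return signals
```

On a flat price series that starts rising, the short SMA overtakes the long SMA and the function flags a single "buy" at the crossover point. Piker's caveat below about training data applies equally here: a signal rule is only as good as the prices fed into it.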
Piker notes that, like any artificially intelligent system, ChatGPT is dependent for its accuracy, its utility, and its plausibility on the data used to train it. "As is the case with any AI-based tool, the system is only as accurate as the data upon which it is modeled. There is always a chance that either its data source has been corrupted in some way or is just not accurate or true. In these cases, users may run the risk of getting inaccurate parameters for crypto trading, which could cause buyers to lose money."
A great deal also depends on the tasking: well-constructed requests produce better results than blunt, rambling, or ill-intentioned ones. Michael Covington, VP of Portfolio Strategy at Jamf, describes the way these varying results are generated:
"As with most technologies, the actual user of ChatGPT can significantly influence what the tool produces. With a thoughtful and well-phrased question, for example, the chatbot can produce an eloquent, detailed, and accurate response. On the other hand, a misleading, vague, or malicious task can produce very different results. The tool is only as effective as the skillfulness of the user that wields it.
"While the technology behind ChatGPT has proven to aid in the advancement of malicious tooling (e.g., by creating compelling phishing content), there's no reason it can't also be used to better cybersecurity.
"As with most products, the first applications and use cases addressed establish the brand. ChatGPT is still in its infancy, and it's clear that the community of users is pushing its limits and testing its effectiveness in various areas. What we're learning is that there are some places where the tool is more effective than the community was expecting (example: test results where the chatbot performs better than the average test taker).
"It will be interesting to see how ChatGPT, and other technologies like it, can be used to better the outcomes it produces. If ChatGPT can develop an effective phishing attack, can ChatGPT also be used to identify its own phishing attacks?"
Randy Lariar, Practice Director of Big Data, AI and Analytics at Optiv, offers some thoughts on the likely direction ChatGPT will take, and the effects it will have on various technology sectors and disciplines:
"ChatGPT will help close the cybersecurity talent shortage & skills gap – Given ChatGPT’s ability to help users quickly and easily access knowledge, search for answers and write code, the technology will help close the cybersecurity talent shortage by making a single security professional more effective. It also will help reduce the cybersecurity skills gap by enabling junior personnel to take on the responsibilities of more senior professionals.
"ChatGPT will increase the risk of phishing – The AI model will provide a way for non-English speaking attackers and those with limited English to craft a phishing email with perfect spelling and grammar. It also will make it much easier for all bad actors to emulate the tone, word selection and style of writing of their intended target – which will make it harder than ever for recipients to decipher whether an email is legitimate.
"The new wave of AI is here to stay, so organizations need to look at it holistically – All companies should aim to use their data to make better decisions. ChatGPT and other next-gen AI tools can help to accelerate this. Companies should have an offensive strategy to use AI technology to improve their business. However, they also need a defensive strategy for how they’ll secure themselves from evolving security risks. Companies need to be thinking about AI as they consider updating their policies, procedures and protocols to protect against bad AI-enabled actors."