At a glance.
- Cozy Bear's election-themed phishbait.
- Content moderation, alleged successes.
- Content moderation, alleged failures.
- Vaccine misinformation.
- An informal fallacy in the wild (this time on the center-left, but left, right, and center are all potential victims).
- Meme speculation.
- Motiveless phishing?
Cozy Bear phishes with election-themed bait.
Last week Microsoft observed an ongoing spearphishing campaign launched by Nobelium, the threat actor behind the SolarWinds supply-chain attack (also tracked as APT29, that is, Russia's SVR). It's a case of spoofing, of inauthenticity. The operators obtained email credentials belonging to USAID, the US Agency for International Development, which they then used to distribute malicious links that purported to go to "new documents on election fraud" produced by former President Trump. Volexity has also been tracking the campaign, and attributes it "with moderate confidence" to APT29. The apparent goal is espionage, not persuasion. The inflammatory link is just clickbait.
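Lures of this kind typically disguise where a link actually leads. As a minimal illustration (this is not Microsoft's or Volexity's tooling, and the domains below are invented), a short Python sketch can flag anchors whose visible text displays one host while the href actually points at another, a classic tell in credential-harvesting email:

```python
# Sketch: flag links whose displayed URL and real destination disagree.
# A hypothetical heuristic, not any vendor's detection logic.
from html.parser import HTMLParser
from urllib.parse import urlparse
import re

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # (visible_text, real_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            text = "".join(self._text).strip()
            shown = re.search(r"https?://([^/\s]+)", text)
            actual = urlparse(self._href).hostname or ""
            # Suspicious when the link *displays* one host but *goes* to another.
            if shown and shown.group(1).lower() != actual.lower():
                self.suspicious.append((text, self._href))
            self._href = None

def audit(html: str):
    auditor = LinkAuditor()
    auditor.feed(html)
    return auditor.suspicious

# A spoofed "document" link (hosts are made up for the example):
print(audit('<a href="http://evil.example/docs">https://usaid.gov/report</a>'))
# → [('https://usaid.gov/report', 'http://evil.example/docs')]
```

Real campaigns of course have many other evasions (URL shorteners, open redirects, homoglyph domains), so a check like this is one signal among many, not a filter.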
Content moderation in social media: a perceived success story...
The German Marshall Fund sees what it regards as good news: the rates at which people "engage with deceptive content" on Twitter and Facebook seem to have fallen off. "On Twitter, the drop was dramatic: a 60 percent quarterly decline in shares of content from deceptive sites by 'verified accounts.' On Facebook, interactions with content from deceptive sites dipped by 15 percent in the first quarter of 2021, reflecting a comparable decline in interactions with all U.S.-based sites on the platform. The changes were likely driven by platforms’ suspension of accounts and pages that have propagated deceptive content in the past as well as policy change such as prioritizing original news reporting within Facebook’s News Feed."
The German Marshall Fund's New Deal project divides the bad information actors into two large classes, "False Content Producers," who have a track record of repeatedly publishing demonstrably false content, and "Manipulators," who fail to gather and present information responsibly. Thus the New Deal's standards look for epistemic and rhetorical quality as opposed to authenticity or automation.
...but uneasiness about content moderation grows.
One of the surprising features of journalistic opinion and reporting on content moderation in social media is the relative paucity of anything approaching the sort of free-speech absolutism that one might have expected from media quarters. The Platformer notes that some of this may be changing, as outrage over what was permitted seems to be shifting toward outrage over what's being taken down. WIRED is feeling some uneasiness about the matter: “As companies develop ever more types of technology to find and remove content in different ways, there becomes an expectation they should use it. Can moderate implies ought to moderate. After all, once a tool has been put into use, it’s hard to put it back in the box. But content moderation is now snowballing, and the collateral damage in its path is too often ignored.”
Such concerns are unlikely to inhibit authoritarians. Russia, Computing reports, has fined Google, Facebook, and TikTok for failure to take down prohibited content.
Vaccine misinformation and disinformation.
The San Diego Union-Tribune reports that questionable claims about COVID-19 vaccines (specifically that they induce infertility) have been to a significant extent responsible for the relatively low rate at which Southern Californians are getting vaccinated. This is for the most part ordinary viral misinformation.
Some deliberate disinformation, however, has been spread about certain vaccines, particularly those developed in the West. The motive for much of it appears to be commercial, an attempt to discredit competitors to the advantage of Russian-produced vaccines. Radio Free Europe | Radio Liberty describes one network, led by a Russian businesswoman and political operator, that's taken a leading role in fabricating and disseminating such stories.
Argumentum ad hominem.
Regarding someone as a jerk isn't a refutation of everything said jerk says, even if you're correct in your judgment that the jerk is a jerk. (And who among us should cast the first stone? Where's the person who's never a jerk?) Thinking that a statement must be false because it was made by a bad person is one of the traditional informal fallacies logicians have warned about for centuries: the argumentum ad hominem, the argument against the person. Students of informal logic often dismiss the fallacy as too obvious for anyone actually to fall for. Sure, the speaker's character has nothing to do with the truth of what the speaker says, but who would fall for that?
But apparently people do. The recent decision to investigate (or re-investigate) the origins of COVID-19, and specifically to examine the hypothesis that the emergence of the virus could be traced to the Wuhan Institute of Virology, provides an example. The Washington Post describes how the theory that the virus leaked from the Wuhan lab went from "mocked to maybe." People who dismissed the suggestion when it was associated with former President Trump and his competitive animus with respect to China are now willing to entertain it, since President Biden has told the Intelligence Community to look into it.
Deciding that information is unlikely to be credible because the source is in no position to know anything about the matter is quite another thing. That's not an argumentum ad hominem. For one thing, it doesn't warrant concluding that the questionable assertion is false, merely that it's not proven.
Meme speculation: the rise of the influencers, the madness of crowds, and the fall of the gatekeepers.
The Wall Street Journal reported a surge this week in some meme stocks, that is, a rapid rise in share prices driven by speculative chat in various social media. AMC Entertainment and BlackBerry, both popular with individual retail investors, are among the meme movers. Also surging some ten percent was Samsung Publishing, which holds a stake in the studio behind the kiddie song "Baby Shark"; Bloomberg associates that stock's movement with a casual Elon Musk tweet about the song. Increased liquidity the US Federal Reserve introduced into American markets last year is seen as the root cause of the speculative jumps, with social media providing powerful amplification. GameStop's rise in January and the short squeeze it produced was the first famous instance of meme speculation.
This would seem to be an instance of the madness of crowds, the crosscurrent to that wisdom of crowds in whose invisible hand prediction markets, for example, put their trust. Even beneficiaries of a speculative bubble can sometimes see the madness, and find it troubling. The Form 8-K AMC filed with the US Securities and Exchange Commission this morning is a good example of corporate candor: AMC knows that its performance has nothing to do with the surge in its market cap. The filing explains:
"The market prices and trading volume of our shares of Class A common stock have recently experienced, and may continue to experience, extreme volatility, which could cause purchasers of our Class A common stock to incur substantial losses. For example, during 2021 to date, the market price of our Class A common stock has fluctuated from an intra-day low of $1.91 per share on January 5, 2021 to an intra-day high on the NYSE of $72.62 on June 2, 2021 and the last reported sale price of our Class A common stock on the NYSE on June 2, 2021, was $62.55 per share. During 2021 to date, daily trading volume ranged from approximately 23,598,228 to 1,253,253,550 shares. Within the last seven business days, the market price of our Class A common stock has fluctuated from an intra-day low of $12.18 on May 24, 2021 to an intra-day high of $72.62 on June 2, 2021, and we have made no disclosure regarding a change to our underlying business during that period, other than with respect to an additional financing."
That is, the price might as well be written on running water, or written in the air. So in some respects, at least, influencers, even ones who are simply enjoying a goof (Mr. Musk is hardly likely to have offered "doo doo doo doo doo" as investment advice, for example, although one hesitates to be dogmatic), are giving traditional gatekeepers a run for their money. Since most of the things any of us knows are things we've taken more or less on trust from people whom we judge to know what they're talking about, this isn't a good sign for the fight against misinformation.
Phishing for influence?
The US Attorney for the District of Massachusetts has charged Diana Lebeau, 21, of Cranston, Rhode Island, with attempted unauthorized access to a protected computer. Specifically, she's alleged to have sent phishing emails to candidates for election, their spouses, and members of their campaign staffs in which she posed as either a campaign manager or "Microsoft's Security Team." These would be familiar phishing techniques, and the emails looked like credential harvesting attempts. But what's odd about the affair is its apparent lack of motive. The US Attorney's media release explicitly makes the point that, "According to the charging document, Lebeau did not act with financial or political motive or to benefit any foreign government, instrumentality, or agent," which makes one wonder what in the world she was up to.
Saryu Nayyar, CEO of Gurucul, is one of those wondering. She emailed, "This is an unexpected phishing campaign outcome in that the charging document does not indicate Lebeau acted with financial or political motives to “foreign government, instrumentality, or agent”. Is that the only motive we care about? This appears to be a politically motivated attack albeit domestic. The political climate in this country is at an all-time high of toxicity and opposition. Extreme views incite extreme action. So what was this woman's attack motive? Inquiring minds want to know..."
Whatever was going on, whether delusion, influence-seeking, or art for art's sake, others wrote us to point out the risks of social engineering. Irene MO, Senior Consulting Associate with Aleada, wrote, "Phishing emails have become a popular hacking tool a long time ago and they are becoming more and more difficult to recognize. Politics provides a great opportunity since email campaigns are a popular and effective campaign communication and fundraising technique, making political parties vulnerable. It is important for every business or organization to educate their team about phishing campaigns and the risk posed by phishing emails."
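Education aside, impersonation of a sender like "Microsoft's Security Team" is also partly automatable to detect. As a hedged sketch (not Aleada's guidance, and the header values below are invented for the example), one check a gateway or awareness tool can apply is whether the message's Authentication-Results header shows SPF and DKIM passing:

```python
# Sketch: pull the SPF/DKIM verdicts a receiving mail server recorded in the
# Authentication-Results header (RFC 8601). Sample message is fabricated.
from email import message_from_string

def auth_results(raw: str) -> dict:
    msg = message_from_string(raw)
    results = {}
    header = msg.get("Authentication-Results", "")
    for clause in header.split(";")[1:]:  # first segment is the authserv-id
        parts = clause.strip().split("=", 1)
        if len(parts) == 2:
            mech, verdict = parts
            results[mech.strip()] = verdict.split()[0]
    return results

raw = (
    "Authentication-Results: mx.example; spf=fail smtp.mailfrom=evil.example;"
    " dkim=none\n"
    "From: Security Team <security@microsoft.com>\n"
    "Subject: Password expiry\n\n"
    "Click here to keep your password.\n"
)
print(auth_results(raw))  # → {'spf': 'fail', 'dkim': 'none'}
```

A "fail" or "none" verdict for a message claiming to come from a major provider's security team is a strong phishing signal, though authentication alone can't catch lookalike domains that pass SPF and DKIM for themselves.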
And Rajiv Pimplaskar, CRO of Veridium, suggested some countermeasures:
“Phishing attacks, such as whaling or whale phishing, are on the rise targeting wealthy, prominent or powerful individuals designed to extract information via social engineering and / or installing malware and conducting a potential secondary action. Such attacks are not just limited to a few individuals, and everyone is potentially a target. Studies show that social engineering and Man In The Middle (MITM) attacks can be highly effective even against many Multi-Factor Authentication (MFA) solutions. Organizations and users should be actively aware of this threat and should aggressively reduce their dependence on passwords. Verizon’s Data Breach Investigations Report (DBIR) shows that over 80% of data breaches rely on stolen credentials. Use of passwordless authentication methods such as Phone as a Token or FIDO2 can help. These strong authentication methods are regarded as “unphishable,” as they are based on public-key cryptography and establish a trusted relationship between the server and the user’s authenticator. Also, passwordless authentication methods offer less friction as compared to password-based alternatives and can improve user experience and productivity.”
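The reason methods like FIDO2 are regarded as "unphishable" is that the authenticator binds its response to the origin it is actually talking to. The toy sketch below illustrates only that binding property: real FIDO2 uses per-site public-key pairs and attestation, whereas here a per-origin HMAC key stands in for the key pair so the example stays self-contained. The origins are made up.

```python
# Simplified illustration of origin-bound challenge-response. An HMAC key per
# origin stands in for FIDO2's per-site key pair; the property shown is the
# same: a response produced for a look-alike site never verifies at the real one.
import hmac, hashlib, os

class Authenticator:
    """Holds one secret per registered origin (stand-in for a key pair)."""
    def __init__(self):
        self._keys = {}

    def register(self, origin: str) -> bytes:
        key = os.urandom(32)
        self._keys[origin] = key
        return key  # real FIDO2 would hand the server the *public* key only

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The authenticator binds the response to the origin it actually sees.
        key = self._keys.setdefault(origin, os.urandom(32))
        return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, origin: str, key: bytes):
        self.origin, self.key = origin, key

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self.key, self.origin.encode() + challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

auth = Authenticator()
server = Server("login.example.com", auth.register("login.example.com"))

challenge = os.urandom(16)
# Legitimate login: the user really is on login.example.com.
assert server.verify(challenge, auth.sign("login.example.com", challenge))
# Phishing: a look-alike origin relays the challenge; verification fails.
assert not server.verify(challenge, auth.sign("1ogin.example.com", challenge))
```

A password, by contrast, is origin-blind: whatever site the user types it into receives a credential that works everywhere, which is exactly what a man-in-the-middle phishing page exploits.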