At a glance.
- EU approves Digital Markets Act.
- TSA tweaks cybersecurity directives for railroads.
- Defamation law in Australia.
- AI guidelines from the US Defense Innovation Unit.
EU Parliament committee approves Digital Markets Act.
Bloomberg reports that on Tuesday the European Parliament’s Committee on the Internal Market and Consumer Protection approved a draft of the Digital Markets Act, a new law restricting top US tech companies’ operations in the EU. The legislation applies to companies with “gatekeeper” status, which include Amazon, Facebook, Google, Microsoft, and Apple. It targets competition by requiring that a company’s messaging or social media apps be interoperable, so that users don’t feel they must use a particular platform just to connect with their friends. The law would also ban behavioral targeting of ads to children. A company found in violation could be fined up to 20% of its global annual sales. European Parliament lead negotiator Andreas Schwab explained, “The current competition rules are not enough. They allow the digital giants to fully exploit their market power and impose their own rules on the markets. The Digital Markets Act will ban these unfair practices,” Euractiv reports. The Parliament will negotiate the law with EU member states and the Commission early next year.
TSA tones down new rail transit directives.
The US Transportation Security Administration has decided to modify upcoming cybersecurity directives for railroad and rail transit entities, softening deadlines and issuing recommendations rather than requirements. The Federal News Network explains that the new directives are part of TSA’s recent efforts to improve cybersecurity in the transportation sector. After the cyberattack on Colonial Pipeline, TSA drew criticism from industry leaders and Republican lawmakers for issuing emergency directives without industry input. This time the agency took industry feedback into account in creating the rail transit directives, extending the incident reporting window from twelve to twenty-four hours and the deadline to complete an incident response plan from sixty days to six months. In addition, unlike the pipeline requirements, these directives will be a public document.
Australian official wins social media defamation case.
On Wednesday, Australia’s defense minister Peter Dutton won a defamation case against refugee advocate Shane Bazzi, whom he had sued over a tweet calling Dutton a “rape apologist” for comments the minister had made about the sexual assault of a former government staff member. The New York Times reports that critics are concerned the verdict sets an alarming precedent for limiting ordinary citizens’ speech on social media. Australia is known for its strict defamation laws, but even so, this case stands out for penalizing someone with no political standing. “It’s consistent with the theme that this government is content in taking a very heavy-handed approach to online speech that it doesn’t like,” said Michael Douglas, a senior lecturer in private law at the University of Western Australia. Dutton himself has been vocal about his desire to control defamatory content on social media, and Prime Minister Scott Morrison recently described social media as a “coward’s palace.” As Douglas put it, “Cases like these are a warning that, unless something changes, we’re going to see more and more cases like this, and every Australian should tread carefully before they do a quote retweet and call a politician a name.”
Defense Innovation Unit publishes new AI directives.
The US Defense Innovation Unit, responsible for helping defense organizations adopt commercial innovation, has released new directives on implementing the Pentagon’s “Responsible AI Guidelines” in its commercial prototyping and acquisition operations. The guidelines, the product of fifteen months of consultation with AI experts, are intended to help the unit adhere to the five principles of ethical AI use recommended by the Defense Innovation Board in 2020. John Stockton, co-founder of Quantifind, one of the companies that provided input on the guidelines, told C4ISRNET, “These guidelines show promise for actually accelerating technology adoption, as it helps identify and get ahead of potentially show-stopping issues.” The guidelines aim to clarify the end goals and risks of AI programs, increase confidence in AI standards, and improve evaluation, prototyping, and adoption methods.