We need misinformation law
The case for is better than the case against
What people are missing in the current welter of criticism of the Australian government’s misinformation legislation is the fact that authorities are at every moment deciding what people will and won’t see. Right now. Those authorities are algorithms, and they are filtering content, choosing what to remove, what to ignore and what to promote in an unceasing blur. There is no functional difference between neglect and active censorship in a world of billions ruled by algorithm: no one hears what you are saying. In such an environment, “freedom of speech” means little.
There are problems with the legislation, and I know why people don’t like the idea of any authority deciding what is true and what is false.
But I believe the case for regulatory intervention is better than the case against.
What’s all this about, then?
The legislation is halfway through federal parliament - it still has to pass the Senate, which may be tough - and is an amendment to the Broadcasting Services Act. Contrary to some of the commentary, it doesn’t require the government to determine truth, and it doesn’t empower the government to take down particular pieces of content.
Here are the main elements of the new law:
- The Australian Communications and Media Authority (ACMA), an independent government agency that already deals with broadcasters, is responsible for administering the system
- Internet platforms like Facebook, YouTube, TikTok and X are required to establish industry codes that transparently deal with misinformation, flagging/downweighting/removing content that is both:
  - verifiably false or deceptive, and
  - likely to cause serious harm (a toy sketch of this two-pronged test follows the list)
- News, satire, academic work, art and religion are all exempt from the misinformation filter
- ACMA has to be satisfied the codes and enforcement are sufficient, and has powers to enforce compliance (fines, forcing its own codes on platforms)
- Platforms must cooperate with ACMA in providing data, which may also be used for academic research on misinformation
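To make the structure of that test concrete, here is a minimal sketch in Python. To be clear, this is my illustration, not anything in the Bill: the field names and categories are invented. The point it shows is that the test is conjunctive - falsity alone is not enough, and harm alone is not enough - and that exempt categories never enter the filter at all.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these fields and names are mine, not the Bill's.
@dataclass
class Content:
    category: str                        # e.g. "news", "opinion", "other"
    verifiably_false_or_deceptive: bool  # prong 1
    likely_serious_harm: bool            # prong 2

EXEMPT_CATEGORIES = {"news", "satire", "academic", "art", "religion"}

def is_actionable_misinformation(c: Content) -> bool:
    # Exempt categories bypass the misinformation filter entirely.
    if c.category in EXEMPT_CATEGORIES:
        return False
    # Conjunctive test: BOTH prongs must hold.
    return c.verifiably_false_or_deceptive and c.likely_serious_harm

# A false but harmless claim is not actionable under this sketch:
assert not is_actionable_misinformation(Content("other", True, False))
# A false claim likely to cause serious harm is:
assert is_actionable_misinformation(Content("other", True, True))
# The same claim inside news reporting is exempt:
assert not is_actionable_misinformation(Content("news", True, True))
```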
Anne Twomey also wrote an excellent piece in The Conversation summarising the situation, albeit with a different conclusion from mine. The headline is “How the federal government’s misinformation bill might impede freedom of speech”. Impeding some kinds of speech is precisely the purpose.
The case against
Accusations of government censorship are melodramatic. ACMA operates independently of government, and has a history of regulating Australian broadcasters. There is no absolute requirement for content removal unless that content is inauthentic disinformation (ie enemy states, bots or other bad actors spreading harmful lies).
The way the legislation is written makes it clear that ACMA’s role is graduated and its assessment of compliance with industry codes will be general.
But there are legitimate concerns.
There is confusion in the explanatory memo that accompanies the legislation about the status of opinion. Opinion is not exempt, I assume because serious and harmful misinformation can easily shelter under that categorisation. The memo should make it explicit that expressions of preference and judgement on their own cannot be misinformation, given that they report the mental state of the person expressing them. For example, “I believe the government are a pack of donkeys” reports my disapproval. That is not a matter for anyone external to me, and the metaphor must not be taken literally.
There are worries the legislation may define its target digital services too broadly and end up imposing regulation on small players with no impact on misinformation, eg hobby sites and review sites.
Critics have pointed out that all the heavy lifting is left to the platforms themselves. This is necessary for political and practical reasons: ACMA doesn’t want to be in the weeds of individual pieces of content, and has no ability to moderate at this level in any case. So the platforms will police themselves, according to standards developed with other platform companies. The problem here is that to avoid heavy fines - up to 5% of total revenue - the platforms will be incentivised to remove anything vaguely suspicious. This could lead to excessive suppression of legitimate content.
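A toy expected-cost comparison shows why the incentive runs toward over-removal. Every number here is invented purely for illustration; only the 5%-of-revenue ceiling comes from the legislation.

```python
# Invented numbers, purely illustrative of the asymmetry.
annual_revenue = 10_000_000_000      # a hypothetical $10bn platform
max_fine = 0.05 * annual_revenue     # legislation's ceiling: 5% of total revenue
p_fine_if_left_up = 0.001            # assumed tiny chance a borderline post contributes to enforcement
cost_of_wrongful_removal = 50        # assumed goodwill/appeals cost of one false positive

expected_cost_leave_up = p_fine_if_left_up * max_fine  # $500,000
expected_cost_remove = cost_of_wrongful_removal        # $50

# Even at a 0.1% enforcement risk, removal is four orders of magnitude
# cheaper, so a cost-minimising platform removes anything vaguely suspicious.
print(f"leave up: ${expected_cost_leave_up:,.0f} vs remove: ${expected_cost_remove:,.0f}")
```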
I can also see the risk of the ACMA invigilation role snowballing when dealing with platforms that are fierce free speech advocates, such as X. It is easy to imagine a recalcitrant platform forcing ACMA into the most extreme version of graduated response in the legislation. This would not be ideal.
Finally, there is the problem of truth. What counts as correct changes as our knowledge and beliefs about the world change. This is the basis of science and the open society. Many people are not happy to leave this up to digital platforms to adjudicate. I get it, but I think they are already acting as judges, through their power to remove, promote or ignore content. Only, at the moment, they are unconstrained by veracity. Platforms currently allow blatant misinformation to outcompete accuracy, with damaging effects on society.
Why on earth do you support this?
I am in a unique position regarding the issue of regulating misinformation.
I act as the independent reviewer of transparency reports filed by digital platforms in Australia under the voluntary Misinformation and Disinformation Code. This is a code similar to those mandated in the legislation. Its current signatories are Meta, Google, Microsoft, Apple, TikTok, Adobe, Redbubble, Twitch and Legitimate.
The transparency reports document what each signatory has done in the preceding 12 months to combat mis/disinformation on their platforms. They include commentary, data, and information about policies. Working together with platforms, we have managed to improve the usefulness of the reports over three years. There is still a lot of room for improvement.
The signatories put effort into the reports, and have mostly listened to my requests for trended data and consistency in reporting. But what I observe is that the local Australian arms of global businesses are naturally subservient to their US headquarters, and their ability to respond to Australian requests is sometimes limited.
I think that regulatory backup for the Code is needed. Without the intervention of a regulator with teeth, the evolution of the industry-wide response to misinformation will stall. At the moment, recalcitrant members can leave or be removed without further penalty. For example, X was booted from the Code last year after it removed the ability for users to report political misinformation, and then refused to respond to investigators.
I don’t think the legislation is perfect, and it should be modified to address the most serious concerns. But, surprisingly to me, I support it in principle. It’s an unprecedented intervention for unprecedented technology that is having profound impacts on our societies. We don’t have to sit passive through the upheaval.
Have a great weekend
Hal



The moment "senders" are forced to put skin in the game, this issue goes away. The problem is the internet was a 4 layer protocol stack without any attending economic incentives or disincentives. In other words an economic stack. Plus it needed (needs) more layers. Risk is completely one-sided; namely on the receiver. We simply have to reverse engineer APSO (anonymous, permissionless, settlement-free, and open) to create a system that can be universally accessible to all. "Open and free" has failed.