It is no exaggeration to describe the federal government’s Misinformation and Disinformation Bill as a reworking of the Ministry of Truth from George Orwell’s dystopian novel, 1984.
The legislation is an overt attack on freedom of speech. NSW Solicitor-General Michael Sexton describes it as Orwellian legislation targeting ‘contestable political opinions on social media … based on the patronising assumption that members of the community cannot make a judgement about those opinions but must be protected from the obvious inadequacies of their [own] judgement’.
The legislation will give the Australian Communications and Media Authority (ACMA) the power to apply an elaborate system of codes and directions to force social media companies like Google, X (formerly Twitter), and Facebook to manage (that is, censor) digital content on their platforms, and will encourage people to complain about content with which they disagree.
Under the Bill, ‘misinformation’ is content that is ‘reasonably verifiable as false, misleading or deceptive’. It need not actually cause ‘serious harm’; it need only be ‘reasonably likely’ to ‘cause’ or to ‘contribute to’ such harm (Clause 13(1)).
‘Disinformation’ is digital content that is ‘reasonably verifiable as false, misleading or deceptive’, where ‘there are grounds to suspect that the person disseminating, or causing the dissemination of, the content intends that the content deceive another person’, or the dissemination involves ‘inauthentic behaviour’, and the content is ‘reasonably likely to cause or contribute to serious harm’ (Clause 13(2)).
If the definitions covered only information that is reasonably verifiable as false, the test would be reasonably clear and ascertainable. However, extending them to information that is ‘misleading or deceptive’, terms that rely on a point of view, makes findings of misinformation/disinformation more subjective and open to ideological bias.
‘Inauthentic behaviour’ is said to mean dissemination of content from ‘an automated system in a way that is reasonably likely to mislead an end-user’ (Clause 15). Presumably, this refers to content created using artificial intelligence.
These clauses are vague, open to very subjective interpretations, and set low thresholds, such as mere ‘grounds to suspect’ an intent to deceive.
Causing harm?
Clause 14 sets out the ‘serious harm(s)’ from which the public is to be protected. Content touching on these matters may be subject to scrutiny and refused dissemination.
‘Harm to the operation or integrity’ of any ‘government or electoral or referendum process’. Does this mean that exposing a corrupt candidate, or corrupt voting practices, or opposing the Voice referendum could be considered to cause ‘serious harm’ to the ‘electoral or referendum process’? What about freedom of political comment?
Causing ‘imminent harm to the Australian economy, including harm to public confidence in the banking system or financial markets’. Does this mean that exposing misconduct by financial institutions could be considered to cause ‘serious harm’, even if such exposure led to an outcome like the 2017 Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry?
Causing ‘harm to public health’. Would this mean that public health would have been ‘harmed’ by claims that a drug like thalidomide was causing birth defects, or by parents in the 1970s disputing the psychiatric profession’s claims that schizophrenia was caused by bad parenting?
Content with ‘significant and far-reaching consequences for the Australian community or a segment of the Australian community’ or that has ‘severe consequences for an individual in Australia.’ Really, how are social media companies to interpret this?
‘Vilification of a group in Australian society distinguished by race, religion, sex, sexual orientation, gender identity, intersex status, disability, nationality or national or ethnic origin’, or vilification of an individual because of their beliefs. There is no definition of ‘vilification’, only a reference in the Bill’s Explanatory Memorandum to the Northern Territory Anti-Discrimination Act 1992, which, along with the federal Racial Discrimination Act, has the lowest bar for vilification in Australia – ‘reasonably likely to offend’.
Not only is the absence of a definition of ‘vilification’ a glaring omission, but its appearance in this Bill looks like a backdoor way to impose a federal form of anti-vilification legislation after attempts by Attorney-General Mark Dreyfus to achieve such legislation appear to have hit a brick wall of resistance.
Yet if digital media platforms fail to interpret these clauses as ACMA interprets them, and consequently fail to prevent what ACMA decides is serious harm, they will risk enormous fines. Inevitably, rather than wait for any active intervention by ACMA, digital-platform providers will censor and suppress content pre-emptively to avoid enforcement action and serious penalties.
Definitions are lacking, harms have low, subjective thresholds, and clauses in the Bill are wide open to interpretation – all when drawing a clear line between truth and falsehood is not always simple, and there can be legitimate differences of opinion as to how content (facts) should be interpreted.
The Bill won’t protect Australians; it will undermine the freedom of speech and belief that is the necessary condition for a properly functioning democracy.
Patrick J. Byrne is a former national president of the National Civic Council, and Terri M. Kelleher is a former national president of the Australian Family Association.