The Online Safety Bill (OSB) has been presented to the public as an attempt to protect children from online grooming and abuse and to limit the reach of terrorist propaganda.
Child protection, however, does not seem to be its primary focus. The real objective of the proposed Online Safety Act (OSA) appears to be narrative control.
To understand where the legislation is heading, we first have to interpret it. Even seasoned legal experts have struggled to get to grips with it. For a reasonably voluminous piece of legislation (the Bill alone runs to 134 pages), it is almost completely devoid of relevant definitions.
The proposed Act, as it currently stands in Bill form, is an abstract jumble of ill-defined and seemingly meaningless terms, requiring practically limitless legal interpretation before anyone can even begin to consider what it means. A thorough breakdown of this mess has been attempted by CyberLeagle:
The draft Online Safety Bill is nothing if not abstract […] [it] resolutely eschews specifics […] The detailing of the draft Bill’s preliminary design is to be executed in due course by secondary legislation, with Ofcom [broadcasting regulator] guidance and Codes of Practice to follow.
In other words, the OSB is full of references to legal concepts and terms which no-one can decipher—including the Members of Parliament who will vote on it. Once it becomes law, it will then be adapted through secondary legislation and Ofcom regulation, as yet unwritten.
Unless stopped, MPs will be creating a law that has no defined parameters. This will allow the Government to insert whatever objectives they wish after it is enacted, as they have done with the Coronavirus Act.
The use of secondary legislation to give effective meaning to the OSA will greatly reduce parliamentary scrutiny.
MPs can reject secondary legislation, but they cannot amend it. The secondary legislation defining the Act's scope can therefore be continually redrafted and resubmitted until the Government gets whatever it wants.
This is a complete betrayal of the democracy most people imagine they live in. It is difficult to envisage how the opacity of the OSB is anything other than deliberate. It suggests a plan to hide legislation from scrutiny before it is made law.
This raises the suspicion that the Government knew that openly stating its full intentions for what will presumably become the Online Safety Act would elicit stiff opposition from Parliament and the public. It appears the Government has consequently attempted to obscure that intent.
However, we can still discern the Government's objectives if we consider the content of the OSB, the arguments presented in support of it, and the aims of those making them. When we do, what is revealed is so deeply undemocratic that it lends further credence to the view that this legislation has been misrepresented to Parliament.
All we can hope is that MPs will inform themselves and try to understand the nature of this dictatorial bill before they pass it into law. If they don't yet understand how pernicious the proposed Act is, they need their constituents to inform them.
The Online Safety Act — A Wholly Unconvincing Argument
In the OSB Explanatory Notes, the Government states:
The Online Safety Bill establishes a new regulatory regime to address illegal and harmful content online, with the aim of preventing harm to individuals.
We are immediately faced with two separate, undefined concepts: illegal content and harmful content. While illegal content is also harmful content, harmful content is not necessarily illegal content and is supposedly distinct from it. However, we should note that, for the purposes of the proposed Act, the aim with regard to sanctioning both "illegal" and "harmful" content appears to be one and the same: preventing harm to individuals.
This will become crucial later on. Please bear it in mind as we progress.
Leaving aside the lack of definition, we find that three subcategories of harmful content are covered in the OSB: illegal content, content that is harmful to children, and content that is harmful to adults.
There are also two types of services in scope of the legislation. User-to-user services means the social media platforms, like Facebook and Twitter; search engines means Google, Bing, etc. We are also told that some of these online service providers will meet the threshold for being a Category 1 service. That threshold isn't defined.
We then get to the claimed justification for the imminent Online Safety Act:
The prevalence of the most serious illegal content and activity online is unacceptable, and it threatens the United Kingdom’s national security and the physical safety of children.
The Government claims that, under the proposed Act, it is illegal content and activity that will be treated as unacceptable. This is clarified further when the Government states that the regulator will have:
Powers in relation to terrorism content and child sexual exploitation and abuse (CSEA) content.
This, then, suggests itself as the content which the Government considers "illegal", as indeed it undoubtedly should be. With regard to potentially harmful content, the Government states:
The Bill also imposes duties on such providers in relation to the protection of users’ rights to freedom of expression and privacy. Providers of user-to-user services which meet specified thresholds (“Category 1 services”) are subject to additional duties in relation to content that is harmful to adults.
In addition to illegal content, the Government claims that the major social media platforms will also have—at this stage unspecified—"duties" in relation to content that is deemed to be "harmful to adults". The Government claims it will also protect freedom of expression (including free speech).
As we shall see, this is a self-contradictory claim by the Government. The duties that will be imposed by this legislation will obliterate online freedom of speech and expression.
This has caused consternation among some legal experts. Ultimately, the OSB states that some of this content harmful to adults must be taken down by the user-to-user services. What's more, the OSB creates a powerful role for the responsible Secretary of State (as advised by the relevant Ministry), who will effectively decree what content is to be taken down.
Legal professionals have struggled to reconcile how this can possibly be done for content that is not illegal, while protecting important freedoms. CyberLeagle notes:
The Secretary of State is able (under a peculiar regulation-making power that on the face of it is not limited to physical or psychological harm) to designate harassing content (whether or not illegal) as priority content harmful to adults […] Content is non-priority content harmful to adults if its nature is such that 'there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities' […] [T]he ‘non-priority content harmful to adults’ definition […] appears to have no discernible function in the draft Bill […] if it does have an effect of that kind, it is difficult to see what that could be intended to be.
Perhaps CyberLeagle can see it but just can't believe it. There is no commitment in the OSB to protect freedom of expression or speech. Non-priority content "harmful to adults" and priority content "harmful to adults" are indistinguishable in the wording of the Bill, and both are treated the same as "illegal content". Once this is understood, the intent of the OSB becomes evident.
Before we address the touted excuse that this legislation is needed to tackle online child sexual exploitation and abuse (CSEA)—and it is a mere excuse—we first need to be clear about what the UK Government means by national security. It does not just mean terrorism.
National security is a broad umbrella term for a whole raft of policy areas. Of course, it is not defined in the Bill itself. We must look elsewhere.
In the 2018 National Security Capability Review, the term covered terrorism, extremism and instability; the erosion of the rules-based international order; the undermining of democracy and "consensus"; technology and technological development; cybersecurity; the economy and the financial system; public health; and the environment (climate change). This is by no means an exhaustive list, but you get the idea of what national security means in government circles.
Therefore, while the stated aim of the OSB is to prevent illegal activity online, that activity encompasses anything which the Government judges to threaten any of these national security policy areas. To reiterate, the aim of the legislation is to treat "illegal" and "harmful" content identically.
With regard to CSEA, the Government has used the propaganda technique known as card-stacking. It has presented something everyone can agree upon—that child abuse is wicked—in order to induce people, including our representatives in Parliament, to believe that the legislation is necessary. It most certainly is not.
Everyone, other than paedophiles, wants to protect children online. The OSA, if enacted, will do nothing to assist in this effort.
Paedophiles have been, and are being, prosecuted for online abuses in increasing numbers. Law enforcement already has the legal power and the technological capability to detect and arrest online paedophiles. Resource shortages are the problem, and the proposed Act does not address them.
The UK Government freely admits that online CSEA overwhelmingly occurs on the dark web. Yet the supposed solution advanced in the OSB is to place a responsibility upon the social media giants of the surface web (the unhidden internet) to police everyone's social media activity.
If the Government is so worried about child grooming on the Category 1 social media platforms, then a sensible start to tackling the problem would be for it to insist that Facebook—by far the biggest platform—remove its dark web site. As long as it remains online, child predators will continue to access Facebook via the dark web route. This makes catching them much harder for law enforcement. Again, the OSB offers nothing on this front to address something which genuinely is an unacceptable risk.
The Government claims that the legislation is intended to tackle online CSEA—yet it doesn't. The Government has avoided taking the steps that would actually work towards that goal, instead simply appending a vacuous claim to the narrative it is selling. It is offering child protection but delivering censorship.
The other supposed no-brainer of a talking point used to market the OSB is the assertion that it is designed to tackle terrorist propaganda and so-called online radicalisation. To date, the British state has shown little interest in removing real terrorist propaganda, which has been widely circulated online for nearly two decades. YouTube (owned by Google) is among the many platforms that openly host terrorist-related material.
The Government and its agencies have long had both the authority and the ability to remove online terrorist incitement, but haven't. To tell us now that they must do something about it is not credible. Clearly, this is just another sales technique to promote the legislation.
Following the murder of Sir David Amess MP, the political class was quick to call for "David's Law"—as the Online Safety Bill has been dubbed—to tackle the alleged problem (22:18) of people using the internet anonymously. This is based upon the spurious idea that it is internet use which leads to radicalisation. There is no evidence to support that claim.
The UK Government is presenting us with an absurd notion. It seems to be suggesting not only that paedophiles and terrorists will migrate to the surface web, instead of what is, for them, the much safer dark web, but also that they will meekly register under their real names if told to do so by the authorities.
There is no reason to believe that the OSB is designed to stop the sharing of either terrorist or CSEA content. It won't stop criminals of any kind from using the dark web to access illegal content or to commit crimes on social media, especially while the social media platforms maintain the means for terrorists and child abusers to do so undetected. Nor will it force criminals of any kind suddenly to obey the law.
The Government could potentially stop the sharing of terrorist-related content on platforms like YouTube, but to do so requires enforcement. There has been none to date, so it isn't at all clear how this proposed legislation will end such dissemination.
The Online Safety Act — The Entirely Dependent Regulator
The UK Government's Online Harms White Paper, which led to the Online Safety Bill, stated that an independent regulator would be appointed. The OSB repeats this claim, and Britain's broadcasting regulator Ofcom was subsequently appointed as the Online Safety Regulator. Despite that adjective, Ofcom is in no way independent of government, nor of a plethora of commercial interests.
Not only is Ofcom "directly accountable" to the UK Parliament, it is funded by many of the broadcasters it currently regulates and it is "sponsored" by the Department of Digital, Culture, Media and Sport (DCMS), among other government agencies and departments.
Ofcom is currently the regulator for the video-sharing platforms (VSPs) where terrorist-related material is openly hosted. Under the OSB, it is to receive additional counter-terrorism powers, but, since it hasn't shown any inclination to use those it already possesses, why should anyone imagine it is about to start wielding any new ones?
When we consult the Ofcom board's declared register of interests, any tenuous notion of independence evaporates. Of the 40 board members in all (spread across Ofcom's executive, content and advisory boards), 11 have financial ties to the BBC and 26 are either currently, or were formerly, in government roles.
Other interests represented by Ofcom board members include Google, GlaxoSmithKline (via the Wellcome Trust), Akamai (the global cybersecurity and content-hosting giant), numerous media consultancies and other commercial enterprises that stand to profit from Ofcom "regulations".
The only people Ofcom appears to be independent of are the public. This is crucial because, despite the claim that the OSA will impose a duty on the tech giants to operate safely, it is actually about taking down individual pieces of content posted by ordinary users.
During the DCMS Joint Parliamentary Scrutiny Committee hearings, written evidence was submitted from the DCMS, which stated:
The Bill is entirely centred on systems and processes, rather than individual pieces of content […] The focus on robust processes and systems rather than individual pieces of content has a number of key advantages […] a focus on content would be likely to place a disproportionate burden on companies […] companies would be incentivised to remove marginal content. The focus on processes and systems protects freedom of expression […] The regulator will not make decisions on individual pieces of content.
Yet, while the Government maintains that it is not attempting to establish control over content, the overwhelming focus of the text of the OSB is upon nothing but content. The Government appears to be foisting the responsibility for policing the internet onto the social media giants. Simultaneously, the Government will define the "duties" which determine how the "user-to-user" services are to manage that content and "take down" content that the Government doesn't approve of.
The Online Safety Act — How the Censorship will Work
Section 98 of the OSB discusses the duties pertaining to disinformation. The concept of disinformation is mentioned solely in this brief section. Yet disinformation is the primary focus of the whole Bill. The Bill does nothing to combat either CSEA or terrorism—but it does establish the basis for censorship of cyberspace.
The Government will identify the sorts of disinformation that preoccupy it by means of secondary legislation and Ofcom regulations stipulated at a later date. But first, the OSB attempts to hide this draconian censorship agenda in plain sight.
The OSB leaves disinformation and misinformation entirely undefined. Instead, these concepts are enveloped by the term content that is harmful to adults.
Section 98 of the OSB places a duty on Ofcom, after obtaining advice from its own newly established internal committee, to specify how user-to-user and search engine services should "deal with disinformation and misinformation". Section 98 (4) (b) then notes that Ofcom has a power "under section 49 to require information of a kind mentioned in subsection (4) of that section, so far as relating to disinformation and misinformation".
The OSB doesn't mention either the term disinformation or misinformation in Section 49 (4) at all. That subsection appears to concern nothing more than Ofcom's power to require information from services for the purpose of preparing an annual transparency report. However, Section 49 (4) (e) stipulates that service providers must also provide:
Information about systems and processes which a provider uses to deal with illegal content, content that is harmful to children and content that is harmful to adults, including systems and processes for identifying such content, and — (i) in the case of a user-to-user service, taking down such content.
References to claimed disinformation and misinformation are tucked neatly away inside the broader notion of content that is harmful to adults. This leaves readers, including scrutinising MPs, wrongly believing that disinformation isn't within scope. In reality, that is precisely what "taking down such content" refers to.
Section 137 (2) clarifies the position as follows:
References in this Act to “taking down” content are to any action that results in content being removed from a user-to-user service or being permanently hidden so users of the service cannot encounter it (and related expressions are to be construed accordingly).
Once the Online Safety Act is established in law, the Secretary of State or Ofcom can, at their discretion, label the types of content they don't like as content that is harmful to adults. Category 1 providers will then have a duty to establish systems to take down such content.
Section 46 is entitled Meaning of "content that is harmful to adults", etc. The Secretary of State has complete control over the censorship agenda. Pursuant to Section 46 (2) (b), any content that is claimed to be disinformation or misinformation will be targeted for removal wherever it is:
[o]f a description designated in regulations made by the Secretary of State as priority content that is harmful to adults.
This confirms that the Government intends to use secondary legislation to list the topics for censorship after the OSA is enacted. In addition, the big tech platforms will be empowered to police freedom of speech. Section 46 (3) declares that content is also harmful to adults whenever the provider:
[…] has reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities.
If a user-to-user service doesn't want to breach the regulations, it will have to establish systems which err on the side of caution, pre-emptively banning content. These services will be compelled to ban and/or shadowban (make invisible to searches) any content which they suspect the Secretary of State or Ofcom might disapprove of. The sketch below illustrates the economics of that incentive.
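As a purely hypothetical illustration (the OSB prescribes no particular implementation, and every name and number below is an assumption), here is a minimal sketch of how a platform's automated moderation might set its take-down threshold once the expected penalty for leaving "harmful" content up dwarfs the cost of wrongly removing lawful speech:

```python
# Hypothetical sketch only: nothing here comes from the OSB or any real
# platform. It illustrates why asymmetric liability produces
# pre-emptive over-removal.

MISS_COST = 100.0            # assumed expected penalty for leaving "harmful" content up
FALSE_TAKEDOWN_COST = 1.0    # assumed cost of wrongly removing lawful content

def takedown_threshold(miss_cost: float, false_takedown_cost: float) -> float:
    """Remove content when the expected penalty exceeds the expected takedown cost:
    p * miss_cost > (1 - p) * false_takedown_cost,
    which rearranges to p > false_takedown_cost / (miss_cost + false_takedown_cost)."""
    return false_takedown_cost / (miss_cost + false_takedown_cost)

def moderate(harm_probability: float) -> str:
    """Decide what to do with a post, given an estimated probability of 'harm'."""
    if harm_probability > takedown_threshold(MISS_COST, FALSE_TAKEDOWN_COST):
        return "take down"   # or shadowban: hide from searches and feeds
    return "leave up"

# With a 100:1 penalty asymmetry the threshold is ~0.0099, so a post
# judged even 2% likely to be "harmful" is removed pre-emptively.
print(moderate(0.02))  # -> take down
```

The point is not the particular numbers, which are invented, but the shape of the incentive: the larger the regulatory penalty relative to the cost of a wrongful removal, the lower the threshold falls, and the more lawful speech gets swept away.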
Though it is comprehensively obscured within the OSB, there is no doubt as to the objective. The House of Lords Communications Committee expressed concern about the censorship outlined in the OSB. In response, the Government said:
Where harmful misinformation and disinformation does not cross the criminal threshold, the biggest platforms (Category 1 services) will be required to set out what is and is not acceptable on their services, and enforce the rules consistently. If platforms choose to allow harmful content to be shared on their services, they should consider other steps to mitigate the risk of harm to users, such as not amplifying such content through recommendation algorithms or applying labels warning users about the potential harm.
None of this is expressly stated in the OSB. The proposed Online Safety Act makes the sharing of information, such as the article you are currently reading, subject to state censorship via the major social media platforms. The decision as to whether or not it is deemed disinformation or misinformation is entirely that of the Government, of its appointed arm's-length quango—Ofcom—or of multinational corporations.
All of the powers in the Bill allegedly designed to tackle illegal content are equally applicable to content which does not cross the criminal threshold. The determination of the potential harms caused will also consider whether the offending content threatens the United Kingdom’s national security.
Questioning the consensus on climate change, asking about vaccine safety and efficacy, and challenging government fiscal policy are all to be scrutinised by the state and its industry partners. If they don't like the look of something, they will take it down.
There is nothing good about this legislation. No government should have the power simply to censor the people and stop them freely sharing information and ideas. Such a government has no interest in representative or any other kind of democracy. It is a tyranny. Yet that is the intent of the proposed Online Safety Act.
It is imperative everyone who reads this contact their MP and politely but firmly demand that they vote against this dictatorial legislation. This kind of bill makes a mockery of the lives lost that we have just remembered with such reverence.
The Online Safety Act exemplifies what they fought against and died to protect us from. It is an act of betrayal.