Over the past decade, Western governments, increasingly concerned by the challenge posed to their geopolitical policies by independent media organisations and by individual comment on social media platforms, have poured huge resources into building a 'disinformation industrial complex'.
The disinformation industrial complex is the collaboration of government agencies, the mainstream media, so-called 'fact checkers' and non-governmental organisations which claim to be the arbiters of 'trust'.
Here we take you through the key events that led to the building of a censorship regime now at the point of denying any questioning of the political decisions, policies and actions which have brought us to the brink of global conflict.
David Cameron tells the United Nations General Assembly that something must be done to prevent extremism - not just violent extremism - from appearing online.
He makes it clear that he considers 'extremism' to include narratives on global events which are counter to his own.
Providing "information operations" for the British Army, 77th Brigade includes the following units (since renamed and restructured):
- Media Operations Group (Volunteers) (MOG) - Media Operations
- Security Capacity Building Team (SCBT) - Military Capacity Building
- 15 Psychological Operations Group (15 POG) - Psychological Warfare
- Military Stabilisation Support Group (MSSG) - Stabilisation and Conflict Prevention.
The Ministry of Defence claims that the 77th Brigade operates only abroad.
The Poynter Institute launches the International Fact-Checking Network which will set a "code of principles" for fact checking organisations. The IFCN is a register of "fact-checking" organisations which platforms such as Google and Facebook will use to vet potential suppliers when awarding "fact-checking" contracts.
Amber Rudd meets with representatives of Google, Microsoft, Twitter and Facebook. Following the meeting she says:
My starting point is pretty straightforward. I don’t think that people who want to do us harm should be able to use the internet or social media to do so. I want to make sure we are doing everything we can to stop this.
It was a useful discussion and I’m glad to see that progress has been made.
We focused on the issue of access to terrorist propaganda online and the very real and evolving threat it poses.
I said I wanted to see this tackled head-on and I welcome the commitment from the key players to set up a cross-industry forum that will help to do this.
In taking forward this work I’d like to see the industry go further and faster in not only removing online terrorist content but stopping it going up in the first place. I’d also like to see more support for smaller and emerging platforms to do this as well, so they can no longer be seen as an alternative shop floor by those who want to do us harm.
The letter is signed by Hugh Milward, senior director, corporate, external and legal affairs, Microsoft UK; Nick Pickles, UK head of public policy and government, Twitter; Richard Allan, VP public policy EMEA, Facebook; and Nicklas Lundblad, VP public policy Europe, Middle East, Russia and Africa, Google.
It says:
Thank you for the constructive discussion today on the challenges that terrorism poses to us all.
We welcome the opportunity to share with you details of the progress already made in this area and to hear how the UK government is developing its approach in both the online and offline space. Our companies are committed to making our platforms a hostile space for those who seek to do harm and we have been working on this issue for several years. We share the government’s commitment to ensuring terrorists do not have a voice online.
We believe that companies, academics, civil society, and government all have an interest and responsibility to respond to the danger of terrorist propaganda online—and as an industry we are committed to doing more.
The German government approves a bill that punishes social networking sites if they fail to swiftly remove 'illegal' content such as 'hate speech' or defamatory 'fake news'.
German Justice Minister Heiko Maas says that companies providing online platforms are responsible for removing hateful content, and that the new bill will not restrict freedom of speech. He says:
Just like on the streets, there is also no room for criminal incitement on social networks ... The internet affects the culture of debate and the atmosphere in our society. Verbal radicalization is often a preliminary stage to physical violence.
Europol’s European Counter Terrorism Centre (ECTC) hosts its first high-level Conference on Online Terrorist Propaganda. Over 150 participants gather at Europol’s headquarters in The Hague to discuss a wide variety of related topics.
Participants include members of the ECTC Advisory Group on Terrorist Propaganda, representatives of the EU Commission and EU Council, academia and law enforcement practitioners from Europe and the US.
Damian Collins MP, chair of the Digital, Culture, Media and Sport Select Committee, asks Facebook to tackle fake news in the run-up to the UK general election on 8 June.
Facebook publishes adverts in The Times, The Guardian and The Daily Telegraph, amongst others, listing ten "things to look out for" when deciding whether a story is genuine, including checking the article date and website address, as well as making sure it isn't intended to be satire.
Facebook says it has already removed "tens of thousands" of fake accounts and that it has set up systems to monitor the repeated posting of the same content.
Theresa May leads calls at the G7 meeting in Sicily to set up an industry-led forum to deal with 'extremist' content online. The official statement following the meeting says:
The G7 calls for Communication Service Providers and social media companies to substantially increase their efforts to address terrorist content ... We encourage industry to act urgently in developing and sharing new technology and tools to improve the automatic detection of content promoting incitement to violence, and we commit to supporting industry efforts in this vein including the proposed industry-led forum for combating online extremism.
Theresa May travels to Paris to meet French President Emmanuel Macron. There she continues to press for an industry-led 'forum' to deal with 'unacceptable' content online. She says:
The counter-terrorism cooperation between British and French intelligence agencies is already strong, but President Macron and I agree that more should be done to tackle the terrorist threat online.
In the UK we are already working with social media companies to halt the spread of extremist material and poisonous propaganda that is warping young minds.
And today I can announce that the UK and France will work together to encourage corporations to do more and abide by their social responsibility to step up their efforts to remove harmful content from their networks, including exploring the possibility of creating a new legal liability for tech companies if they fail to remove unacceptable content.
We are united in our total condemnation of terrorism and our commitment to stamp out this evil.
Facebook, YouTube, Twitter and Microsoft release a joint statement which says:
Today, Facebook, Microsoft, Twitter and YouTube are announcing the formation of the Global Internet Forum to Counter Terrorism, which will help us continue to make our hosted consumer services hostile to terrorists and violent extremists.
... The new forum builds on initiatives including the EU Internet Forum and the Shared Industry Hash Database; discussions with the U.K. and other governments; and the conclusions of the recent G7 and European Council meetings. It will formalize and structure existing and future areas of collaboration between our companies and foster cooperation with smaller tech companies, civil society groups and academics, governments and supra-national bodies such as the EU and the U.N.
The statement highlights cooperation with 'partners' such as the Center for Strategic and International Studies, the Anti-Defamation League and the Global Network Initiative "to identify how best to counter extremism and online hate, while respecting freedom of expression and privacy."
The Global Internet Forum holds its first meeting in San Francisco, where "representatives from the tech industry, government and non-governmental organisations are coming together to share information and best practices about how to counter the threat of terrorism online."
In her comments about the meeting, British Home Secretary Amber Rudd once again reminds us that 'terrorism' includes 'extremism', echoing David Cameron's words of three years previously.
The World Socialist Web Site reports that:
New data compiled by the World Socialist Web Site, with the assistance of other Internet-based news outlets and search technology experts, proves that a massive loss of readership observed by socialist, anti-war and progressive web sites over the past three months has been caused by a cumulative 45 percent decrease in traffic from Google searches.
The World Socialist Web Site has obtained statistical data from SEMrush estimating the decline of traffic generated by Google searches for 13 sites with substantial readerships. The results are as follows:
* wsws.org fell by 67 percent
* alternet.org fell by 63 percent
* globalresearch.ca fell by 62 percent
* consortiumnews.com fell by 47 percent
* socialistworker.org fell by 47 percent
* mediamatters.org fell by 42 percent
* commondreams.org fell by 37 percent
* internationalviewpoint.org fell by 36 percent
* democracynow.org fell by 36 percent
* wikileaks.org fell by 30 percent
* truth-out.org fell by 25 percent
* counterpunch.org fell by 21 percent
* theintercept.com fell by 19 percent
Andy Pryce takes part in ‘DEMOCRACY AND PROPAGANDA: Can independent media defend universal values?’, a two-day event on "fake news" held at the Hilton Hotel, Kyiv. The event is organised jointly by the European Endowment for Democracy and the EU Eastern Partnership, which, it turns out, is an FCO programme that "works to counter and reduce the effect of destabilising disinformation”.
Reports of censorship by social media companies become a regular occurrence:
Erica Anderson, Partnerships Manager at Google News Lab, announces that Google will partner with the International Fact-Checking Network (IFCN) at the Poynter Institute to “fact-check” news stories that appear in search results.
Anderson says:
With so much information available around the clock and across devices, being able to understand at a glance what’s true and what’s false online is increasingly important.
The Poynter Institute for Media Studies is openly funded by Soros’ Open Society Foundations.
In a statement, Twitter says:
This decision was based on the retrospective work we've been doing around the 2016 U.S. election and the U.S. intelligence community’s conclusion that both RT and Sputnik attempted to interfere with the election on behalf of the Russian government. We did not come to this decision lightly, and are taking this step now as part of our ongoing commitment to help protect the integrity of the user experience on Twitter.
Eric Schmidt, the Executive Chairman of Google’s parent company Alphabet, says during a Q&A session at the Halifax International Security Forum in Canada that the company will “engineer” specific algorithms for RT and Sputnik to make their articles less prominent in search results.
He says:
We are working on detecting and de-ranking those kinds of sites – it’s basically RT and Sputnik ... We are well aware of it [Russian 'propaganda'], and we are trying to engineer the systems to prevent that [their content appearing high up in search results]. But we don’t want to ban the sites – that’s not how we operate.
Schmidt claims that he is “very strongly not in favour of censorship,” but says that he has faith in “ranking”.
The Trust Project describes itself as "a consortium of top news companies" including the dpa news agency, The Economist, The Globe and Mail, Hearst Television, the Independent Journal Review, Haymarket Media, the Institute for Nonprofit News, Italy’s La Repubblica and La Stampa, Mic, Reach Plc and The Washington Post.
Search engines and social media companies are described as "external partners".
They say they aim to produce "trust indicators": standardised disclosures "that provide clarity on a news organization’s ethics and other standards for fairness and accuracy, a journalist’s background, and the work behind a news story", and which can be fed into search engines so that "quality news" can be brought to the top of search results.
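To make the idea concrete, here is a minimal Python sketch of how standardised disclosures might be embedded as machine-readable markup on an article page and then used by a search engine to nudge "quality news" up the rankings. It is purely illustrative: the field names, scoring and weighting are hypothetical assumptions, not the Trust Project's actual specification.

```python
# Illustrative only: these field names, the scoring and the weighting are
# hypothetical, not the Trust Project's actual specification.
import json

article_markup = {
    "@type": "NewsArticle",
    "headline": "Example story",
    "publisher": "Example News Co",
    # hypothetical machine-readable "trust indicator" disclosures
    "ethicsPolicy": "https://example.com/ethics",
    "correctionsPolicy": "https://example.com/corrections",
    "authorBackground": "https://example.com/staff/jane-doe",
}

def trust_score(markup: dict) -> int:
    """Count how many disclosure fields a crawler can find on the page."""
    indicators = ("ethicsPolicy", "correctionsPolicy", "authorBackground")
    return sum(1 for field in indicators if field in markup)

def adjusted_rank(base_relevance: float, markup: dict) -> float:
    """Toy ranking tweak: promote results that carry more disclosures."""
    return base_relevance * (1 + 0.1 * trust_score(markup))

print(json.dumps(article_markup, indent=2))
print(adjusted_rank(1.0, article_markup))  # 1.3 with all three disclosures present
```

The point of the sketch is simply that once disclosures are standardised and machine-readable, ranking decisions can be automated around them.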
'Experts' from Google, Facebook and Twitter appear before Yvette Cooper's Home Affairs Select Committee. The main concern of the inquiry seems to be online abuse of Members of Parliament.
In the course of giving evidence, all three platforms admit to having added staff to their censorship teams. Facebook, for example, admits to having added 3,000 staff to its 'community operations' team in the previous six months, plus 20,000 staff to its 'safety and security' team. Twitter and Google have recruited similar numbers into their equivalent teams.
On 11 January 2018, the House of Lords debates a motion moved by Baroness Kidron (Crossbench) on the “role played by social media and online platforms as news and content publishers”.
The issues debated include the role of social media and online platforms as disseminators of inaccurate or misleading information (widely referred to as ‘fake news’); the use of such applications by state and non-state actors to influence conflict narratives, sometimes through the posting of illegal content; the role of these networks in preventing online bullying and harassment; and the responsibility held by such platforms when copyrighted content is inappropriately uploaded and shared via their services.
At the heart of the issue, though, is whether social media platforms are equivalent to the 'town square' where individuals are responsible for what they say, or equivalent to a publisher who has ultimate responsibility for what is published on behalf of its contributors.
"Technologies like the internet were developed with a philosophy that connecting us together would improve people’s lives," she says, "And in many ways they have. But so far, that hasn’t been completely true for everyone."
She continues:
Just this week, a survey in the UK has found that 7 in 10 people believe social media companies do not do enough to stop illegal or unethical behaviour on their platforms, prevent the sharing of extremist content or do enough to prevent bullying.
The loss of trust is hugely damaging. And it is in all our interests to address it.
... And underpinning all of this is our determination to make the UK a world leader in innovation-friendly regulation.
Regulation that will make the UK the best place to start and grow a digital business – but also the safest place to be online.
In a speech to the House of Commons, Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announces government support for mainstream media. He says:
Today in a world of the Internet and clickbait, our press face critical challenges that threaten their livelihood and sustainability - with declining circulations and a changing media landscape.
... In 2015, for every 100 pounds newspapers lost in print revenue they gained only 3 pounds in digital revenue.
... Action is needed. Not based on what might have been needed years ago - but action now to address today’s problems.
... Our new Digital Charter sets out the overarching programme of work to agree norms and rules for the online world and put them into practice.
... And our review into the sustainability of high quality journalism will address concerns about the impact of the Internet on our news and media.
Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announces that Dame Frances Cairncross will lead the government review into the sustainability of the mainstream media.
Speaking at the Oxford Media Convention, Matt Hancock says:
There are a multitude of challenges facing our media today. Falling newspaper circulations, declining advertising revenues, changing consumption and wholesale disinformation.
Trusted, sustainable, high quality media is needed now more than ever.
Dame Frances Cairncross will bring her experience in journalism and academia to tackle these issues with a view to examine the press and protect the future of high quality journalism.
Also known as the 'fake news unit', the Rapid Response Unit is given an initial six months' funding. It brings together a “team of analysts, data scientists and media and digital experts,” armed with cutting-edge software, to “work round the clock to monitor online breaking news stories and social media discussion.”
According to the RRU's head, Alex Aiken:
The unit’s round the clock monitoring service has identified several stories of concern during the pilot, ranging from the chemical weapons attack in Syria to domestic stories relating to the NHS and crime.
For example, following the Syria airstrikes, the unit identified that a number of false narratives from alternative news sources were gaining traction online. These “alt-news” sources are biased and rely on sensationalism rather than facts to pique readers’ interest.
Due to the way that search engine algorithms work, when people searched for information on the strikes, these unreliable sources were appearing above official UK government information. In fact, no government information was appearing on the first 15 pages of Google results. We know that search is an excellent indicator of intention. It can reflect bias in information received from elsewhere.
The unit therefore ensured those using search terms that indicated bias – such as ‘false flag’ – were presented with factual information on the UK’s response. The RRU improved the ranking from below 200 to number 1 within a matter of hours.
Andy Pryce, head of the FCO's Counter Disinformation and Media Development programme, speaks at the 2018 Brussels Disinfolab, hosted by The EU Disinfolab and the Atlantic Council. He takes part in a panel discussion entitled "How to Fight Back - Lessons and Recommendations".
His panel discussion is the only one which is not live streamed.
Facebook reports that it has taken down 32 'suspicious' pages and accounts that appear to have been run by 'leftists' and 'minority activists'.
Some within the US Administration claim the pages were probably run by 'Russian agents'. Facebook says it does not know for sure.
Google launches the Google News Initiative to "elevate accurate, quality content and stem the flow of misinformation and disinformation".
Andrew Parker says in his speech:
Age-old attempts at covert influence and propaganda have been supercharged in online disinformation, which can be churned out at massive scale and little cost. The aim is to sow doubt by flat denials of the truth, to dilute truth with falsehood, divert attention to fake stories, and do all they can to divide alliances.
Bare-faced lying seems to be the default mode, coupled with ridicule of critics.
The Russian state’s now well-practised doctrine of blending media manipulation, social media disinformation and distortion with new and old forms of espionage, high levels of cyber attacks, military force and criminal thuggery is what is meant these days by the label 'hybrid threats'.
... We are committed to working with them [social media companies] as they look to fulfil their ethical responsibility to prevent terrorist, hostile state and criminal exploitation of internet carried services: shining a light on terrorists and paedophiles; taking down bomb making instructions; warning the authorities about attempts to acquire explosives precursors.
This matters and there is much more to do.
Facebook announces that it is “partnering” with the Atlantic Council, effectively NATO's think tank, “to combat election-related propaganda and misinformation from proliferating on its service.”
Facebook becomes a top donor to the Atlantic Council, alongside Western governments, NATO, various branches of the US military, and a number of major defense contractors and corporations.
The Media on Trial event, organised by the UK Column and Frome Stop War, is banned from Leeds Library by Leeds City Council. Other venues follow Leeds City Council's lead. The event is eventually hosted in the grounds of the Baab Ul Ilm mosque in Leeds.
Theresa May announces the establishment of "a new Rapid Response Mechanism (RRM)", following Britain's proposal for "a new, more formalised approach to tackling foreign interference across the G7" at the G7 Foreign Ministers' meeting the previous month.
This agreement sends "a strong message that interference by Russia and other foreign states would not be tolerated," she says.
The Rapid Response Mechanism "will support preventative and protective cooperation between G7 countries, as well as post-incident responses", including:
- co-ordinated attribution of hostile activity
- joint work to assert a common narrative and response
In a press release, Facebook says:
Today we removed 32 Pages and accounts from Facebook and Instagram because they were involved in coordinated inauthentic behavior. This kind of behavior is not allowed on Facebook because we don’t want people or organizations creating networks of accounts to mislead others about who they are, or what they’re doing.
We’re still in the very early stages of our investigation and don’t have all the facts — including who may be behind this.
Facebook, Apple, Spotify, YouTube and Pinterest remove Alex Jones and Infowars from their platforms. Twitter initially refuses to do so, but follows suit shortly afterwards under pressure from mainstream media campaigns.
Telesur English, a multi-state-funded Latin American news network, says that Facebook has removed its page for the second time this year “without any specific reason being provided.”
“This is an alarming development in light of the recent shutting down of pages that don't fit a mainstream narrative,” they say.
The UK Council for Internet Safety (UKCIS) is the successor to the UK Council for Child Internet Safety (UKCCIS). It has an 'expanded scope' to improve online safety for everyone in the UK.
The Executive Board brings together expertise from a range of organisations in the tech industry, civil society and public sector, including:
- Apple
- BBC
- Childnet
- Children’s Commissioner
- Commission for Countering Extremism
- End Violence Against Women Coalition
- GCHQ
- ICO
- Independent Advisory Group on Hate Crime
- Internet Matters
- Internet Watch Foundation
- Internet Service Providers and Mobile Operators (rotating between BT, Sky, TalkTalk, Three, Virgin Media, Vodafone)
- Microsoft
- National Police Chiefs’ Council
- National Crime Agency - CEOP Command
- Northern Ireland Executive
- NSPCC
- Ofcom
- Parentzone
- Scottish Government
- TechUK
- UKCIS Evidence Group Chair
- UKIE
- Welsh Assembly
The Government says UKCIS will contribute to its commitment to make the UK the safest place in the world to be online, and will help to inform the development of the forthcoming Online Harms White Paper.
"From January 2019," they say, "Full Fact will begin reviewing images, videos and articles on Facebook, as the third-party factchecking initiative comes to the UK for the first time."
Full Fact says it wants to tackle disinformation at its source and give people the tools to spot it for themselves.
They say they will begin checking photos, videos and articles, giving each a rating. Content with a lower rating will appear lower in Facebook news feeds, thereby reaching fewer people.
WhatsApp announces it will limit all its members to forwarding any single message up to five times in an effort to tackle the spread of false information. The previous limit was twenty times.
They say they made their decision having "carefully" evaluated the results of a half-year-long pilot.
"The forward limit significantly reduced forwarded messages around the world."
In her review of the sustainability of high-quality journalism, Dame Frances Cairncross proposes:
- New codes of conduct to rebalance the relationship between publishers and online platforms
- The Competition & Markets Authority to investigate the online advertising market to ensure fair competition
- Online platforms’ efforts to improve their users’ news experience should be placed under regulatory supervision
- Ofcom should explore the market impact of BBC News, and whether it inappropriately steps into areas better served by commercial news providers
- The BBC should do more to help local publishers and think further about how its news provision can act as a complement to commercial news
- A new independent Institute should be created to ensure the future provision of public interest news
- A new Innovation Fund should be launched, aiming to improve the supply of public interest news
- New forms of tax reliefs to encourage payments for online news content and support local and investigative journalism
The Rapid Response Unit, the Cabinet Office 'fake news unit' established in April 2018, is given permanent funding to continue its work monitoring social media and making sure the government narrative appears at the top of search rankings.
On a visit to Dublin, Facebook CEO Mark Zuckerberg admits that there is a lot more the social network can do to regulate social media content.
He told RTE News:
I think these days a lot of people don't want tech companies or any private companies to be making so many decisions about what speech is acceptable and what harmful content needs to be taken down.
So I think there is a role for a broader public debate here and I think some of these things would benefit from a more democratic process and a more active government role.
A joint Home Office / DCMS initiative, the white paper proposals include:
- A new statutory ‘duty of care’ to make companies take "more responsibility" for the content or activity on their services. This will apply to all platforms of whatever size which permit user interaction such as forums or comments, and carries the potential for massive fines and imprisonment.
- Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address this.
- Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.
- A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing, grooming and extremism.
A twelve-week consultation period ends on 1 July 2019.
Following a summit held during the summer of 2019, the BBC announces the creation of the Trusted News Initiative. The group announces several initiatives, including:
- Early Warning System: creating a system so organisations can alert each other rapidly when they discover disinformation which threatens human life or disrupts democracy during elections. The emphasis will be on moving quickly and collectively to undermine disinformation before it can take hold
- Media Education: a joint online media education campaign to support and promote media education messages
- Voter Information: co-operation on civic information around elections, so there is a common way to explain how and where to vote
- Shared learning: particularly around high-profile elections
Partner organisations at the summit include The European Broadcasting Union (EBU), Facebook, Financial Times, First Draft, Google, The Hindu, and The Wall Street Journal. Other partners are AFP, CBC/Radio-Canada, Microsoft, Reuters, and The Reuters Institute for the Study of Journalism, and they announce they are also consulting Twitter on areas of potential collaboration.
Ian Cobain, for Middle East Eye, writes:
The senior Twitter executive with editorial responsibility for the Middle East is also a part-time officer in the British Army’s psychological warfare unit, Middle East Eye has established.
Gordon MacMillan, who joined the social media company's UK office six years ago, has for several years also served with the 77th Brigade, a unit formed in 2015 to develop “non-lethal” ways of waging war.
The 77th Brigade uses social media platforms such as Twitter, Instagram and Facebook, as well as podcasts, data analysis and audience research to conduct what the head of the UK military, General Nick Carter, describes as “information warfare”.
Ursula von der Leyen becomes president of the European Commission. Her priorities include a new Digital Services Act: the EU's online safety legislation.
The Government's initial response to the Online Harms consultation rings alarm bells for many people, as the enormity of its desire to censor legal content which "has the potential to cause harm" becomes clear. No definition of the kinds of content that would "cause harm" is provided. Three days later, the full response is published.
Digital Secretary Nicky Morgan and Home Secretary Priti Patel announce the government "is minded to appoint communications watchdog Ofcom as the regulator to enforce rules to make the internet a safer place".
Zuckerberg is joined by Nick Clegg for most of his meetings with EU officials. Their aim: to help guide the forthcoming Digital Services Act.
Marianna Spring is appointed BBC 'Specialist Disinformation and Social Media Reporter' for BBC News and the BBC World Service, working with the team at BBC Trending and others across BBC News.
Nick Clegg reports on Facebook's initiative to deal with "Covid-19 misinformation" in a blog post following Facebook's launch of the "COVID-19 Information Center".
Clegg writes:
This is an evolving crisis, so as world health officials issue new guidance and warnings about COVID-19, we’ll continue working with them to ensure people have access to accurate and authoritative information across all of our apps.
Ursula von der Leyen assures the public the EU is working with social media platforms to spot and remove "misinformation". She reminds people to rely on authoritative sources for health information.
The UK government, through the Crown Commercial Service, awards a contract to five companies to "monitor and analyse media coverage of public communications activity across multiple channels, including broadcast, online and social media".
The core services required under the contract are:
Press Monitoring - monitoring of specified keywords and topics within print content, to include:
- content available in print but not available online; and
- content available online but not in print;
Online Monitoring - monitoring of news website content that includes specified keywords and topics – including monitoring of non-written content, such as infographics and imagery;
Social Media Monitoring - monitoring of social media content on feeds including, but not limited to, Twitter, LinkedIn or Facebook;
Broadcast Monitoring - monitoring content on television, radio and web broadcasts that includes specified keywords and topics; and
Human-driven Evaluation and Analysis – the selection, evaluation and analysis of the results of monitoring through human review.
The contract, to run until 31 May 2024, is worth £10 million, and is awarded to Cision Group Limited, LexisNexis, Onclusive, Press Data Limited and Unicepta UK Limited.
The public is given four weeks to provide feedback on the scope of Digital Services Act regulation of large online platforms.
Facebook launches a campaign in collaboration with Full Fact to help users "identify fake news", as it comes under increased pressure to deal with "misinformation" spreading on its platforms.
The BBC launches Project Origin, an "alliance of leading organizations from the publishing and technology worlds, working together to create a process where the provenance and technical integrity of content can be confirmed. Establishing a chain of trust from the publisher to the consumer."
UK Research and Innovation, the UK government's research funding agency, provides £7 million from its Strategic Priorities Fund for a new National Research Centre on Privacy, Harm Reduction and Adversarial Influence (REPHRAIN), bringing together researchers from the universities of Bristol, Edinburgh and Bath, King’s College London and UCL.
REPHRAIN will also work with industry, academics and the voluntary sector to develop new technologies to help human moderators tackle the spread of online disinformation and identify harms linked to online targeting and manipulation.
Digital Secretary Oliver Dowden and Health Secretary Matt Hancock agree new measures with social media platforms to limit the spread of vaccine misinformation and disinformation and help people find the information they need about any COVID-19 vaccine.
At a virtual roundtable to address the growth of vaccine disinformation, Facebook, Twitter and Google commit to the principle that no company should profit from or promote COVID-19 anti-vaccine disinformation, to flag content more swiftly, and to work with authorities to promote "scientifically accurate" messages.
In a post on the official YouTube blog, they state:
Yesterday [8 Dec] was the safe harbor deadline for the U.S. Presidential election and enough states have certified their election results to determine a President-elect. Given that, we will start removing any piece of content uploaded today (or anytime after) that misleads people by alleging that widespread fraud or errors changed the outcome of the 2020 U.S. Presidential election.
For example, we will remove videos claiming that a Presidential candidate won the election due to widespread software glitches or counting errors. We will begin enforcing this policy today, and will ramp up in the weeks to come … As always, news coverage and commentary on these issues can remain on our site if there’s sufficient education, documentary, scientific or artistic context.
The European Commission proposes a second set of regulations – the Digital Markets Act is added to the Digital Services Act. Margrethe Vestager emphasises that the updated set of rules will ensure "safe consumption and competition online".
Speaking to the European Parliament’s internal market committee, Prabhat Agarwal, an official who heads up the eCommerce unit at the European Commission’s DG Connect, states that the EU executive’s Digital Services Act attempts to realign the balance between effective content removal and preserving freedom of expression online:
It is no longer acceptable in our view that platforms take some key decisions by themselves alone without any supervision, without any accountability, and without any sort of dialogue or transparency for the kind of decisions that they’re taking ... Freedom of expression is really a key value in this.
NewsGuard's Anna-Sophie Harling announces HealthGuard, "a free tool to help you determine the credibility of websites publishing information about COVID-19 and vaccines".
The Clearing House has existed for many years, but openDemocracy's report exposes its operation for the first time. They report:
The Cabinet Office’s influence on FOI does not stop at the ICO. The Cabinet Office is also in charge of the Clearing House – a small unit that monitors inbound ‘sensitive’ requests across Whitehall and coordinates the responses of multiple departments ... FOI and Subject Access Requests by openDemocracy reveal that the Clearing House shares with a range of Whitehall departments a daily update containing the names of journalists and campaigners, the requests they have submitted and advice on how referring departments should respond.
Adobe, Arm, BBC, Intel, Microsoft and Truepic set up a coalition "to develop an end-to-end, open standard for tracing the origin and evolution of digital content".
Named the Coalition for Content Provenance and Authenticity, it claims to "address the prevalence of disinformation, misinformation and online content fraud through developing technical standards for certifying the source and history or provenance of media content."
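As a rough illustration of what "certifying the source and history or provenance of media content" involves, the Python sketch below signs a manifest containing a hash of the media and its claimed edit history, and later verifies both. It is not the coalition's actual standard: C2PA uses certificate-based signatures, whereas this self-contained example substitutes an HMAC, and all field names are hypothetical.

```python
# Conceptual sketch only: the coalition's real specification (C2PA) uses
# certificate-based signatures; here an HMAC stands in for a publisher
# signature so the example stays self-contained. Field names are hypothetical.
import hashlib
import hmac
import json

PUBLISHER_KEY = b"demo-signing-key"  # stand-in for the publisher's private key

def make_manifest(media: bytes, publisher: str, edits: list) -> dict:
    """Attach a signed claim recording the content hash and edit history."""
    claim = {
        "publisher": publisher,
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "edit_history": edits,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Check that neither the claim nor the media has been altered."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    claim_ok = hmac.compare_digest(expected, manifest["signature"])
    media_ok = claim["content_sha256"] == hashlib.sha256(media).hexdigest()
    return claim_ok and media_ok

image = b"raw bytes of a published photograph"
manifest = make_manifest(image, "Example Broadcaster", ["captured", "cropped"])
print(verify_manifest(image, manifest))        # True: provenance intact
print(verify_manifest(image + b"x", manifest)) # False: content no longer matches
```

The "chain of trust" language used by Project Origin and this coalition amounts to checks of this kind being carried out at each step between publisher and consumer.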
The Commission for Countering Extremism publishes 'Operating with Impunity - Hateful extremism: The need for a legal framework', a report written by Lead Commissioner Sara Khan and Sir Mark Rowley. The report "demonstrates how many hateful extremists are able to operate lawfully. This is due to a lack of legislation designed to capture the specific activity of hateful extremism."
Sara Khan says:
Extremist groups whether neo-fascist, neo-Nazi, Islamist or others are able to operate lawfully, freely and with impunity. They are actively radicalising others and are openly propagating for the erosion of our fundamental democratic rights. Their aim is to subvert our democracy. This is a threat to our civilised democratic order … and requires a robust, necessary and proportionate legal response.
The main topics of this online policy conference are the scope and the criteria of the Digital Markets Act, the proposal’s impact on competition in Europe and the role of the European Commission as enforcement body.
Darren Boyling, the new head of the FCDO Counter Disinformation and Media Development Programme, takes part in a panel discussion entitled "IS DISINFORMATION POWER? Direct impact on democratic societies". The event is run by the Portuguese Security Information Service.
Leaders from the UK, Canada, France, Germany, Italy, Japan, the US and EU sign a declaration containing a series of shared principles on how to tackle the global challenge of online safety. They say:
We note that despite some positive steps and technological improvement, harmful content and activity remains widespread online. This undermines our democratic values, risks the physical safety and wellbeing of children … reduces online participation and diminishes trust in the online environment.
The draft Online Safety Bill is published by the UK Government, which claims, with an amazing sleight of hand, that the Bill will "help protect young people and clamp down on racist abuse online, while safeguarding freedom of expression". The Government says the Bill will include:
- New additions to strengthen people’s rights to express themselves freely online, while protecting journalism and democratic political debate in the UK.
- Further provisions to tackle prolific online scams such as romance fraud, which have seen people manipulated into sending money to fake identities on dating apps.
- Social media sites, websites, apps and other services hosting user-generated content or allowing people to talk to others online must remove and limit the spread of illegal and harmful content such as child sexual abuse, terrorist material and suicide content.
- Ofcom will be given the power to fine companies failing in a new duty of care up to £18 million or ten per cent of annual global turnover, whichever is higher, and have the power to block access to sites.
- A new criminal offence for senior managers has been included as a deferred power. This could be introduced at a later date if tech firms don’t step up their efforts to improve safety.
The European Commission publishes its guidance on how the Code of Practice on Disinformation, the first of its kind worldwide, should be strengthened to become a more effective tool for countering disinformation.
Nadine Dorries asks social media companies to block Russian media in the UK. This is followed up with an internet-wide ban supported by all internet service providers.
Ofcom revokes RT's broadcasting licence. They say:
We have done so on the basis that we do not consider RT’s licensee, ANO TV Novosti, fit and proper to hold a UK broadcast licence.
The EU Parliament and Council reach a provisional political agreement on the Digital Services Act (DSA). Together with the Digital Markets Act, the DSA will set the standards for "a safer and more open digital space for users" and "a level playing field for companies for years to come".
The United States Department of Homeland Security announces the establishment of the Disinformation Governance Board to "protect national security by disseminating guidance to DHS agencies on combating misinformation, malinformation, and disinformation that threatens the security of the homeland".
PayPal, without explanation, suspends the accounts of a series of individual journalists and media outlets, including Consortium News and MintPress News.
The Disinformation Governance Board and its working groups are "paused" pending review, and board head Nina Jankowicz resigns, as a result of public backlash.
Department of Homeland Security Secretary Alejandro Mayorkas disbands the Disinformation Governance Board.
PayPal shuts down the accounts of the Free Speech Union, its founder Toby Young, and his opinion and news website the Daily Sceptic, with no clear explanation.
Nina Jankowicz, erstwhile head of the US Department of Homeland Security's failed Disinformation Governance Board, announces the launch of The Hypatia Project, which she conducts at the UK-based Centre for Information Resilience. The project's stated aim is to "combat gendered abuse and disinformation online". The Hypatia Project is part funded by the UK government.
PayPal reinstates the accounts of the Free Speech Union, Toby Young and the Daily Sceptic after it was accused by MPs of imposing an “orchestrated, politically motivated” ban.
The European Council gives its final approval to the Regulation on the Digital Services Act. Once the Digital Services Act comes into effect, "platforms will not only have to be more transparent, but will also be held accountable for their role in disseminating illegal and harmful content". The Regulation will apply from 17 February 2024.
More than forty fact-checking organisations across Europe agree a "professional Code" that "defines the standards of methodology, ethics and transparency required to combat misinformation effectively and with integrity".
Index on Censorship releases their report 'Surveilled and Exposed'. The report includes a legal opinion which highlights the questionable legality of the provisions within the Online Safety Bill. They say:
There has been significant commentary on the flaws of the Online Safety Bill, particularly the harmful impact on freedom of expression from the concept of the ‘Duty of Care’ over adult internet users and the problematic ‘legal but harmful’ category for online speech. Index on Censorship has identified another area of the Bill, far less examined, that now deserves our attention. The provisions in the Online Safety Bill that would enable state-backed surveillance of private communications contain some of the broadest and powerful surveillance powers ever proposed in any Western democracy.
They also warn on the implications for services providing end-to-end encrypted chat:
The Bill as currently drafted gives Ofcom the powers to impose Section 104 notices on the operators of private messaging apps and other online services. These notices give Ofcom the power to impose specific technologies (e.g. algorithmic content detection) that provide for the surveillance of the private correspondence of UK citizens. The powers allow the technology to be imposed with limited legal safeguards. It means the UK would be one of the first democracies to place a de facto ban on end-to-end encryption for private messaging apps.
The UK Government are unable to overcome opposition to the 'legal but harmful' provisions in the Online Safety Bill, and are forced to drop them entirely. Instead, they announce that 'Tier 1' companies (the social media giants) will be forced to enforce their 'community standards' as expressed in their terms and conditions.
There is still zero clarity on what this will mean in practice, since the fine print will only become clear through secondary legislation and rules written by Ofcom long after the Online Safety Bill reaches the statute book.
Google and YouTube announce a $13.2 million grant to the International Fact-Checking Network (a Poynter Institute project) to launch a new Global Fact Check Fund to support the IFCN's network of 135 fact-checking organisations from 65 countries covering over 80 languages.
Ofcom announces Gill Whitehead, a former Google and Channel 4 executive and one of the UK’s senior leaders in data and technology, will oversee its new duties as the regulator for online safety.
Gill will lead Ofcom’s Online Safety Group from April 2023, reporting to Chief Executive Dame Melanie Dawes.
Big Brother Watch entitles their report Ministry of Truth: The secretive government units spying on your speech. They say:
Secretive Whitehall units have been recording political dissent on social media under the guise of tackling misinformation, a Big Brother Watch investigation has found. Politicians, academics, activists, journalists and even members of the public have been subjected to monitoring by Whitehall officials, and an “information warfare machine” in the British Army.
Key Findings:
- Anti-fake news units in the Cabinet Office and DCMS spent much of their time monitoring social media for political dissent, under the guise of “counter-disinformation” work.
- Labour leader Sir Keir Starmer, Conservative MPs David Davis & Chris Green, journalists including Peter Hitchens and Julia Hartley-Brewer, and academics from the University of Oxford and University College London all had comments critical of the government recorded by the anti-fake news units.
- Soldiers from the Army’s 77th Brigade collated tweets from British citizens about Covid-19 at the start of the pandemic and passed them to the Cabinet Office. Troops also conducted “sentiment analysis” about the government’s Covid-19 response.
- The Rapid Response Unit [Cabinet Office] pressured a Whitehall department to attack newspapers for publishing articles analysing Covid-19 modelling that it feared would “affect compliance” with pandemic restrictions.
- RRU staff featured Conservative MPs, activists and journalists in “vaccine hesitancy reports” for opposing vaccine passports.
- The Counter Disinformation Unit [DCMS] has a special relationship with social media companies it uses to recommend content be removed. Third party contractors trawled Twitter for perceived terms of service violations and passed them to CDU officials.
- Front organisations aimed at minority communities were set up by the Research, Communications and Intelligence Unit [Home Office] to spread government propaganda in the UK.
Answering a question during 'oral questions to the Secretary of State for Defence', Ben Wallace says he will investigate the Big Brother Watch claims of domestic surveillance by the 77th Brigade.
Ofcom appoints Fadzai Madzingira as Online Safety Supervision Director. Its announcement says:
Fadzai brings a wealth of experience in online trust and safety, and joins us from her most recent role at Salesforce, where, as a Director in the Office of Ethical and Humane Use of Technology, she was responsible for managing the company's compliance with the EU Digital Services Act.
Prior to this, Fadzai was the Global Hate Speech Content Policy Lead at Meta, where she led the policy development strategy and product initiatives for hate speech policy enforcement across the company’s suite of products.
BBC Verify is transparency in action – fact-checking, verifying video, countering disinformation, analysing data and explaining complex stories in the pursuit of truth. This is our promise to consumers - we understand that their trust must be earned and we will show them how we are doing that each and every day.
Speaking to Channel 4 News, Signal president Meredith Whittaker says:
There is no way to create a back door that only the good guys can walk through. And what's being proposed here in the context of end-to-end encryption is a back door. And we know from decades of history, from decades of serious research, that there's no such thing as a safe back door. If the British police can get in, hackers can get in. If the British police can get in, hostile nations can get in. If the British police can get in, Putin can get in. The Iranian government can get in, and others wanting to do harm can get in. So it's really important that we maintain the security and integrity of these systems.
She continues:
And what is specified is a regime that would give Ofcom the power to demand that everyone in the UK download spyware that checks their messages before they're sent against a database of what is permissible to say and send and what is not permissible. And that is a precedent that authoritarian regimes are looking to the UK to set, to point to a liberal democracy that was the first to expand surveillance in the terms of the UN Human Rights Commissioner. This is unprecedented paradigm shifting surveillance and paradigm shifting not in the good way.
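The mechanism Whittaker describes is usually called client-side scanning: before a message is end-to-end encrypted, the client hashes it and checks the hash against a database supplied by a third party. The Python below is a conceptual illustration only; no real system, database or API is being quoted, and all names and values are hypothetical.

```python
# Conceptual sketch of the mechanism described above, not any real system:
# the client hashes outgoing content and checks it against a supplied database
# *before* end-to-end encryption happens. All names and values are hypothetical.
import hashlib

# A hash database pushed to the device by a third party (hypothetical).
BLOCKED_HASHES = {
    hashlib.sha256(b"example disallowed content").hexdigest(),
}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the message may proceed to encryption and sending."""
    return hashlib.sha256(plaintext).hexdigest() not in BLOCKED_HASHES

def encrypt(data: bytes) -> bytes:
    """Placeholder for the real end-to-end encryption step."""
    return data[::-1]

def transmit(ciphertext: bytes) -> None:
    print(f"Sent {len(ciphertext)} encrypted bytes.")

def send(plaintext: bytes) -> None:
    # The check runs on-device, before encryption, which is why the critics
    # quoted above describe it as a back door into private messaging.
    if not client_side_scan(plaintext):
        print("Message blocked before encryption.")
        return
    transmit(encrypt(plaintext))

send(b"hello")                        # passes the scan and is sent
send(b"example disallowed content")   # blocked before it is ever encrypted
```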
Academics in the UK publish an open letter on the UK’s Online Safety Bill, highlighting their concerns over the potential ban that the legislation will place on end-to-end encryption in the UK. They say:
In brief, our concern is that surveillance technologies [will be] deployed in the spirit of providing online safety. This act undermines privacy guarantees and, indeed, safety online.
Speaking to France Info, Thierry Breton, the European Commissioner for the Internal Market, says:
Platforms that foster content inciting hatred, revolts, and harm including property damage will be obligated to expunge such content instantly. Any lapse in compliance will be met with swift penalties. Our monitoring teams stand ready to act at a moment’s notice. Platforms unable to meet the urgency required could face not just financial penalties but also a potential ban from operating within our jurisdictions.
The UK Cabinet Office announces a contract: to "ensure communications are underpinned by effective media monitoring", the "Cabinet Office and the His Majesty (HM) Treasury Communications Team intend to procure an online platform which provides access to previously aired tv and radio broadcast footage from a range of UK national and regional channels".
The Online Safety Bill is signed off in the House of Commons, completing its journey through Parliament. The UK government claims the bill "will make the UK the safest place in the world to be online by placing new duties on social media companies".