Misinformation In A Digital World – Who Decides What’s Right?
There’s no escaping the fact that coronavirus has dominated conversation for the past 11 months. From newspaper headlines to social media updates, we’re constantly consuming a barrage of pandemic-related posts, unsure of what’s real and what isn’t.
Less reported, though, are the restrictions platforms have placed on what can and can’t be said about the virus. With content tagged as ‘misinformation’, some continue to challenge where such a label should be applied – and who has the right to decide what we can and can’t say.
Earlier this year, TalkRadio was temporarily de-platformed by YouTube after allegedly violating its guidelines on Covid-19 content – by airing views opposed to lockdown and highlighting the collateral damage such measures could cause.
TalkRadio is already governed by strict balance rules set by Ofcom, and the incident sparked a free speech debate in which cabinet minister Michael Gove condemned the move, stating: “I don't believe in censorship. I think it's absolutely right that people should ask questions.”
So, it raises the question: is information being filtered multiple times as a result? Facebook, Google and Twitter are most people’s go-to sources for news on current events. But often, these platforms simply surface reposted content from other news outlets – many of which are already fact-checked and adhere to stringent editorial guidelines.
Big tech is under constant pressure to moderate its content – pressure that has intensified with incidents such as Cambridge Analytica’s involvement in UK elections and the headlines around Russian interference in the 2016 US election. These episodes brought with them a realisation that it is possible to manipulate the public via social media.
In the Netflix documentary The Social Dilemma, programmers and creators who quit big tech companies over their moral principles describe the methods used to create echo chambers: algorithms that feed users only information matching their existing interests and beliefs.
Other tactics include so-called ‘rabbit holes’: if a user clicks on a subject that is conspiratorial in nature, they are served ever more content on that topic without actively choosing it.
As a result, the US Congress has come down hard on Facebook over such ‘meddling’. The platform – and its peers – have promised to ensure it doesn’t happen again, but only time will tell.
How is big tech controlling what we read?
Since March 2020, there have been debates raging across Twitter, Facebook, and other public forums about the hard truth behind the official COVID-19 figures shared with the public. While we have no way of knowing – for certain – which reports are right or wrong, it’s important that such discussion is at least permitted to exist.
In a bid to address such disagreements, Facebook, Twitter and Google have pledged to work alongside a coalition of governments – including the UK and Canada – to fight ‘misinformation’ and conspiracy theories around vaccinations.
Formed by the British fact-checking charity Full Fact, the group seeks to establish cross-platform standards for tackling misinformation. But with tech giants spearheading the campaign – and much to gain from a reliance on all-things-tech – who decides what is misinformation, and what is ‘healthy debate’?
I have personally seen instances where scientists quote peer-reviewed evidence, only to fall foul of fact-checking services and have their views removed or tarnished. In turn, this raises a larger question: who polices the fact checkers – and can they be manipulated too?
But have curbs on freedom of movement morphed into censorship of speech?
Interestingly, ‘free speech’ platform Parler disappeared from the Apple and Google app stores overnight after the tech giants cut ties with it following the clashes at the US Capitol, the site having been identified as the platform on which the attacks were co-ordinated.
Citing ‘lack of moderation’ as the motivation, a cynic might argue that it was a coordinated takedown of a new and popular rival.
Ironically, some conflicting opinions pointed to Google-owned YouTube as an enabler.
Of course, these are private companies, and we can’t force them to act one way or another – but it should be the responsibility of governments to regulate and legislate against monopolies, even monopolies of information. Australia, for example, is trying to make Google pay publishers for sharing news articles – prompting Google to threaten to withdraw its search service from the country.
At a time of high stress and increasing mental health issues, society should be permitted to hear – and consider – opposing beliefs and views, without manipulation from marketing-based programming.
Preventing the manipulation of opinion by companies and foreign powers remains a huge challenge for society. Executed with the right strategy, such manipulation could be catastrophic and destabilising for countries and political parties – highlighting the fragility of the window through which we now view the world.
Lockdowns force us to rely more heavily on social media for news and social interaction – which is why it’s more important than ever that free speech exists in its purest form, along with uncensored investigative journalism.
With the pandemic having seen many countries curtail the right to free movement, such legislation must be questioned regularly – and openly. Not to encourage people to disobey the rules, but to avoid repeating mistakes of the past.
The removal of misinformation is, of course, important – but we must be vigilant to ensure the price we pay is not too high.