The phenomenon of “fake news” on social media has become a recurring topic in our news cycles. In recent months it has been viewed with ever-increasing alarm by the social media companies that attempt to regulate it, by their users, and by governments around the world grappling with laws to combat it. What exactly is “fake news”, how is the law keeping up with the problem, and what are the potential solutions?
'Information Overload'
Firstly, “fake news” can be broadly defined as “false or misleading content”, which may include conspiracy theories, fabricated content, deep-fake video media, misleading headlines and flat-out falsehoods. Such content may be shared without any intent to deceive (misinformation), or created deliberately to deceive and manipulate audiences (disinformation). Fake news is a problem because it causes harm: it spreads lies that may incite hatred and violence, hijack public debate and undermine the democratic process.
Self-evidently, the complexity and volume of fake news make it very hard to regulate. It is “amplified” on social media because platforms like Twitter and Facebook use algorithms that promote echo chambers, creating a kind of “information overload” in which bots impersonating humans can target audiences with low-quality information and draw them into extreme narratives. For example, it is estimated that up to 15% of Twitter users were bots in 2017.
What Stage Are We At Now?
Currently, social media companies set their own guidelines for regulating fake news. Facebook’s Community Standards, for example, decline to remove false information outright because the company states there is “a fine line between false news and satire or opinion”. Obligations that social media companies impose on themselves can therefore be vague at best and vacuous at worst. For example, Twitter banned former President Trump this January for inciting violence, and Facebook followed suit, yet neither company had intervened when the former President repeatedly peddled fake news before January.
This is where countries have stepped in to impose legal obligations on social media companies: New Zealand announced this month a forthcoming law that will require Facebook to moderate harmful fake news on its platform, and the United Kingdom has announced a new Online Safety Bill which will force social media companies to “remove harmful content quickly” or face hefty fines. This surge by Western nations to regulate social media is welcome, but a balance must be struck - critics of the UK bill have accused the government of “censorship”. Legal regulations would therefore benefit from strictly defining the kind of fake news that companies are mandated to take down, to preclude challenges on free speech grounds.
There is also the issue of consistency: if the UK passes a law placing new obligations on Facebook to regulate speech on its platform, would the company be obliged to apply those regulations in every jurisdiction in which it operates? And how would governments ensure that social media platforms comply with the new regulations they have set?
The Future for Fake News Regulation
There have been many bold proposals for the future regulation of fake news. Niklewicz calls for a ‘notice and correct’ procedure: a formal process whereby users could report fake news items, after which the platform must respond and decide whether or not to intervene. However, social media platforms could still exercise their own discretion and would remain entitled to do little under the proposal. A more fruitful proposition is put forward by the Forum for Information and Democracy, which would mandate social networks to release “details of their algorithms” to independent checkers and to put “circuit-breakers … on newly viral content” so that it can be held back and fact-checked before it spreads. Strict legal regulations of this kind tackle the root of fake news, but it is doubtful whether such restrictions would garner enough support to become law.
Having said all of this, regulating fake news has the backing of long-standing international law. The 1936 Broadcasting Convention, for example, is a treaty to which over thirty states still subscribe. It was instituted under the League of Nations (the predecessor of the UN) and requires that the transmission of “inaccurate” media which undermines “good international understanding” be stopped. Scholars have argued that this treaty provides an actionable grounding for regulations against false news.
Ultimately, regulating fake news is a minefield: its content varies in severity, and it is therefore more workable (and less controversial) to compel social media companies to remove blatantly false information than content that merely misrepresents or manipulates facts. Educating users on how to spot and avoid fake news must also be a priority for future regulations. Regardless, social media giants must not shy away from cleaning up social media simply because the means of doing so may be difficult.