How social media is adapting to the new censorship
The biggest social media platforms in the world are bigger than ever. With Facebook boasting nearly three billion monthly active users, it dominates the social media landscape, leaving Twitter, Instagram and TikTok behind. But as these websites and apps continue to add more users, moderating content and ensuring these platforms are safe for everyone becomes increasingly difficult.
On Twitter alone, an estimated 200 billion tweets are sent each year. It would be impossible for human moderators to review even a small fraction of the communication on the platform.
A wide range of toxic and abusive messages, posts and comments reach users of these social media apps on a daily basis, and the platforms have been accused of not taking the issue seriously enough. Press freedom organization Reporters Without Borders filed a lawsuit last year, claiming that Facebook “allows disinformation and hate speech to thrive on its network – hate in general and hatred against journalists – contrary to claims made in its terms of service and through its advertisements”.
Governments and regulators are also considering how best to oversee these tech giants, with the European Union and the United Kingdom proposing new oversight powers and regulatory regimes to boost competition and protect users.
Fighting misinformation and fake news is no small task, with new untrustworthy websites popping up as soon as previous ones are shut down or blocked from being shared. Facebook launched a fact-checking initiative in 2016 that works with a range of independent fact-checking organizations to flag fake and untrue news to users.
Despite this, researchers have found that fake news spreads faster on Facebook than on other social media platforms. One study tracked the internet usage of more than 3,000 Americans and found that Facebook was the go-to gateway to untrustworthy news sources more than 15% of the time.
In a blog post, Facebook News Feed VP Adam Mosseri acknowledges the harm fake news can cause users and explains how the platform is actively taking action to curb its spread and help users make informed decisions.
Mosseri says an effective approach to tackling fake news is to remove any economic incentive for its publishers. “We found that a lot of fake news is motivated by financial reasons. These spammers make money by impersonating legitimate news publishers and posting hoaxes that trick people into visiting their sites, which are often mostly advertisements,” he adds.
The fast-growing app TikTok has come under pressure from parents and internet safety experts for not doing enough to prevent young people from accessing the app and watching age-inappropriate videos. In recent days, TikTok announced a series of updates aimed at making the app’s design more age-appropriate, as well as giving creators on the platform the ability to set the age for which their content is suitable.
Due to the complex and often opaque nature of the algorithms deployed by social media platforms, content that should not be widely shared, and may even be harmful, is sometimes picked up and amplified. A Media Matters report from last year found that the TikTok algorithm promotes homophobic and anti-trans content, despite the platform having community guidelines that prohibit hateful behavior towards individuals or groups based on their sexual orientation or gender identity.
There is clearly no magic bullet for the myriad challenges social media giants face when trying to vet their platforms for inappropriate material. But with governments, regulators and users all raising concerns about problematic content, a solution is now closer than before, even if many of the proposals on the table are flawed.