UK tries to regulate social media platforms
Editor’s Note: Guest columnist Kate Jones is filling in for Emily Taylor this week.
Efforts to regulate social media platforms are gaining momentum in the UK. In May, the UK government released its draft Online Safety Bill, which will be considered this fall by a joint committee of MPs and members of the House of Lords chaired by MP Damian Collins. Collins led Parliament’s 2018 inquiry into the Cambridge Analytica scandal and is a leading UK voice on disinformation and digital regulation. At the same time, the House of Commons Sub-committee on Online Harms and Disinformation will also scrutinize the bill.
The inquiries follow a House of Lords committee report on freedom of expression in the digital age, released last month, and a Law Commission report recommending modernization of communications offenses in the UK.
With new social media regulation now a political priority, the Online Safety Bill offers the UK an opportunity to demonstrate the strength of its independent regulatory approach, even as the EU develops its Digital Services Act in parallel. A wide range of industry and civil society voices are engaged in the UK’s digital regulation debate. Yet despite the heated conversation, the challenges of regulating social media remain far from resolved.
The 133-page bill would create a skeletal regulatory framework to be fleshed out over time with both secondary legislation and regulatory codes of practice. This framework approach allows for flexibility and incremental development, but it has drawn criticism for the extent of the discretion it leaves to Ofcom, the UK’s independent communications regulator. Likewise, the bill arguably allows the Secretary of State for Digital, Culture, Media and Sport to exercise considerable political influence over the scope of free speech with little parliamentary oversight.
Several years after the online harms debate began, the tensions between free speech and protection from harm remain familiar but unresolved. Nowhere are these tensions more visible than on the question of whether platforms should have legal obligations regarding content that is lawful but harmful to adults, a thorny subject that draft EU legislation, unlike the UK’s, declines to address. On the one hand, the UK bill’s inclusion of “lawful but harmful” content has been widely criticized as legitimizing censorship and the restriction of speech, in violation of human rights law. On the other hand, there are serious concerns about online speech and content that does not meet the threshold of illegality but can and does cause harm. This includes online expressions of racism, misogyny and abuse, as seen vividly in England in the wake of the European Football Championship, and misinformation that can have a major impact on security and democracy, as amply illustrated at present by misinformation about COVID-19 vaccines.
Here, a new approach may be emerging, as voices calling for more attention to the spread of harmful material, rather than its mere existence, grow louder. The House of Lords committee report on free speech proposed that much of the bill’s provisions on “lawful but harmful” content be replaced with a new design-based duty, requiring platforms to take steps to ensure that “their design choices, such as reward mechanisms, choice architecture and content curation algorithms, mitigate the risk of encouraging and amplifying uncivil content.” The committee also recommended that larger platforms allow users to make their own choices about what types of content they see and from whom. Richard Wingfield of Global Partners Digital told me, “If content ranking processes were more transparent and accessible, social media companies could be required to be open to incorporating alternative algorithms, developed by third parties, which users could choose to moderate and curate what they see online.”
These proposals are long overdue. It is not the existence of abuse and disinformation that is new in the digital age, but their viral transmission. For too long, a heady combination of business incentives and a lack of transparency, accountability and user empowerment has driven an exponential expansion in the reach of shocking, emotive and divisive content. These design-based proposals are likely to meet resistance from platforms; even the Facebook Oversight Board could not get access to Facebook’s information about its algorithms. But they begin to address society’s real concern about lawful but harmful content: not that it is said, but that it is spread.
Arguably, the bill should not only address the design of platforms but also, like the EU’s European Democracy Action Plan, tackle the deliberate use of manipulative techniques, such as disinformation, by those who abuse social media platforms to distort public and political opinion or deliberately sow societal division.
If the UK government can take comfort in the many criticisms of the bill, it is because they have come in equal measure from all sides of the online harms debate. The structure of the bill is complex and, for many, its provisions are too vague, including its definition of harm. Some fear that its skeletal framework makes implementation impossible to anticipate, dependent entirely on eventual Ofcom codes of practice. Others see this gradual approach as positive, allowing for significant regulatory change over time. For platforms, its provisions may be too onerous; for others, it gives platforms too much power to control online speech. For champions of freedom of expression, the bill’s duty on platforms merely to “take into account” freedom of expression is too weak, and there is a risk of a chilling effect from excessive implementation. Privacy advocates argue that despite the obligation to consider privacy, the bill would legitimize far more intensive scrutiny of personal communications, including encrypted messaging, than is current practice.
There are also objections to the bill’s omissions. It does not cover fraud conducted via online advertising, despite the recommendations of a parliamentary committee. It does not give Ofcom or social media platforms powers to deal with urgent threats to public safety and security. And it does not directly address the complex issue of anonymity. The media, already threatened by the social media business model, doubt that the bill’s protections for journalistic content are strong enough.
Regulation of social media is vital because government, not business interests, is the democratic guardian of the public interest. The Online Safety Bill is a forerunner in proposing a risk-based regulatory model for tackling harm online, unlike regulatory approaches elsewhere that violate human rights and media freedom by prohibiting perceived “fake news.” Despite the criticisms, the bill, with its creation of obligations for social media platforms and of transparency and accountability to a strong, independent regulator, is a hugely positive development. The time has come to reconsider the aspects that could infringe on human rights, in particular the clauses on lawful but harmful content, and replace them with new provisions that address the heart of the problem of online harms.
Kate Jones is an Associate Fellow of the Chatham House International Law Programme, Senior Associate at Oxford Information Labs and Associate at the Oxford Human Rights Hub. Previously, she spent many years as a lawyer and diplomat at the British Foreign and Commonwealth Office in London, Geneva and Strasbourg. She also headed the Diplomatic Studies program at the University of Oxford. Follow her on Twitter @KateJones77.