On 12 Feb 2020 the then Secretary of State for DCMS, together with the Home Secretary, published the Government’s initial response to the Online Harms consultation. The following day, Boris Johnson reshuffled his ministerial team and, just a few weeks thereafter, the country went into the first COVID lockdown.
One might reasonably have feared that the business of government would be among the least able to adapt to the new virtual working life but, to Whitehall and parliament’s credit, they surprised us all. Select committee hearings from bedrooms, dining rooms, or broom closets became the new norm. Roll forward to the end of 2020, and a suite of announcements showed just how much work had been done in the intervening months: the government’s response to the CMA’s market study on competition in digital advertising; the Digital Markets Taskforce’s advice to government; and the Government’s full response to the Online Harms (now Online Safety) consultation.
The UK is at the forefront of countries looking holistically at the regulation of tech. Ministers and officials have come to appreciate just how challenging this is. Facebook called for legislation in the area of online content on 30 March 2019. On 8 April that same year, the UK government published its original Online Harms consultation, which outlined some 27 different types of harm that might go into primary legislation. The responses to the consultation led to the government determining that it would be more effective to create a ‘system-wide’ approach, placing the burden on tech platforms to discharge their responsibilities, subject to the oversight of a regulator in the form of Ofcom. We agree that this approach is more likely to prove effective than the one originally envisaged. But we await the detail of the draft Bill, which will probably require a period of scrutiny before it is finalised.
At the core of the challenge are serious societal issues, such as the balance between freedom of expression and the harms done through speech online, or how to protect the privacy of personal communications while tackling criminal activity, a question that follows from the government’s decision to include private messaging in the scope of the Bill. The most difficult aspect will be how to ask platforms to police ‘harmful but lawful’ activity without causing them to become overly cautious and take down potentially offending examples of ‘free speech’, especially if the government brings forward criminal liability for platform Directors over how they police what people say to one another.
In assessing these issues, it will be important to be led by the facts. Facebook’s most recent Community Standards Enforcement Report shows that our AI systems now detect the vast majority of concerning content before it comes to the attention of other users. Specifically, thanks to improvements in our detection rates, the prevalence of hate speech on Facebook has fallen from 0.11% to 0.08%, and that of violent and graphic content from 0.06% to 0.05%. What this means is that for every 10,000 items viewed, 8 might be called ‘hate speech’ and 5 might be violent or graphic. We are not satisfied that this is good enough, and we will continue to improve our systems to reduce these figures further, but it is a far cry from the impression given by some commentators that the platform is awash with harmful material.
We realise the importance of getting this right and look forward to working with parliamentarians and other interested parties as the legislative process approaches. The first suite of decisions by the independent Oversight Board shows that this is far from easy, and the government is right to take the time needed to draw up the proposed legislation. At the end of the day, this is not about platforms versus government or platforms versus society; it is about what one citizen is allowed to say to another online.
Edward Bowles is Director of Public Policy for Northern Europe at Facebook