The UK government announced plans this week to grant Ofcom the power to fine social media companies that fail to protect users from harmful content.
Conservative party discussions around regulating online content have taken place since David Cameron was party leader, with a Home Office press release from 2015 stating the former Prime Minister was "prepared to legislate [against under 18s’ access to pornographic websites] if the industry fails to self-regulate".
Cameron stood down less than a year later, leaving the issue of age-verification technology in the hands of his successor, Theresa May. While Part 3 of the Digital Economy Act 2017 brought the issue back into the headlines by proposing age verification for online pornography, it was ultimately shelved in October 2019 after a catalogue of errors.
Crucially, social media sites were not included in the remit of the proposed 2017 legislation. However, the Online Harms white paper that included the age-verification proposal also made reference to a "duty of care" requirement on technology companies and content providers to protect vulnerable people from harmful content. These are the plans the government has moved forward on this week.
Under these new proposals, Ofcom will not be allowed to remove posts but will have the power to impose penalties relating to illegal and 'harmful' content.
Currently, providing they’re not seen to endorse posts that contain illegal or harmful material, social media companies are largely exempt from penalties, even if a user uploads pro-terrorist material or child abuse imagery to their platforms. The new legislation will grant Ofcom the power to force companies to remove such material more quickly and to prevent most of it from being posted in the first place.
The regulator will also be given the power to police content that is deemed harmful but not illegal, by ensuring social media companies properly enforce their own terms and conditions. While the government has yet to provide a concrete definition of “harmful content”, its Online Harms white paper makes reference to content that is “hateful” or has a “psychological and emotional impact”.
Companies such as Facebook, Twitter and Google will be required to explicitly outline which content is permitted on their platforms, allowing Ofcom to hold them to account if these pledges are not adhered to. The regulations will only apply to companies that host user-generated content.
The penalties for failure are still unknown, although the government claims they will be “fair, proportionate and transparent”.
Home secretary Priti Patel sought to justify the government’s decision, stating: “While the internet can be used to connect people and drive innovation, we know it can also be a hiding place for criminals, including paedophiles, to cause immense harm. It is incumbent on tech firms to balance issues of privacy and technological advances with child protection.”
Divided opinion
Full details of the legislation and the new powers granted to Ofcom will be announced later in the spring, but opinion is already divided on the new policy proposal.
Ofcom has welcomed the decision to appoint it as regulator, saying in a statement that it “will work with the government to help ensure that regulation provides effective protection for people online and, if appointed, will consider what voluntary steps can be taken in advance of legislation.”
Lobbying group techUK, as well as child protection charity the NSPCC, are two other prominent voices that have come out in support of the proposals. Vinous Ali, associate director of policy at techUK, said the organisation is “pleased to see continued progress being made by government on the issue of online harms".
“The evolution in thinking demonstrates a commitment from government to building a framework that is effective and proportionate - protecting and empowering users whilst ensuring the UK remains pro-innovation and investment,” Ali continued.
While Ali described the direction of travel as "encouraging", she urged caution over scope and process, advising the government to provide further clarity on both points.
Others were less enthusiastic, with media organisations such as the Daily Mail and the News Media Association claiming that the government’s proposals could unintentionally result in censorship of their own websites.
Furthermore, the white paper states: “Under our proposals we expect companies to use a proportionate range of tools including age assurance, and age verification technologies to prevent children from accessing age-inappropriate content and to protect them from other harms.” Given the government’s failure to make its age-verification policy workable, questions are already being asked about whether parts of this new proposal will fall foul of the same errors.
There is also the question of the burden such requirements would place on smaller businesses that cannot afford the same level of resources as a company like Facebook, which currently employs roughly 30,000 content moderators.
Nicola Cain of London law firm RPC believes the legislation could introduce red tape and place added pressure on many UK businesses.
“Whilst the government seeks to defend this legislation by saying that less than five percent of all UK businesses will fall under the new legislation, in reality it will not just be social media platforms and content providers, but most businesses with a website, including retailers and ecommerce sites, that will be affected to some degree,” Cain said.
"Even for internet giants, that already have well-established procedures in place, these reforms will potentially remove crucial legal protections, requiring them to be far more pro-active in removing content and threatening freedom of expression."
Is this a workable policy?
Backlash from tech companies has already caused the government to delay its final proposal until spring, with the Internet Association (IA), which represents online firms including Google, Facebook and Twitter, outlining a string of major concerns relating to the policy. Others questioned the impact it could have on freedom of speech.
Acting executive director of human rights non-profit Article 19, Quinn McKew, said that while the government’s desire to protect children from harm is well intentioned, it must ensure free speech rights are not thrown out in the process.
“[The proposals] will almost inevitably result in the removal of legitimate expression. In the face of possible fines and personal prosecutions, companies will err on the side of caution and use algorithms to remove content at scale,” she said.
What is certain, however, is that Ofcom doesn’t currently have the capacity to regulate the internet in the way the government is proposing, and it will need to undergo major changes to become fit for its new purpose.
On Wednesday, the Financial Times reported that the regulator will need to expand its headcount by at least a third, or roughly 300 people, over the next 18-24 months.
The UK is not the first country to attempt this. In 2017, Germany introduced the NetzDG Law, under which social media platforms with more than 2 million registered German users can face fines of up to €50m (£42m) if they fail to review and remove illegal content within 24 hours of it being posted.
In July 2019, Germany's Federal Office of Justice fined Facebook €2m (£1.7m) for underreporting the number of complaints it had received about illegal content on its platform. The social media platform was accused of skewing the extent of such violations by selectively reporting complaints.
This story, "UK government names Ofcom internet regulator" was originally published by Techworld.com.