BRITAIN PLANS TO REGULATE SOCIAL MEDIA
Britain’s Government Announces Plans To Regulate Facebook, Twitter And TikTok
Topline: Britain’s government has announced new plans to regulate the likes of Facebook, Twitter and TikTok and to ensure that internet firms take responsibility for protecting their users—particularly young ones—from harmful or illegal content.
- Internet firms that allow user-generated content, including comments, will be held to account by Ofcom, the U.K.’s communications and broadcasting regulator, if they fail to tackle “online harms” such as terrorism and child sexual abuse.
- This means that internet firms will be expected to remove illegal content swiftly, and minimise the chances of it appearing in the first place.
- In addition to terror-related content and online child sex abuse, cyber-bullying, self-harm and suicide content will be targeted.
- Home Secretary Priti Patel said: “It is incumbent on tech firms to balance issues of privacy and technological advances with child protection.”
- Wednesday’s announcement is part of the U.K. government’s consultation on legislation for new, “world-leading” measures to hold internet giants accountable for the content posted by their users.
How will this be enforced? The watchdog will not be able to take down posts that are deemed harmful or illegal. But it will be empowered to hold tech firms to account if they fail to enforce content standards. The government, preempting concerns from adult users and the media, says Ofcom will be expected to safeguard free speech online, as well as the role of the press. Firms that fail to comply could face “substantial” fines, and executives could be penalised.
The government says fewer than 5% of U.K. businesses would be affected—but that still amounts to as many as 300,000 of Britain’s 6 million private sector firms.
What happens next: As the legislation is finalised, internet firms can lobby the government to try to stop more stringent measures from being imposed on them.
Chief critics (and advocates): The move has been welcomed by the child safety charities NSPCC and Barnardo’s, which have both been campaigning for an independent regulator empowered to audit and fine social networks that fail to protect children online. Ofcom says that 4 in 5 internet users aged between 12 and 15 have experienced harmful content online over the past 12 months.
- Meanwhile, Facebook founder Mark Zuckerberg has previously admitted he wants governments and regulators to monitor harmful content, as it is too big a responsibility for social media firms to shoulder.
- But business groups and think tanks say the government’s plans are “unrealistic.” Dom Hallas, executive director of Coadec, which represents startups, told CityAM: “Startups rely on an internet where users feel safe, but these plans simply won’t work.”
- He added: “It is a confusing minefield that can only benefit big companies with the resources and armies of lawyers to comply.”
Key background: Britain’s increased regulation is in line with its goal of cracking down on tech firms’ slowness to address concerns about users’ mental health. The death of teenager Molly Russell, which was linked to social media use, as well as concerns about radicalization, have pushed the issue up the agenda. Internet regulation around the world varies widely. Last year, Facebook fell foul of Germany’s new NetzDG law, which requires social media companies with more than 2 million users to take down illegal material, review complaints and publish their records on doing so, or face a fine of up to $5.6 million. The EU has also introduced stringent data protection legislation that regulates how firms store people’s data. In addition, the EU will fine companies up to 4% of their turnover if they fail to remove extremist content after being asked to do so by authorities.
Further reading: History Tells Us Social Media Regulation Is Inevitable (Kalev Leetaru)