The UK’s Online Safety Bill, which aims to regulate the internet, has been revised to remove a controversial but important measure.

Matt Cardy | Getty Images News | Getty Images

LONDON — Social media platforms such as Facebook, TikTok and Twitter will no longer be required to remove “legal but harmful” content under proposed amendments to UK internet safety legislation.

On Monday, British lawmakers announced that the Online Safety Bill, which aims to regulate the internet, will be revised to drop the controversial but important measure.

The government said the amendment would help protect freedom of speech and give people more control over what they see online.

However, critics described the move as a “major weakening” of the bill, which threatens to undermine the accountability of technology companies.

Previous proposals would have required tech giants to stop people from seeing legal but harmful content, such as material promoting self-harm, suicide and online abuse.

Under the amendments, which the government called a “consumer-friendly triple shield,” responsibility for choosing content would shift to internet users, and technology companies would have to put systems in place that allow people to filter out harmful content they do not want to see.

Crucially, however, companies will still need to protect children and remove content that is illegal or prohibited by their terms of service.

“Empowering adults,” “preserving free speech”

UK Culture Secretary Michelle Donelan said the new plans would ensure that no “tech firms or future governments could use the laws as a license to censor legitimate views.”

“Today’s announcement refocuses the Online Safety Bill on its original goals: the urgent need to protect children and fight online criminal activity, while preserving free speech, ensuring that technology firms are accountable to their users, and empowering adults to make more informed choices about the platforms they use,” the government said in a statement.

The opposition Labour Party said the amendment was a “major weakening” of the bill that could fuel misinformation and conspiracy theories.

Replacing harm prevention with an emphasis on free speech undermines the very purpose of this bill.

Lucy Powell

Labour’s shadow culture secretary

“Replacing harm prevention with an emphasis on free speech undermines the very purpose of this bill and emboldens abusers, COVID deniers and hoaxers, who will feel encouraged to thrive online,” said Lucy Powell, Labour’s shadow culture secretary.

Meanwhile, suicide prevention charity Samaritans said increased user controls should not replace tech companies’ responsibilities.

“Strengthening the control that people have is no substitute for holding sites accountable through the law, and this feels like the government snatching defeat from the jaws of victory,” said Julie Bentley, chief executive of Samaritans.

The devil is in the details

Monday’s announcement is the latest iteration of the UK’s sweeping Online Safety Bill, which also includes guidance on identity verification tools and new offenses to tackle fraud and revenge porn.

The move follows months of campaigning by free speech advocates and internet advocacy groups. Meanwhile, Elon Musk’s acquisition of Twitter has pushed the debate over online content moderation back into the spotlight.

The proposals are now due to return to the British Parliament next week, with the legislation expected to become law by next summer.

However, commentators say further changes are needed to close remaining loopholes in the bill.

“The devil is in the details. There is a risk that Ofcom’s oversight of social media terms and conditions and ‘consistency’ requirements could encourage overzealous takedowns,” said Matthew Lesh, head of public policy at the Institute of Economic Affairs, a free market think tank.

Communications and media regulator Ofcom will be responsible for most of the enforcement of the new law and will be able to fine companies up to 10% of their global revenue for non-compliance.

“There are other issues that the government has not addressed,” Lesh continued. “Requirements to remove content that firms ‘highly believe’ to be illegal set an extremely low threshold and threaten preemptive automated censorship.”

