New legislation governing firms hosting user-generated content, with a view to preventing harms, could chill free speech on the internet, critics warn. A wealth management firm, meanwhile, wants the laws – when enacted – to be used to remove scams and fraudulent advertising from such sites.
New UK legislation designed to protect users of online services – already slammed as opening the way to censorship – must be used to remove financial scams from websites, a wealth management firm argues.
Yesterday, in the Queen’s Speech setting out the legislative proposals being steered through Parliament, the government said it would press ahead with the Online Safety Bill.
According to the government’s website, “The Bill introduces new rules for firms which host user-generated content, i.e. those which allow users to post their own content online or interact with each other, and for search engines, which will have tailored duties focussed on minimising the presentation of harmful search results to users. Those platforms which fail to protect people will need to answer to the regulator, and could face fines of up to ten per cent of their revenues or, in the most serious cases, being blocked.”
Civil liberties campaigners argue that the law, while designed to protect the public, creates the risk of censorship because the definition of "harm" in certain cases is ambiguous.
Irrespective of such criticisms, Matt Burton, chief risk officer at Quilter, said the law should be used to strike out scams and other fraudulent material from the internet.
“For far too long the onus has been on diligent individuals and financial services providers to identify scam adverts and report them to search engines, the regulator and the police instead of the search engines undertaking basic due diligence to filter out fraudulent adverts in the first place. This bill is the perfect opportunity to require search engines and social media platforms to remove sham investment and impersonation scams promptly from their sites, and conduct the necessary due diligence to stop them from appearing,” he said.
“Reports suggest there is a risk of a rebellion over the kinds of posts social media companies will be required to take action on, with warnings of censorship if there is no clarification. However, it is no exaggeration to say that customers of financial services have faced a fraud epidemic, with very few protections in place to stop harmful content from appearing online, and that must be clamped down on sooner rather than later,” Burton said.
The government said: “All platforms in scope will need to tackle and remove illegal material online, particularly material relating to terrorism and child sexual exploitation and abuse.
“Platforms likely to be accessed by children will also have a duty to protect young people using their services from legal but harmful material such as self-harm or eating disorder content. Additionally, providers who publish or place pornographic content on their services will be required to prevent children from accessing that content.
“Freedom of expression will be protected because these laws are not about imposing excessive regulation or state removal of content, but ensuring that companies have the systems and processes in place to ensure users’ safety. Proportionate measures will avoid unnecessary burdens on small and low-risk businesses,” it said.