The UK government's new online censorship laws have been brought before Parliament. The Government wrote in its press release:
The Online Safety Bill marks a milestone in the fight for a new digital age which is safer for users and
holds tech giants to account. It will protect children from harmful content such as pornography and limit people's exposure to illegal content, while protecting freedom of speech.
It will require social media platforms, search
engines and other apps and websites allowing people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions.
The regulator Ofcom will have the power to fine companies
failing to comply with the laws up to ten per cent of their annual global turnover, force them to improve their practices and block non-compliant sites.
Today the government is announcing that executives whose companies fail to
cooperate with Ofcom's information requests could now face prosecution or jail time within two months of the Bill becoming law, instead of two years as it was previously drafted.
A raft of other new offences has also been added to the Bill to make in-scope companies' senior managers criminally liable for destroying evidence, failing to attend or providing false information at interviews with Ofcom, and for obstructing the regulator when it enters company offices.
In the UK, tech industries are blazing a trail in investment and innovation. The Bill is balanced and proportionate with exemptions for low-risk tech and non-tech businesses with an online presence. It aims to increase people's trust
in technology, which will in turn support our ambition for the UK to be the best place for tech firms to grow.
The Bill will strengthen people's rights to express themselves freely online and ensure social media companies are not
removing legal free speech. For the first time, users will have the right to appeal if they feel their post has been taken down unfairly.
It will also put requirements on social media firms to protect journalism and democratic
political debate on their platforms. News content will be completely exempt from any regulation under the Bill.
And, in a further boost to freedom of expression online, another major improvement announced today will mean social media platforms will only be required to tackle 'legal but harmful' content, such as exposure to self-harm, harassment and eating disorders, in categories set by the government and approved by Parliament.
Previously they would have had to
consider whether additional content on their sites met the definition of legal but harmful material. This change removes any incentives or pressure for platforms to over-remove legal content or controversial comments and will clear up the grey area
around what constitutes legal but harmful.
Ministers will also continue to consider how to ensure platforms do not remove content from recognised media outlets.
Bill introduction and changes over the last year
The Bill will be introduced in the Commons today. This is the first step in its passage through Parliament to become law, beginning a new era of accountability online. It follows a period in which the government
has significantly strengthened the Bill since it was first published in draft in May 2021. Changes since the draft Bill include:
Bringing paid-for scam adverts on social media and search engines into scope in a major move to combat online fraud.
Making sure all websites which publish or host pornography, including commercial sites, put robust checks in place to ensure users are 18 years old or over.
Adding new measures to clamp down on anonymous trolls to give people more control over who can contact them and what they see online.
Making companies proactively tackle the most harmful illegal content and criminal activity more quickly.
Criminalising cyberflashing through the Bill.
Criminal liability for senior managers
The Bill gives Ofcom powers to demand information and data from tech companies, including on the role of their algorithms in selecting and displaying content, so it
can assess how they are shielding users from harm.
Ofcom will be able to enter companies' premises to access data and equipment, request interviews with company employees and require companies to undergo an external assessment of
how they're keeping users safe.
The Bill was originally drafted with a power for senior managers of large online platforms to be held criminally liable for failing to ensure their company complies with Ofcom's information requests
in an accurate and timely manner.
In the draft Bill, this power was deferred and so could not be used by Ofcom for at least two years after it became law. The Bill introduced today reduces the period to two months to strengthen
penalties for wrongdoing from the outset.
Additional information-related offences have been added to the Bill to toughen the deterrent against companies and their senior managers providing false or incomplete information. They
will apply to every company in scope of the Online Safety Bill. They are:
offences for companies in scope and/or employees who suppress, destroy or alter information requested by Ofcom;
offences for failing to comply with, obstructing or delaying Ofcom when exercising its
powers of entry, audit and inspection, or providing false information;
offences for employees who fail to attend or provide false information at an interview.
Falling foul of these offences could lead to up to two years' imprisonment or a fine.
Ofcom must treat the information gathered from companies sensitively. For example, it will not be able to share or publish
data without consent unless tightly defined exemptions apply, and it will have a responsibility to ensure its powers are used proportionately.
Changes to requirements on 'legal but harmful' content
Under the draft Bill, 'Category 1' companies - the largest online platforms with the widest reach, including the most popular social media platforms - must address content harmful to adults that falls below the threshold of a criminal offence.
Category 1 companies will have a duty to carry out risk assessments on the types of legal harms against adults which could arise on their services. They will have to set out clearly in their terms of service how they will deal with such content and enforce these terms consistently. If companies intend to remove, limit or allow particular types of content they will have to say so.
The agreed categories of legal but harmful content will be set out in secondary
legislation and subject to approval by both Houses of Parliament. Social media platforms will only be required to act on the priority legal harms set out in that secondary legislation, meaning decisions on what types of content are harmful are not delegated to private companies or left to the whim of internet executives.
It will also remove the threat of social media firms being overzealous and removing legal content because it upsets or offends someone, even if it is not prohibited by their terms and conditions. This will end situations such as the incident last year when TalkRadio was forced offline by YouTube for an "unspecified" violation, and it was not clear how the broadcaster had breached the platform's terms and conditions.
The move will help uphold freedom of expression and ensure people remain able to have challenging and controversial discussions online.
The DCMS Secretary of State has the power to add more categories of
priority legal but harmful content via secondary legislation should they emerge in the future. Companies will be required to report emerging harms to Ofcom.
Platforms may need to
use tools for content moderation, user profiling and behaviour identification to protect their users.
Additional provisions have been added to the Bill to allow Ofcom to set expectations for the use of these proactive technologies
in codes of practice and force companies to use better and more effective tools, should this be necessary.
Companies will need to demonstrate that they are using the right tools to address harms, that they are transparent, and that any technologies they develop meet the standards of accuracy and effectiveness required by the regulator. Ofcom will not be able to recommend these tools are applied to private messaging or legal but harmful content.
A new requirement will mean companies must report child sexual exploitation and abuse content they detect on their platforms to the National Crime Agency.
The CSEA reporting requirement
will replace the UK's existing voluntary reporting regime and reflects the Government's commitment to tackling this horrific crime.
Reports to the National Crime Agency will need to meet a set of clear standards to ensure law
enforcement receives the high quality information it needs to safeguard children, pursue offenders and limit lifelong re-victimisation by preventing the ongoing recirculation of illegal content.
In-scope companies will need to demonstrate that they have existing reporting obligations outside the UK to be exempt from this requirement, which will avoid duplicating companies' efforts.