UK Online Safety Bill, coming next year, will propose fines of up to 10% of annual turnover for breaching duty of care rules

The United Kingdom is progressing with a widely discussed, yet contentious, initiative to oversee a broad spectrum of online content deemed unlawful or detrimental, particularly when it presents potential dangers to young people. The government has published its conclusive response to the consultation initiated in April 2019, pledging to introduce an Online Safety Bill in the coming year.
“Technology companies will be required to significantly enhance their efforts to shield children from exposure to damaging content or activities, including grooming, cyberbullying, and pornography. This measure aims to ensure that future generations can enjoy the full benefits of the internet, with improved safeguards that minimize the potential for harm,” the government stated today.
Previously, in a preliminary response to the consultation regarding its Online Harms white paper, ministers designated the U.K.’s media regulatory body, Ofcom, as the entity responsible for enforcing the forthcoming regulations.
As outlined in the plans announced today, Ofcom will be authorized to impose penalties of up to 10% of a company’s total annual global revenue (or £18 million, whichever amount is greater) on those found to have neglected their duty of care in preventing access to illegal materials—such as child sexual abuse material, terrorist content, or content promoting suicide.
Ofcom will also possess the authority to restrict access to services that do not comply with the regulations within the U.K.—although the specific methods for achieving this remain undefined (and it is uncertain whether the legislation will address the use of VPNs by U.K. residents to circumvent blocked internet services).
The operational expenses of the regulator will be covered by companies subject to the law, provided their global annual revenue exceeds a certain threshold, according to the government. However, the specific revenue level triggering this obligation (and the financial contributions expected from major technology companies and others) has not yet been determined.
These online safety “duty of care” regulations are intended to encompass not only major social media platforms like Facebook but also a diverse range of internet services—including dating applications, search engines, online marketplaces, video-sharing platforms, instant messaging applications, consumer cloud storage, and even video games that facilitate user interaction.
Peer-to-peer services, online forums, and pornography websites will also be subject to these laws, as will private messaging services, as stated in a government press release.
This raises concerns about whether the legal obligations could incentivize companies to avoid employing end-to-end encryption (given that they could face penalties for failing to adequately monitor encrypted content for illegal materials).
“The new regulations will apply to any company globally that hosts user-generated content online accessible to individuals in the U.K. or enables them to interact with others online, either privately or publicly,” the government explains in a press release.
The rules will categorize responsibilities for content and activity, with the highest tier (category 1) applying to companies with “the largest online reach and the highest-risk features,” which the government anticipates will include Facebook, TikTok, Instagram, and Twitter.
“These companies will be required to assess the risk of legal content or activity on their services that presents ‘a reasonably foreseeable risk of causing significant physical or psychological harm to adults.’ They will then need to clearly define what type of ‘legal but harmful’ content is permissible on their platforms within their terms and conditions and enforce these terms consistently and transparently,” the government stated.
Category 1 companies will also be legally obligated to publish transparency reports detailing the measures they are taking to address online harms, as per the government’s announcement.
All companies covered by the legislation will be required to establish mechanisms for users to easily report harmful content or activity and to appeal content removal decisions.
The government estimates that fewer than 3% of U.K. businesses will be affected by the legislation, adding that “the vast majority” will be classified as Category 2 services.
Safeguards for freedom of expression are also planned, with the government stating that the laws will not impact articles and comment sections on news websites.
The legislation will include provisions for criminal sanctions against senior managers (introduced through parliamentary secondary legislation). The government added that it will not hesitate to utilize these powers if companies fail to take the new rules seriously, such as by not responding “fully, accurately, and in a timely manner” to information requests from Ofcom.
In a statement, digital secretary Oliver Dowden commented: “I am wholeheartedly supportive of technology, but that cannot equate to a completely unregulated online environment. Today, Britain is establishing a global benchmark for online safety with the most comprehensive approach to online regulation yet. We are entering a new era of accountability for the tech industry to protect children and vulnerable users, restore trust in the industry, and legally enshrine safeguards for free speech.”
“This proportionate new framework will ensure we do not impose unnecessary burdens on small businesses but provide large digital businesses with clear rules to follow so we can harness the benefits of modern technology to improve our lives,” he added.
Home secretary Priti Patel added in a supporting statement: “Tech companies must prioritize public safety or face the consequences.”
Ofcom CEO Dame Melanie Dawes also welcomed the expanded oversight remit, stating: “Being online offers significant benefits, but four in five people have concerns about it. This highlights the need for sensible, balanced rules that protect users from serious harm while also recognizing the positive aspects of the online world, including freedom of expression. We are preparing for this task by acquiring new technology and data skills and will collaborate with Parliament as it finalizes the plans.”
The government has announced that it will publish Interim Codes of Practice today to provide guidance for companies on addressing terrorist activity and online child sexual exploitation before the legislation is enacted—which is not expected to occur before late 2021 at the earliest, to allow sufficient time for parliamentary debate and review.
While a strong political drive to “protect children” online is likely to garner substantial public support, the broad application of the duty of care rules the government envisions—with a significant portion of the U.K.’s tech sector potentially affected—means ministers can anticipate considerable criticism from business groups, entrepreneurs, investors, and legal and policy experts, including concerns about the potential impact on privacy and security.
The government’s decision to proceed with an Online Safety Bill that will affect numerous smaller digital businesses, rather than focusing solely on the few platform giants responsible for the bulk of the harms in question, has already drawn criticism from the tech sector.
Coadec, a digital policy group advocating for startups and the U.K. tech sector, characterized the plan as “a confusing minefield” for entrepreneurs, arguing that it will hinder digital competition and potentially counteract other recently announced government measures addressing concerns about market concentration in the digital advertising sector.
“Last week, the Government announced a new unit within the CMA [Competition and Markets Authority] to promote greater competition within digital markets. Days later, they have announced regulatory measures that risk having the opposite effect,” said Dom Hallas, Coadec’s executive director in a statement. “86% of U.K. investors say that regulation aimed at tackling big tech could lead to negative outcomes that damage tech startups and limit competition—these plans risk being a confusing minefield that will disproportionately impact competitors and benefit large companies with the resources to comply.”
“British startups want a safer internet. But it’s not clear how these proposals, which still cover a huge range of services that are nowhere near social media—from ecommerce to the sharing economy—are better targeted than the last time the government published proposals nearly a year and a half ago,” he added. “Until the Government starts to work collaboratively instead of consistently threatening startup founders with jail time, it’s not clear how we’re going to deliver proposals that work.”
One omission in the government’s proposal is financial harms—with issues such as fraud and the sale of unsafe goods explicitly excluded from the framework, as the government intends for the regulations to be “clear and manageable” for businesses and to avoid duplicating existing rules.
Some “lower-risk” services may also be exempt from the duty of care requirement, according to the government, to prevent the law from being overly burdensome.
Email services will also not be included, it confirmed.
While some types of advertising will be covered (such as influencer ads on social media), ads placed on an in-scope service through a direct contract between an advertiser and an advertising service (such as Facebook or Google Ads) will be exempt because “this is covered by existing regulation.” That carve-out may let the adtech duopoly sidestep accountability for harmful ads, with little justification offered for the exemption.
After all, existing U.K. regulations have not effectively curbed the proliferation of cryptocurrency scam ads on Facebook (or served through Google’s ad tools) in recent years—leading a consumer advice personality to call on Facebook and other companies to address the issue.
Consumer group Which? has criticized the government’s lack of attention to financial scams in the Online Safety Bill. In a response statement, Rocio Concha, its director of policy and advocacy, said: “It’s positive that the government is recognizing the responsibility of online platforms to protect users, but it would be a significant oversight if online scams were not addressed through the upcoming bill. Our research has shown the financial and emotional toll of scams and that social media firms such as Facebook and search engines like Google need to do much more to protect users.”
“We look forward to the details and hope to see a clear plan to give online platforms greater responsibility for fraudulent content on their sites, including having in place better controls to prevent fake adverts from appearing, so that all users can be confident that they will truly be safe online.”
European Union lawmakers are scheduled to unveil their own pan-EU policy package to regulate illegal and harmful content later today—but the Digital Services Act will also address the sale of illegal goods online and propose harmonizing rules for reporting problematic content on online services.