Online Safety Act: Everything We Know So Far

After much discussion and debate, the UK Online Safety Act has finally become law.

The aim of the Online Safety Act is to protect children and adult users online by introducing enforceable regulations that require tech companies to make their services safer, with a focus on tackling illegal and harmful content.

The Act applies to any website or platform where users can share information or upload content that other users can access, as well as to search engines. While the Online Safety Act applies to big tech services such as YouTube, Google, X (formerly Twitter), Facebook and WhatsApp, it also extends to smaller services and platforms with fewer users or a smaller online presence.

You might be wondering about the potential impact of the Online Safety Act on your business or website. In this article, we break down everything we know so far about the Act, including how it could affect businesses of all kinds and what steps you can take to comply with the new regulations.

What is the Online Safety Act?

The Online Safety Act was first proposed in 2017, with the first draft introduced to Parliament in 2021. After two years of debate and refinement, it became law when it received Royal Assent on 26 October 2023.

This extensive piece of legislation sets out how online safety measures apply to any user-to-user service or search engine operating in the UK. This means that any website or online platform where users can share or upload data, information, or content to be viewed by others is caught by the Act. Examples include social media, image-sharing and video-sharing websites, pornographic websites, gaming services, and instant messaging services, as well as search engines such as Google.

The Online Safety Act sets out to minimise the risk that these spaces are misused and that users are exposed to content that is harmful or illegal.

While the Online Safety Act is now law, Ofcom, the formal regulator for online safety, is currently consulting on draft codes for illegal harms, which cover measures in areas such as content moderation, complaints, user access, design features to support users, and the governance and management of online safety risks. The draft codes will be finalised over three phases, with the first phase expected to come into force in Q4 2024.

How will the Online Safety Act keep users safe?

Instead of relying on watchdogs or other entities to monitor website content, organisations are required to ensure that harmful or illegal material is not present on their services. The emphasis is on organisations to demonstrate that they have effective processes and safeguards in place to protect their users and remove any content flagged as inappropriate or illegal (a minimal sketch of such a flag-and-remove flow follows the list below). Examples of content that needs to be considered or removed include:

  • Child sexual abuse
  • Terrorism
  • Animal cruelty
  • Promoting self harm
  • Promoting suicide
  • Selling illegal drugs or weapons
  • Extreme sexual violence
  • Cyber-flashing (sending unwanted nude content)
  • Fraudulent paid-for advertising
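To make this concrete, here is a minimal, purely illustrative sketch of the kind of flag-and-remove workflow these duties imply. The harm categories mirror the list above, but the Post structure, the classify stub, and the queue handling are hypothetical, not part of the Act or of any real moderation API.

```python
from dataclasses import dataclass
from enum import Enum, auto


class PriorityHarm(Enum):
    """Illegal-content categories named in the Act (non-exhaustive)."""
    CSAM = auto()
    TERRORISM = auto()
    EXTREME_SEXUAL_VIOLENCE = auto()
    ILLEGAL_SALES = auto()            # e.g. drugs or weapons
    FRAUDULENT_ADVERTISING = auto()


@dataclass
class Post:
    post_id: str
    author_id: str
    text: str


def classify(post: Post) -> set[PriorityHarm]:
    """Toy stand-in for a real classifier, which would combine automated
    detection (hash matching, ML models) with human review."""
    flags: set[PriorityHarm] = set()
    if "example-banned-term" in post.text.lower():  # placeholder rule only
        flags.add(PriorityHarm.ILLEGAL_SALES)
    return flags


def handle_post(post: Post, removal_queue: list[Post]) -> None:
    # The Act expects illegal content to be removed swiftly once the
    # service becomes aware of it, so flagged posts go straight to a
    # removal queue rather than waiting for further user reports.
    if classify(post):
        removal_queue.append(post)
```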

Categories of content affected by the Online Safety Act

The Online Safety Act divides the content it regulates into two broad categories.

Illegal content: This covers content that was already illegal in the UK, such as child sexual abuse material or content that promotes terrorism. All service and platform providers must remove this content and also take steps to ensure it does not appear in the first place.

Lawful but harmful content: This covers content that is legal but age-inappropriate and unsuitable for children, for example content depicting self-harm, pornography, or extreme violence.

Categories of content exempt from the Online Safety Act

Several categories of content are not affected by the Online Safety Act. For instance:

  • Email
  • SMS (short messaging service)
  • MMS (multimedia messaging service)
  • One-to-one aural communication
  • Internal business services, such as business intranets and management software
  • Limited functionality services such as comment sections

What companies will be affected by the Online Safety Act?

The Online Safety Act outlines specific requirements for organisations that provide services that enable users to encounter user-generated content. The Act also applies to websites and platforms that have a significant number of UK users, even if they are based outside of the UK.

The Online Safety Act is aimed at:

  • Providers of user-to-user services: This includes organisations such as Facebook, where users can view content created by other users.
  • Providers of search services: This includes platforms such as Google that have a search engine and can be used to find content.
  • Providers of services that publish graphic content such as pornography.

Regulated services are then categorised as either Category 1 or Category 2 to differentiate between levels of risk:

  • Category 1: Higher-risk, large platforms with a significant number of users. These services and platforms will be subject to greater scrutiny.
  • Category 2: Lower-risk platforms and services with fewer users.

In its draft codes, Ofcom also indicates that large service providers will face more obligations. A large service is defined as one with an average user base of 7 million or more per month in the UK.

In addition, the safety measures a service may need to put in place will depend on the specific service offered and its risk level, including (see the sketch after this list):

  • whether it is a user-to-user or search service
  • the features of the service
  • the number of users the service has
  • the results of its illegal content risk assessment
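As a rough illustration of how these factors could feed a tiering decision, the sketch below encodes the 7-million-user threshold from Ofcom's draft codes discussed above; the ServiceProfile fields and the tiering logic itself are simplified assumptions, not Ofcom's actual methodology.

```python
from dataclasses import dataclass

# Threshold taken from Ofcom's draft codes; everything else here is a
# simplified assumption for illustration.
LARGE_SERVICE_THRESHOLD = 7_000_000  # average monthly UK users


@dataclass
class ServiceProfile:
    kind: str                    # "user-to-user" or "search"
    avg_monthly_uk_users: int
    high_risk_features: bool     # e.g. anonymous posting, livestreaming
    illegal_content_risk: str    # risk assessment outcome: "low" / "medium" / "high"


def is_large_service(profile: ServiceProfile) -> bool:
    return profile.avg_monthly_uk_users >= LARGE_SERVICE_THRESHOLD


def obligations_tier(profile: ServiceProfile) -> str:
    """Illustrative only: larger and riskier services face more duties."""
    if is_large_service(profile) or profile.illegal_content_risk == "high":
        return "enhanced obligations"
    if profile.high_risk_features:
        return "additional targeted measures"
    return "baseline obligations"
```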

Will international service providers be affected by the Online Safety Act?

Although the Online Safety Act is UK legislation, it also affects international companies that cater to a substantial UK user base. Since many large service providers, such as WhatsApp, have a huge number of UK users, this reach was considered necessary for the Online Safety Act to fulfil its aims.

The criteria for international service providers are as follows:

  • Services or platforms that have a significant number of users based in the UK
  • Services or platforms which target the UK market
  • Where there are reasonable grounds to believe there is a material risk of significant harm to UK users

What does the Online Safety Act require from affected companies?

The recently enacted law includes more than 200 clauses covering a wide range of content that platforms and organisations are obligated to address in order to demonstrate a "duty of care" over what users, especially children, see. The new rules apply to large tech services as well as smaller services.

Companies that are affected by the Online Safety Act will need to undertake several processes and establish safety procedures to prevent children from accessing illegal or harmful content. This includes (a simplified sketch of a reporting mechanism follows the list):

  • Assessing the risk of harm from illegal content
  • Assessing the particular risk of harm to children from harmful content (if children are likely to use the service)
  • Taking effective steps to manage and mitigate the risks identified in these assessments
  • Clearly explaining in the terms of service how users will be protected
  • Making it easy for users to report illegal content and content harmful to children
  • Making it easy for users to complain, including when they think their post has been unfairly removed or their account blocked
  • Considering the importance of protecting freedom of expression and the right to privacy when implementing safety measures
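For instance, the reporting and complaints duties imply that every regulated service needs a user-facing intake route. The sketch below shows one minimal shape this could take, assuming a simple in-memory ReportInbox; the class, method names, and reason labels are hypothetical, not prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4


@dataclass
class UserReport:
    content_id: str
    reporter_id: str
    reason: str  # e.g. "illegal" or "harmful-to-children"
    report_id: str = field(default_factory=lambda: uuid4().hex)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ReportInbox:
    """Hypothetical in-memory store; a real service would persist reports
    and feed them into its moderation workflow with an audit trail."""

    def __init__(self) -> None:
        self._reports: list[UserReport] = []

    def submit(self, content_id: str, reporter_id: str, reason: str) -> UserReport:
        report = UserReport(content_id, reporter_id, reason)
        self._reports.append(report)
        return report

    def appeal(self, content_id: str, user_id: str) -> UserReport:
        # The Act also requires an easy route to complain about unfair
        # removals or account blocks, so appeals reuse the same intake.
        return self.submit(content_id, user_id, reason="appeal-unfair-removal")
```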

The full impact of the legislation remains uncertain, as Ofcom has been assigned responsibility for supervising implementation of the Act and ensuring compliance. Codes of practice are expected to be published to detail the nuances and implications.

What are the penalties under the Online Safety Act and how will it be regulated?

Ofcom has been granted the power to request information from organisations that fall under the new Online Safety Act to ensure compliance with regulations. Violations of these rules come with steep punishments, including fines of up to £18 million or 10% of global annual revenue, whichever is greater. Furthermore, senior managers of regulated companies may face imprisonment as a consequence.

Ofcom is not responsible for removing online content, or for deciding whether a particular piece of content should be removed. Its role is to make sure social media sites and other regulated online services have systems and processes in place to protect users.

What challenges do organisations face when considering the Online Safety Act?

The Online Safety Act is broad in its scope and places the onus on organisations to establish the right procedures. However, as the Ofcom codes of practice are still being consulted on and drafted, several potential obstacles may arise around how service providers should prepare.

  • Content monitoring: Organisations may need to hire more staff or purchase monitoring technology to have greater control over the content posted on their platforms. As expected, this may be costly, especially for smaller companies.
  • To delete or not to delete: Companies may over police their platforms and remove ambiguous content that has the potential to violate guidelines. While this may prevent the slightest possibility of suffering a large fine, it can make users feel as though their freedom is being suppressed, running the risk of stifling customer engagement and negatively impacting brand perception.
  • Historical content: For platforms that have a significant amount of historical content, it might be quite a challenge to find and delete anything that violates the Online Safety Act. Once again, this may prove to be a costly and time-consuming process.
  • Privacy: The Act could mandate messaging services to scrutinise the contents of encrypted messages. Platforms such as WhatsApp could be affected, and this could potentially undermine the privacy guarantees services have already established for users.
  • User vs service provider responsibility: A significant concern regarding the Online Safety Act is that it punishes the organisations that permit harmful content to be displayed, rather than the individuals responsible for creating the content. The government's justification is that those platforms enable the content to be consumed more widely than it would be without them. While this is largely true for social media, the Online Safety Act also affects platforms of all shapes and sizes - even those without significant reach - raising questions over how companies with limited resources can adequately abide by the laws set forth.
  • What does “legal but harmful” actually mean?: Defining what constitutes harmful content for children can be ambiguous, leading to inconsistencies in identifying such content. The criteria for this classification can differ among companies and even users.

How should organisations prepare for the Online Safety Act?

Due to the broad scope of the Online Safety Act, any organisation that allows users to post content will be subject to its terms and may face penalties. It's important to take practical measures and maintain due diligence to stay compliant.

  • Plan ahead: Organisations that are affected by the Online Safety Act should start assessing their user content to ensure it aligns with the processes outlined in the Act
  • Review your current operations: As required by the Act, carry out a risk assessment of your websites and platforms and look out for potential issues that could conflict with the Online Safety Act, such as the inability of users to report harmful content
  • Consider the balance between user freedom and monitoring content: This will vary from organisation to organisation, but a balance will need to be struck to ensure that platforms are adequately monitored and safe to use, while not harming the user experience too severely.
  • Find out which category of service you provide: Category 1 service providers will need to comply with more stringent obligations. In addition, different obligations will apply depending on the number of users a service has or the types of content published. Guidance on this can be found in the Ofcom draft codes.
  • Stay up to date and review the codes of practice: Ofcom will soon publish more guidance about how it aims to make sure regulated services take the steps outlined in the Act. It published the first set of draft codes of practice, which remain subject to change, on 9 November 2023.

Consider practical plans for putting the following in place, as stipulated by the Online Safety Act (an illustrative age-gating sketch follows the list):

  • Conduct a risk assessment regarding the likelihood of harmful content being accessed by children
  • Put age restrictions in place for lawful but harmful content
  • Develop a system to enable users to report illegal or harmful content
  • Take proportionate steps to prevent your users from encountering illegal content
  • Remove illegal content when you become aware of it, and minimise the time it is present on your service
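As a concrete example of the age-restriction point, a service might gate lawful-but-harmful content behind an age check. The Act does not prescribe a mechanism, so the content labels and the user_is_verified_adult placeholder below are assumptions standing in for whatever age-assurance approach a service adopts.

```python
# Labels a service might apply to lawful-but-harmful content.
RESTRICTED_LABELS = {"pornography", "self-harm", "extreme-violence"}


def user_is_verified_adult(user_id: str) -> bool:
    """Placeholder: a real service would call an age-assurance provider
    or rely on verified account data, not self-declaration."""
    return False  # fail closed: treat unverified users as minors


def can_view(user_id: str, content_labels: set[str]) -> bool:
    # Lawful-but-harmful content is age-gated; illegal content is assumed
    # to have been removed upstream and is never served.
    if content_labels & RESTRICTED_LABELS:
        return user_is_verified_adult(user_id)
    return True
```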

Preparing for the Online Safety Act: Why You Should Attend the Trust & Safety Summit UK

While the first phase of the Online Safety Act is not expected to come into force until late 2024, it's crucial to take action now. The broad scope of this legislation means it applies to practically all organisations with an online presence where users can post content. To stay ahead of the game, consider registering for our upcoming event, the Trust & Safety Summit UK.

As the Online Safety Act and Digital Services Act come into play, ensuring user safety and maintaining trust is more important than ever. Our summit is designed to equip industry professionals with actionable insights, strategies, and tools to navigate this rapidly evolving landscape and tackle emerging regulations head-on.

Join us for engaging discussions with a range of online safety experts on topics such as:

  • Regulatory Preparedness
  • Advanced Moderation
  • Restoring Trust
  • Age Verification & Assurance
  • Child Safety & Protection
  • Operational Trust & Safety
  • Modern Online Harms
  • Trust & Safety At Scale

