The UK’s Online Safety Act has come into force, presenting digital service providers with a complex landscape of compliance requirements and potential security risks.
Raconteur reports that Ofcom, the UK’s communications regulator, has begun enforcing a significant portion of the Online Safety Act. This legislation requires digital service providers operating in the UK to conduct risk assessments and prevent children from accessing any content deemed “harmful” or adult in nature on their platforms. Non-compliance with Ofcom’s standards could result in substantial fines of up to £18 million or 10 percent of global turnover, whichever is higher. In the most severe cases, executives or managers may face personal liability.
The Online Safety Act, which took years to pass through parliament, has been met with controversy. Supporters argue that it will enhance online safety for children, while critics have expressed concern over its broad scope, which targets not only illegal and pornographic content but also so-called priority offenses such as “foreign interference” and the supply of psychoactive substances.
Since March 2025, Ofcom has been exercising its enforcement powers against online services hosting illegal content. In April, the regulator finalized its Children’s Codes, which require digital services to modify their algorithms to prioritize harm reduction. As of this week, age checks are mandatory on any platform that may host adult or harmful content, even if such content is not its primary purpose.
For digital service providers, achieving compliance has proven to be a complex task. Firms must meet the regulator’s requirements to avoid penalties, yet they also want to protect themselves from potential overreach, and many are increasingly unsure of how to do both.
The compliance burden varies depending on the size and category of the digital business, according to Jonathan Wright, a partner at law firm Hunton Andrews Kurth. Large platforms classified as “category one” services are subject to stringent reporting requirements and must provide users with greater control over the content they see and engage with. Smaller service providers are exempt from these obligations but must still conduct risk assessments, implement proportionate safety measures, and establish processes to address user complaints and take down content.
Kevin Quirk, company director at AI Bridge Solutions, an AI advisory and development practice, notes that preparing for the Online Safety Act is both complicated and costly. Digital platforms accessible in the UK must implement additional layers of moderation, conduct risk assessments, and ensure their services are auditable. Some may even need to create a designated safety officer role for UK users.
Quirk emphasizes that the cost of compliance has been particularly burdensome for his startup clients, causing delays in deployment timelines and requiring modifications to align with safety-by-design principles. He suggests that the UK must provide further assistance to startups to avoid signaling that the country is no longer a favorable environment for innovation in AI and web platform design.
The use of digital ID companies for age verification has raised concerns among privacy-conscious consumers. In the days following the enforcement of ID checks, popular privacy tech provider Proton reported an 1,800 percent increase in daily sign-ups for its VPN service, which allows users to bypass location-based checks. Another VPN provider, Nord, reported a 1,000 percent increase in purchases of its services. Breitbart News previously reported that the government may try to ban VPNs due to their sudden popularity.
For those who do not use VPNs, websites and social media companies may become unwilling custodians of sensitive data through their handling of digital identities. Jason Nurse, a cyber expert at the University of Kent, commends the act’s focus on protecting children and vulnerable individuals but expresses concern about the use of digital ID services for age checks on adult content. He warns that centralized databases of personally identifiable information create attractive targets for attackers seeking to exploit the data for malicious purposes.
Add to this the fact that these checks are easily bypassed. The advanced photo mode in the video game Death Stranding 2 allows players to manipulate the facial expressions of the main character, Sam Bridges, with enough realism to trick the facial recognition tools used for age verification.
Gamers have found that by pointing their phone at a screen displaying Death Stranding 2, they can adjust Bridges’ facial expressions to match the prompts issued by age verification systems. The method has successfully bypassed verification on multiple platforms, as demonstrated by journalists and Reddit users alike.
The ease with which these age checks can be bypassed has raised questions about the effectiveness of the Online Safety Act and the age verification methods employed by social media platforms. While some platforms, such as leftist echo chamber Bluesky, which uses Yoti for verification, appear to be immune to this particular workaround, the overall security of these systems remains a concern.
Read more at Raconteur here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.