The UK is calling on search and social media companies to “tame toxic algorithms” that recommend harmful content to children, or risk billions in fines. On Wednesday, UK media regulator Ofcom outlined more than 40 proposed requirements for tech giants under the Online Safety Act, including robust age checks and content moderation measures aimed at better protecting minors online once the rules take effect.
“Our proposed codes place the responsibility for keeping children safer firmly on technology companies,” said Ofcom chief executive Melanie Dawes. “They will need to tame aggressive algorithms that send harmful content to children in their personalized feeds and introduce age checks so that children have an age-appropriate experience.”
Specifically, Ofcom wants to prevent children from encountering content relating to eating disorders, self-harm, suicide, and pornography, as well as any material considered violent, hateful, or abusive. Platforms would also need to protect children from online bullying and promotions for dangerous online challenges, and allow them to give negative feedback on content they don’t want to see, so they can better curate their feeds.
Bottom line: platforms will soon have to block content deemed harmful in the UK, even if that means “preventing children from accessing the entire website or app,” says Ofcom.
The Online Safety Act allows Ofcom to impose fines of up to £18 million (about $22.4 million) or 10 percent of a company’s global revenue, whichever is greater — meaning large companies like Meta, Google, and TikTok risk paying substantial sums. Ofcom warns that companies that fail to comply can “expect to face enforcement action.”
Companies have until July 17 to respond to Ofcom’s proposals before the codes are presented to Parliament. The regulator expects to publish a final version in spring 2025, after which platforms will have three months to comply.