The prevalence of AI-generated sexual content, Meta's new safety model & Carr appointed

The week in content moderation - edition #272

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

In this week's stories, you'll see how the tools and policies meant to keep us safe online are being questioned, reshaped and, in some cases, outright challenged. Whether it's Snap's research or Brendan Carr's vision, it's clear that the way Trust & Safety teams have always done things isn't holding up.

A hearty welcome to new subscribers from ActiveFence, RunwayML, Google, Patreon, Technomom and Fujitsu. This is where you’ll get your weekly roundup and, on Monday, you'll also receive T&S Insider from Alice Hunsberger (former Grindr and OkCupid, now PartnerHero). Check out her latest piece on smartphones in schools in today's edition.

That's the intros out of the way; here's everything in moderation from the last week — BW


Today's edition is in partnership with the Tech Coalition, funding new research on AI-generated child sexual abuse

The Tech Coalition is funding three new research projects on the misuse of generative AI for child sexual exploitation and abuse. These projects, announced at an event in Brussels yesterday, will help protect children in a rapidly evolving digital landscape.


Policies

New and emerging internet policy and online speech regulation

Donald Trump might not have said a lot about the Kids Online Safety Act (KOSA), but the state attorneys general are saying it for him. This week, a coalition of more than thirty AGs urged Congress to pass the much-debated bill before the current session ends, as pressure mounts on platforms to do more to safeguard children. Despite bipartisan support in the Senate, KOSA currently faces opposition in the House due to concerns over potential content censorship and, you guessed it, the extent of enforcement powers being granted to state attorneys general.

The Oversight Board has overturned Meta's removal of three Facebook posts featuring footage from the March 2024 Moscow terrorist attack, directing that the content be reinstated with "Mark as Disturbing" warning screens. The Board noted that, although the posts violated Meta's terror attack policy, their high public interest value and their role in condemning the attack warranted protection under the newsworthiness allowance, especially in Russia, where the media isn't exactly free and fair.

The mood music: The Russian government has pressured social media companies to take down content since at least 2021 and has required media outlets to cover the Ukraine war using only official government sources. I'm not sure this ruling will do anything to address Russian attitudes towards the US or its platforms.

Get access to the rest of this edition of EiM and 200+ others by becoming a paying member