5 min read

Controversial Californian law decision, returning to the Reddit blackout and Darius the artist

The week in content moderation - edition #229

Hello and welcome back to Everything in Moderation, your guide to the policies, products, platforms and people shaping the future of online speech and the internet. It's written by me, Ben Whitelaw and supported by members like you.

So what can we expect from 2024 when it comes to online speech? There will be new regulatory duties to contend with, yes, but how likely is a misinformation election scandal, European tech policy seeping into the US sphere, or the first fledgling posts of a certain former President on X? We just don't know. But EiM is here, in your inbox and online, to chronicle what is likely to be a rollercoaster year.

A big new year hello to new subscribers from Ofcom, ActiveFence, Discord, NextDoor, Storyful, TrustLab and elsewhere. If today's newsletter was forwarded to you, you can sign up here.

Here's what I've been reading over the last few days; I hope it's a useful recap — BW


Today's edition is in partnership with Modulate, a prosocial voice technology company making online spaces safer and more inclusive

Modulate builds ToxMod, the proactive voice moderation platform that makes online spaces safer.

Get the latest in trust & safety and content moderation news impacting the games industry and beyond with Modulate’s newsletter, Trust & Safety Lately. What are the latest legal regulations impacting content moderation policies? What are the current community management best practices? Which organizations are leading the way in the trust & safety space?

Subscribe to the monthly Trust & Safety Lately newsletter for all this and more...


Policies

New and emerging internet policy and online speech regulation

A California law that requires social media companies to explain how they moderate content has been deemed not to be "unduly burdensome" following a complaint filed by X/Twitter in September (EiM #216), which argued the law made it "difficult to reliably define” what constitutes misinformation and hate speech. Federal judge William Shubb disagreed that AB587 infringed upon the First Amendment and threw out the request for a temporary injunction. X/Twitter has yet to appeal.

Further reading: AB587's plan to force platforms to regularly publish their "terms of service" has faced criticism for presuming that all internet users are acting in good faith and won't reverse engineer its policies to cause harm. Techdirt's Mike Masnick made the point that it sets a potentially tricky precedent outside of the West Coast state:

"what’s to stop other states from requiring the same thing regarding how [social media] companies deal with other issues, like LGBTQ content."

Get access to the rest of this edition of EiM and 200+ others by becoming a paying member