Why "censorship" is complex
High error rates during moderation enforcement often lead to user frustration and accusations of censorship. As Meta's Nick Clegg showed this week, these issues are not just technical but political, and rarely understood outside T&S teams. That poses a problem.
'Error rates are too high', OpenAI's red teaming research and Nighat's story
The week in content moderation - edition #273
What the T&S community needs to hear (in your own words)
Many Trust and Safety professionals need wisdom and encouragement more than ever right now. The people who took part in my TrustCon zine project provided just that.
Teen social media ban passed, Bluesky beefs up mod team and detecting algospeak
The week in content moderation - edition #272
Fighting complexity with auditability
Could a more transparent, collaborative, and adaptable policy enforcement model combat cries of 'censorship' and empower diverse online communities to self-moderate? Open-source policy documents and AI-driven auditability might hold the key.
The prevalence of AI-generated sexual content, Meta's new safety model & Carr appointed
The week in content moderation - edition #271
How I’m talking to my kid’s school about phones and social media
Conversations about kids and digital safety are often clouded by moral panic and oversimplification. By exploring the trade-offs of digital devices and platforms, we can empower parents and schools to make decisions that prioritise both safety and connection for young people.
Safety concerns drive X departures, new policy tracker & Oversight Board announce CEO
The week in content moderation - edition #270
The four T&S horsemen of the Trumpocalypse
For the last few years, we — as a society — have been getting to grips with how to balance safety, self-expression, and privacy online. With Trump’s election in the US and safety regulation in Australia and the UK, we’re finally getting some answers.