Reaction to Meta's T&S about-turn, safety tech acquisition and Discord bot builders
The week in content moderation - edition #276
I read the State of Safety Tech report so you don’t have to (but you should)
An important, and growing, part of the Trust & Safety landscape is the support and expertise provided by technology services companies (aka vendors). A new report sheds more light on them
Regulators unveil new rules, Anthropic's 'bottom-up' safety tool and Bluesky ban reckoning
The week in content moderation - edition #275
What the T&S community predicts for 2025
Over the last 12 months, T&S leaders and practitioners have seen unprecedented change in an industry that many have worked in for years and many others are brand new to. I asked a few of them what their predictions were for 2025
Why "censorship" is complex
High error rates during moderation enforcement often lead to user frustration and accusations of censorship. As Meta's Nick Clegg showed this week, these issues are not just technical but political, and they are rarely understood outside T&S teams. That poses a problem.
'Error rates are too high', OpenAI's red teaming research and Nighat's story
The week in content moderation - edition #274
What the T&S community needs to hear (in your own words)
Many Trust & Safety professionals need wisdom and encouragement more than ever right now. The people who took part in my TrustCon zine project provided just that
Teen social media ban passed, Bluesky beefs up mod team and detecting algospeak
The week in content moderation - edition #273
Fighting complexity with auditability
Could a more transparent, collaborative, and adaptable policy enforcement model combat cries of 'censorship' and empower diverse online communities to self-moderate? Open-source policy documents and AI-driven auditability might hold the key
The prevalence of AI-generated sexual content, Meta's new safety model & Carr appointed
The week in content moderation - edition #272