Content policy is basically astrology? (part two)
Large language models (LLMs) promise to enforce policy rules more consistently, with no room for exceptions. But history (and my experience at Grindr) shows that is rarely how the world works.
Meta’s most banned word, ad targeting vs moderation and new civility research
The week in content moderation - edition #241
Content policy is basically astrology? (part one)
Moderating with large language models could open up new ways of thinking about content policy, moderation, and even the kinds of platforms that are possible. But there may be downsides too.
The risks of internet regulation, Oz puts six platforms on notice and J. Nathan Matias on the up
The week in content moderation - edition #240
How Trust & Safety teams can do more with less
Trust & Safety leaders across the industry are being asked to do more with less. As a result, the role of T&S teams and the strategies their leaders pursue are changing.
Regulating Chinese platforms, T&S software market prediction and Roth joins Match Group
The week in content moderation - edition #239
Trust & Safety is how platforms put values into action
By being more explicit about which values are important (and why), platforms can make it easier for users to decide if the platform is the place for them.
Investigating algorithmic moderation, Turkey turns on platforms and art vs moderation
The week in content moderation - edition #238
What Trust & Safety research could do better
A new paper looks at why there's a disconnect between peer-reviewed scholarship and on-the-ground T&S practice.
How to audit platform algorithms, Supreme Court round-up and TikTok reshuffle
The week in content moderation - edition #237