The changing nature of verification, Brazil to investigate Musk and labelling AI content
The week in content moderation - edition #243
Don't fall into the T&S ROI trap
Sometimes we have to invest in Trust & Safety because it’s the right thing to do, not because there will be a return on investment. Here are some suggested alternatives to traditional ROI calculations.
'Rinse and repeat' policies, YouTube's Indian election issue and regulatory thresholds
The week in content moderation - edition #242
Content policy is basically astrology? (part two)
Large language models (LLMs) promise to enforce policy rules more consistently, without allowing for exceptions. But history, and my experience at Grindr, shows that this is rarely how the world works.
Meta’s most banned word, ad targeting vs moderation and new civility research
The week in content moderation - edition #241
Content policy is basically astrology? (part one)
Moderating with large language models (LLMs) could open up new ways of thinking about content policy, moderation, and even the kinds of platforms that are possible. But there may be downsides too.
The risks of internet regulation, Oz puts six platforms on notice and J. Nathan Matias on the up
The week in content moderation - edition #240
How Trust & Safety teams can do more with less
Trust & Safety leaders across the industry are being asked to do more with less. As a result, the role of teams and leadership strategy is changing.
Regulating Chinese platforms, T&S software market prediction and Roth joins Match Group
The week in content moderation - edition #239
Trust & Safety is how platforms put values into action
By being more explicit about which values are important (and why), platforms can make it easier for users to decide whether it's the place for them.